

How Silly is Climate Change Denial?


Once again you come up empty.

Maybe you will have better luck with sea levels.




Hollywood and the media have helped create a popular perception that man-made global warming is causing dramatic sea level rises. This perception comes from an exaggeration of more modest, though still dramatic, computer model predictions of 1-2 metre rises by the end of the 21st century. However, the actual experimental data shows, at most, a slow and modest increase in sea levels, which seems completely unrelated to CO2 concentrations.

The main estimates of long-term sea level changes are based on data from various tidal gauges located across the globe. These estimates apparently suggest a sea level rise of about 1 to 3mm a year since records began. This works out at about 10-30cm (4-12 inches) per century, or about a 1 foot rise every 100-300 years, hardly the scary rates implied by science fiction films like The Day After Tomorrow (2004) or Waterworld (1995).

Importantly, the rate still seems to be about the same as it was at the end of the 19th century, even though carbon dioxide emissions are much higher now than they were during the 19th century.

Moreover, there are a number of problems in using the tidal gauge data which have not been resolved yet. So, despite claims to the contrary, it is still unclear if there has actually been any long term trend! In this essay, we will summarise what is actually known about current sea level trends.


1. Introduction
Under conditions of global warming, sea levels are expected to rise for two main reasons:

  1. In general, when liquids warm, they tend to expand. Therefore, if the oceans warm up, their volume should also increase, leading to a rise in the global sea level.
  2. If global warming causes glaciers or ice sheets (i.e., ice on land) to melt, then the meltwater should increase the ocean volume.
Note that sea ice melting or forming shouldn’t alter ocean volumes, as the ice is already floating on the oceans. Due to Archimedes’ principle, if floating ice melts, it doesn’t increase the volume of water. You can test this for yourself by placing a few ice cubes in a glass of water. If the ice cubes are floating (i.e., not stacked), then when the ice melts, the water line will remain at the same point.
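As a rough numerical check of this, here is a minimal sketch using approximate textbook densities for fresh water and ice (it ignores the small complication that real sea ice floats in salty water rather than fresh water):

```python
# Illustrative check of Archimedes' principle for floating ice in fresh water.
# Densities are approximate textbook values (kg/m^3).
RHO_ICE = 917.0     # ice
RHO_WATER = 1000.0  # fresh liquid water

ice_mass = 1.0  # kg of floating ice (arbitrary illustrative value)

ice_volume = ice_mass / RHO_ICE          # total volume of the floating ice
displaced_volume = ice_mass / RHO_WATER  # a floating body displaces its own mass of water
meltwater_volume = ice_mass / RHO_WATER  # the same mass as liquid water, once melted

print(f"Ice volume:       {ice_volume * 1000:.3f} litres")
print(f"Displaced volume: {displaced_volume * 1000:.3f} litres")
print(f"Meltwater volume: {meltwater_volume * 1000:.3f} litres")
# The meltwater exactly fills the volume the ice was displacing,
# so the water line does not move.
```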
For the same reasons, global cooling should similarly lead to falling sea levels. So, if global temperatures have been changing dramatically over the centuries, then we might expect this to have also caused substantial changes in global sea levels.

As we discuss on this website, many people believe that global temperature trends have been dominated by “man-made global warming”, and that this man-made global warming will become increasingly dramatic over the next century. For this reason, there is a widespread belief that increasing concentrations of CO2 are leading to unusual rises in sea level.

A number of the man-made global warming computer models have tried to simulate how much “sea level rise” to expect from man-made global warming, e.g., Meehl et al., 2005 (Abstract; Google Scholar access); Jevrejeva et al., 2010 (Abstract; Google Scholar access); Jevrejeva et al., 2012 (Abstract; Google Scholar access).

Vermeer & Rahmstorf, 2009 (Open access) believe that the computer models underestimate future sea level changes. So, they didn’t actually simulate sea level changes, but instead estimated how much sea level rise they would expect from man-made global warming, and then used computer model predictions of temperature changes, to predict that sea levels will have risen by 0.8-2 metres by 2100. (The blogger Tom Moriarty has heavily criticised the Vermeer & Rahmstorf, 2009 study on his Climate Sanity blog)


Figure 1. In theory, if a large mass of glaciers or ice sheets melted, this could cause a global sea level rise. Similarly, if glaciers or ice sheets expanded, this could cause a global sea level fall. Photo of Antarctica's Mt. Herschel by Andrew Mandemaker, taken from Wikimedia Commons. Click to enlarge.

Other researchers, such as Dr. James Hansen of NASA GISS, have hypothesised that man-made global warming will be so strong that it could cause the large ice sheets on Greenland, East Antarctica or West Antarctica to suddenly melt, leading to sudden and dramatic sea level rises of several metres, e.g., Hansen, 2005 (Abstract; .pdf available on NASA GISS website). Hansen and others have been promoting these scary scenarios since the early 1980s (e.g., see this New York Times article from August, 1981), and it is from such sources that the Hollywood stories seem to originate (e.g., Hansen was the main scientific advisor for Al Gore’s An Inconvenient Truth film and book).
So, are these scary model predictions reliable, and should we be worried?
Well, in this post, we will forget about the models, and look at what the actual data says. We will find that the data suggests that at most sea levels have risen by 15-20cm since the end of the 19th century. That’s not particularly dramatic. But, if you believe in man-made global warming theory, then you might say “aha, that’s due to CO2, and it will get worse”.

However, the apparent sea level rise seems to have been relatively constant over the last century. If it was due to CO2, we would expect to see a dramatic acceleration since the 1950s, as CO2 concentrations increased. Since this doesn’t seem to have occurred (despite some claims to the contrary – see Section 5), it suggests that the apparent sea level rise is a naturally-occurring phenomenon (perhaps due to natural global warming).

We will also find that there are a number of serious problems with the available sea level data. As a result, much (perhaps all) of the apparent sea level rise might be due to problems with the data. In other words, we do not actually know if there have been any significant sea level trends over the last century!

Astute readers will object and complain that there must have been some sea level trends over the last century. This is because, from the discussion above, we would expect to see sea level changes, since global temperatures do seem to have changed over the last century (whether the temperature trends are man-made or natural in origin). However, as we will discuss in the next section, this is not necessarily the case.

We might expect “global warming” (i.e., an increase in average surface air temperatures over a few decades) to lead to a rise in global mean sea levels. But, for the reasons we will discuss in Section 2, it is also theoretically possible that it could have no detectable net effect on global mean sea levels, or even lead to a net fall! Hence, when we look at the actual sea level records in Sections 3, 4 & 5, we should avoid biasing our analysis with our own views of what we think “should happen”:

“If a man will begin with certainties, he shall end in doubts. But, if he will be content to begin with doubts, he shall end in certainties” – Francis Bacon, Sr. (1561-1626)


2. Problems predicting global sea level changes
The density problem
The density of a liquid tells you the volume that a given mass of that liquid occupies. If the total mass of a liquid in a container (e.g., an ocean basin!) remains constant, but its density increases, then the volume of that liquid will decrease. Hence, the maximum height of the container that is reached by the liquid will decrease. In our case where the “container” is an ocean basin, this would mean a fall in the “global mean sea level”. Similarly, if the liquid density decreases, the maximum height reached will increase. This is the basis for the first theoretical prediction for the effects of global warming mentioned in Section 1 – if global warming causes the oceans to heat up, this should (in theory) cause sea levels to rise, from “thermal expansion”.


Figure 2. The anomalous expansion of liquid water with cooling at temperatures less than 4 °C means that the bottom of a frozen garden pond can remain relatively warm, if it is deep enough. Schematic taken from Wikimedia Commons. Click to enlarge.

One problem with the thermal expansion theory is that the relationship between temperature and density is different for water than for most liquids. Like most liquids, when you cool pure liquid water from high temperatures, its density steadily decreases. However, unlike most liquids, water actually reaches its maximum liquid density several degrees above its freezing point, i.e., at 4°C instead of 0°C. Between 0°C and 4°C, water actually expands as it cools.
This is why ice (frozen water at 0°C or less) floats! It is also why fish that can survive at 4°C can overwinter in a garden pond that has ice on the top, by staying near the bottom of the pond (see Figure 2). This means that if global warming uniformly warmed up all of the oceans by 0.5°C (for example), some parts would certainly expand (water above 4°C), but other parts would actually contract (water above 0°C, but less than 4°C).
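To put rough numbers on this, here is a minimal sketch using approximate textbook densities for pure water (seawater, being salty, behaves somewhat differently, as discussed next). Warming the coldest water makes it denser, so it contracts; warming already-warm water makes it less dense, so it expands.

```python
# Approximate densities of pure water (kg/m^3) at selected temperatures.
# These are standard textbook figures, quoted here as illustrative assumptions.
density = {2.0: 999.94, 4.0: 999.97, 20.0: 998.21, 25.0: 997.05}

def volume_change_percent(t_start, t_end):
    """Fractional volume change (%) of a fixed mass of water warmed from t_start to t_end."""
    # For a fixed mass, volume is inversely proportional to density.
    return (density[t_start] / density[t_end] - 1.0) * 100.0

# Cold water (2 C -> 4 C): density rises, so the water contracts on warming.
print(f"2 C -> 4 C:   {volume_change_percent(2.0, 4.0):+.4f} % volume change")

# Warm water (20 C -> 25 C): density falls, so the water expands on warming.
print(f"20 C -> 25 C: {volume_change_percent(20.0, 25.0):+.4f} % volume change")
```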


Figure 3. Maps of ocean salinity at different ocean depths - top: surface; middle: 1km deep; bottom: 4km deep. Note that the colour scales are different for each map. Also, some areas are grey in the lower maps, because the ocean floor is not that deep in those areas. Taken from NOAA NODC's World Ocean Atlas (2009). Click to enlarge.

Another problem is that the oceans are not pure freshwater. As anyone who has swum in the sea knows, seawater is salty. The amount of salt in seawater (known as its “salinity”) affects both its density and its freezing point.
Salty water freezes at lower temperatures than pure water – that’s why we grit roads with salt if we’re expecting icy conditions. Salty water is also more dense than pure water. For this reason, the density of seawater depends not only on its temperature, but also its salinity. As can be seen from the maps in Figure 3, the salinity of the oceans varies across the world, e.g., the Atlantic Ocean is slightly saltier than the Pacific. This regional variability also varies with depth.

Indeed, this complex dependence of ocean density on both temperature and salinity is believed to be one of the main drivers of the ocean circulation, as redistribution of more dense and less dense sea water leads to the various “thermohaline circulation” patterns. [The name “thermohaline” is derived from “thermo” for temperature and “haline” for salinity].

As a result, the effects of global warming or global cooling on ocean densities are complex, and still poorly understood. For instance, suppose the oceans were to uniformly heat up or cool down by 0.5°C. If that were to occur, then the density change at a particular spot would depend on not just the temperature change, but also the absolute temperature and salinity at that spot. In reality, global warming or cooling (whether man-made or natural in origin) is unlikely to occur uniformly throughout the oceans, e.g., temperature changes usually vary with latitude and depth, and depend on ocean circulations.

Effects of human activity on water storage

Figure 4. To meet water requirements for expanding populations, many groundwater pumps have taken to pumping water from further and further below ground. In some cases, they may be extracting water that has been underground for thousands of years. Schematic of water age taken from Wikimedia Commons. Click to enlarge.

Another complexity is that the actual amount of water involved in the “water cycle” may have changed over the years. Rapid expansion in groundwater exploitation (e.g., using well water) occurred during 1950–1975 in many industrialized nations and during 1970–1990 in most parts of the developing world – see Foster & Chilton, 2003 (Open access), for a discussion. After this groundwater has been extracted from the ground, it is used and recycled. Eventually, much of it will end up in the oceans. In this way, humans could be significantly raising sea levels by taking groundwater that had until recently been trapped underground, and putting it back into the water cycle.
If this phenomenon is significant, it would mean that the sea level rise which had been specifically attributed to global warming has been overestimated, e.g., see Sahagian et al., 1994 (Abstract).


Figure 5. Have humans been reducing sea levels by building so many dams? Photo of the reservoir next to the city of Embalse, Córdoba, Argentina taken from Wikimedia Commons. Click to enlarge.

On the other hand, humans have also built a lot of dams to store water, particularly during the 20th century. Perhaps by doing so, we are reducing sea levels by preventing water from returning to the oceans. If this phenomenon is significant, then it would mean that the sea level rise which had been attributed to global warming may have been underestimated, e.g., see Gornitz et al., 1997 (Abstract; Google Scholar access).
Other types of human activity, such as deforestation/reforestation, could also be having significant effects on global sea levels (either increases or decreases). So, since the 1990s, a number of groups have tried to calculate these various contributions to changes in sea levels, although typically most studies have concentrated on just one contribution at a time.

The relative contributions of these different factors have been a subject of much debate, which seems to be ongoing. For example, studies such as Chao et al., 2008 (Abstract; Google Scholar access) claim human dam building has led to an underestimate of sea level rises due to global warming, while other studies, such as Wada et al., 2010 (Abstract; Google Scholar access) argue that ground water extraction has led to an overestimate of sea level rises due to global warming.

Effects of climate change on water storage
The above discussion concerned the effect on global sea levels of human activity changing how much water is stored on land. This should not be confused with possible changes in water storage on land caused by climate change itself (whether man-made or natural).

Climate change could involve changes in the water cycle, thereby altering how much and how long water stays on land instead of in the sea, i.e., the amount of ground water and soil water. It could also affect the amount of snowfall. In addition, global warming or cooling (the most commonly thought of examples of climate change) could decrease or increase the length of time snow remains unmelted (thereby altering how long water stays “trapped” on land).

These climate-related land storage effects could be significant for global sea-levels. Unfortunately, there seem to be very few direct experimental measurements of the factors involved, and so the only studies of these effects seem to have been computer modelling studies based on weather “reanalysis” datasets (e.g., ERA-40).

These studies often yield contradictory results. For instance, Milly et al., 2003 (Open access) used computer simulations and results from the CMAP reanalysis of precipitation levels to calculate that climate-related changes in water storage on land were causing a sea-level rise of about 0.12 mm/year in the period 1981-1998 (although, they admitted they couldn’t calculate an error bar for that estimate). But, Ngo-Duc et al., 2005 (Abstract; Google Scholar access) obtained a much smaller value of 0.08 mm/year for the same period. They also found that if they looked over the longer period of 1948-2000, there was no significant trend in sea-level rise or fall, and that the 0.08 mm/year sea-level rise they calculated in the period 1981-1998 was probably due to natural variability.
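To relate such land-storage changes to these mm/yr figures, a useful back-of-envelope conversion is that about 360 km³ of water spread over the roughly 3.6 × 10⁸ km² of ocean surface corresponds to about 1 mm of global mean sea level. The sketch below assumes that approximate ocean area; the 40 km³/yr example is purely hypothetical.

```python
OCEAN_AREA_KM2 = 3.61e8  # approximate global ocean surface area (km^2), an assumed round figure

def volume_to_sea_level_mm(volume_km3):
    """Convert a water volume (km^3) into an equivalent global mean sea-level change (mm)."""
    depth_km = volume_km3 / OCEAN_AREA_KM2  # spread the volume uniformly over the oceans
    return depth_km * 1e6                   # 1 km = 1e6 mm

# Roughly 360 km^3 of water corresponds to about 1 mm of global mean sea level.
print(f"{volume_to_sea_level_mm(360.0):.2f} mm")

# Example: a hypothetical net transfer of 40 km^3/yr from land storage to the oceans.
print(f"{volume_to_sea_level_mm(40.0):.3f} mm/yr")
```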

What else?
In this section, we discussed several mechanisms whereby the expected sea level rises (or falls) from global warming (or cooling) might not actually occur. There may be other mechanisms which we haven’t mentioned, too. For instance, if global warming were to increase the volume of water in the oceans by causing glaciers or other ice bodies to melt, this would cause the weight of water in the oceans to increase. But, in doing so, this could in turn cause the ocean floors to sink, and thereby slightly reduce the expected sea level rise.

Before considering all the complexities mentioned above, it might have seemed a relatively easy problem to calculate how much sea level rise (or fall) to expect for a given global warming (or cooling) of, e.g., 0.5°C. This seems to be the popular assumption, even amongst climate scientists. But, in reality, it is a very complex problem. Readers should remember the American satirist, H. L. Mencken (1880-1956), who observed that:

There is always an easy solution to every human problem – neat, plausible, and wrong. - Henry Louis Mencken, 1917

We appreciate that many people like to have an easy answer to what they consider a simple question. But, unfortunately, if a simple answer is too simplistic, it is often wrong. With this in mind, rather than trying to make simplistic models to understand sea level changes, perhaps a better approach is to look at what the experimental data actually says. This is what we will attempt to do in the next sections.
 
3. Tidal gauge estimates

Figure 6. Photo of a tidal gauge at a harbour in Alaska, by Daniel Cornwall. Taken from Flickr, under Creative Commons (BY-NC-SA 2.0). Click to enlarge.

The main data sources available for estimating sea level changes are records from tidal gauges, such as the one in Figure 6. The Permanent Service for Mean Sea Level or PSMSL, based in Liverpool, UK (est. 1933), maintains a database of monthly and annual tidal gauge records from around the world. Most tidal gauge-based sea level studies use this PSMSL archive.
Unfortunately, tidal gauges do not actually record “global sea levels”. Instead, they only tell us about local, relative sea level changes. As we will discuss in this section, this makes it extremely difficult to reliably estimate what global, absolute sea level changes have been.


Figure 7. Hourly tide gauge measurements for Malin Head, Ireland for 1999. Data taken from University of Hawai'i Sea Level Center (UHSLC). Click to enlarge.

Anybody who has lived near the coast will know that tides can rise and fall by several metres in a given day. For instance, for the Malin Head, Ireland data in Figure 7, the average daily range for 1999 varied from 1 to 4 metres, with a mean of 2.5 metres. However, the sea level rises which have been proposed by man-made global warming theory supporters are measured in millimetres/year (mm/yr). To investigate these small changes, researchers typically calculate the mean sea level, averaged over the year for a given station (3018mm in 1999 for the Malin Head station). These annual means are then studied to see if there are any long term trends from year to year (or more importantly decade to decade).
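As an illustration of how such an annual mean can be computed from hourly readings, here is a minimal sketch. The file name and column names are assumptions for illustration only; the actual UHSLC data format differs.

```python
import pandas as pd

# Hypothetical input: hourly tide gauge readings (mm) with a timestamp column.
# The file name and column names are illustrative assumptions, not the UHSLC format.
df = pd.read_csv("malin_head_hourly_1999.csv", parse_dates=["time"]).set_index("time")

# Daily tidal range: highest minus lowest reading each day.
daily = df["sea_level_mm"].resample("D")
daily_range = daily.max() - daily.min()

# Annual mean sea level: average of all hourly readings in each calendar year.
annual_mean = df["sea_level_mm"].groupby(df.index.year).mean()

print(f"Daily range: {daily_range.min():.0f}-{daily_range.max():.0f} mm, "
      f"mean {daily_range.mean():.0f} mm")
print(annual_mean)  # e.g. 1999 -> roughly 3018 mm for Malin Head
```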

Figure 8. Location of the 524 tidal gauges whose linear trends have been calculated by PSMSL. Gauges with negative ('falling') or positive ('rising') trends are indicated. Click on image to enlarge.

The PSMSL have calculated linear trends for 524 of their tidal gauge stations (available here). They stress that they have not applied any correction for vertical land movement or assessed the validity of any individual fit.
Moreover, we have also argued elsewhere that linear trends should be treated cautiously when the data shows non-linear trends, as many tidal gauges do. Nonetheless, the linear trends do offer us a crude method of seeing how global the apparent “global” sea level rise is.
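For illustration, such a linear trend is simply an ordinary least-squares fit of a station's annual mean sea levels against time. The sketch below shows the calculation with a made-up station record; it is not the PSMSL's actual processing.

```python
import numpy as np

def linear_trend_mm_per_yr(years, annual_means_mm):
    """Ordinary least-squares slope of annual mean sea level against year (mm/yr)."""
    years = np.asarray(years, dtype=float)
    levels = np.asarray(annual_means_mm, dtype=float)
    ok = ~np.isnan(levels)  # skip years with missing data
    slope, _intercept = np.polyfit(years[ok], levels[ok], deg=1)
    return slope

# Illustrative (made-up) record: a gauge rising ~2 mm/yr with year-to-year noise.
rng = np.random.default_rng(0)
yrs = np.arange(1950, 2011)
series = 7000 + 2.0 * (yrs - 1950) + rng.normal(0, 20, yrs.size)
print(f"Fitted trend: {linear_trend_mm_per_yr(yrs, series):+.2f} mm/yr")
```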

In the map in Figure 8, the gauges are sorted into those with negative trends (i.e., suggesting falling sea levels) and positive trends (i.e., suggesting rising sea levels). Although most of the gauges show positive trends (396 out of 524, i.e., 76%), nearly a quarter show negative trends (128 out of 524, i.e., 24%). Also, we can see that much of the globe has no data. The so-called “global” sea level rise is not as global as you might think.


Figure 9. Histogram of the PSMSL linear trends. Click to enlarge.

We also can see from the histogram in Figure 9 that it isn’t just a case of dividing stations into “rising” or “falling” sea levels – there is actually quite a broad distribution of trends.
The mean average of all the linear trends is slightly positive (+1.0 mm/yr, with a standard error of 0.1 mm/yr), but there are a large number of gauges with substantially lower or higher trends.

So, there is a major problem in calculating what the overall global (called “eustatic”) sea level trend is – different tidal gauges suggest linear trends ranging from as much as -10 mm/yr to +10 mm/yr. In other words, there is no single “global” value.

Readers who found the Hollywood stories of films such as The Day After Tomorrow, Waterworld or An Inconvenient Truth scary, might like to note that the mean average value of +1.0mm/yr is not quite as dramatic. To put it in context, +1.0mm/year would mean an average rise of 10cm per century. At that rate, it would take a thousand years to rise 1 metre!

Nonetheless, what if we forget about linear trends, and instead average together the annual values from year to year? This data is again available from the PSMSL’s website – see Woodworth & Player, 2003 (Abstract).

We calculated the annual deviations of each PSMSL gauge from its 1961-1990 average (the 30 year period which most stations have data for). Then, we simply averaged together the deviations of all gauges with data for each year, starting with 1807 (when the Brest, France records began). Note that we didn’t bother calculating a gridded average – this is just the simple arithmetic mean of the deviations.
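In code, the procedure just described amounts to something like the following sketch. The file name and data layout are assumptions for illustration only; the real PSMSL annual files need some parsing and quality checks first.

```python
import numpy as np
import pandas as pd

# Hypothetical layout: a table of annual mean sea levels (mm), one column per
# PSMSL station, indexed by year. File name and column layout are assumptions.
annual = pd.read_csv("psmsl_annual_means.csv", index_col="year")

# Deviation of each station from its own 1961-1990 average (the common baseline).
baseline = annual.loc[1961:1990].mean()
deviations = annual - baseline

# Simple arithmetic mean (no gridding) over all stations reporting in each year,
# plus a standard error for the error bars shown in Figure 10.
composite = deviations.mean(axis=1, skipna=True)
n_stations = deviations.notna().sum(axis=1)
std_err = deviations.std(axis=1, skipna=True) / np.sqrt(n_stations)

print(composite.tail())
print(n_stations.tail())
```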


Figure 10. Simple mean relative sea level trends of the PSMSL stations in the map above. Error bars represent the standard errors of the annual means. Data taken from Permanent Service for Mean Sea Level (PSMSL), 2012, 'Tide Gauge Data', extracted from the database on 18 Jul 2012. See Woodworth & Player, 2003 (Abstract). Bottom panel shows the number of stations with data for each year. Click to enlarge.

The results are shown in Figure 10 (along with the number of stations available in each year). If the trends at all tidal gauges were entirely due to changes in global sea levels, then the graph should tell us what the long term trends since the 19th century have been.
The graph does seem to suggest that there has been a significant sea level rise since the 19th century, as had been claimed. However, it doesn’t seem to be related to CO2 concentrations, because most of the rise occurred in the mid-19th century, yet the rise in CO2 concentrations only started to become significant around the mid-20th century. Also, it suggests that the highest sea levels occurred in 1899, i.e., at the end of the 19th century!

Astute readers will complain that the number of stations with data for the mid-19th century was very low (bottom panel in Figure 10). It was only around the 1950s that the number of stations started to reach modern levels. So, the estimates from before then are not particularly reliable. We totally agree.

However, that means that the estimates which have been claiming there has been a significant sea level rise “since the 19th century” are also unreliable. In other words, the data for before the second half of the 20th century is very limited. This means we can’t really use the tidal gauge data for comparing sea levels during the recent 1980s-2000s warm period to those during the earlier 1920s-1940s warm period, for instance.

Moreover, there are other problems with the tidal gauge data…

You might have noticed from Figure 8 that some parts of the world showed a lot of “falling” stations, while other parts showed a lot of “rising” stations. This suggests that much of the apparent sea level changes are localised, and therefore not an indication of global sea level changes. The biggest difficulty in using tidal gauges to study global sea level trends is separating local changes from global changes.

To properly appreciate the problem, it is worth thinking in more detail about what exactly a trend on a tidal gauge indicates.

Suppose a particular tidal gauge shows an apparent trend of +3mm/yr. What does that mean? Your first guess might be that sea levels are rising. But, tidal gauges are located on land, so if the land (where the gauge is located) moves up or down over time, this would cause an apparent change in the relative sea level, without the sea level actually changing.

So, an apparent “rising” (or “falling”) trend in a tidal gauge record might actually be due to any one of several factors:

  1. The land is sinking (or rising).
  2. Local sea levels have changed.
  3. There is an instrumental error or change, e.g., the gauge or dock itself has moved, or been moved.
  4. Global sea levels have changed.
  5. A combination of the above.
In the next section, we will discuss why the first factor can seriously bias estimates of global sea level trends.


4. Is the sea rising or is the land falling?
Tectonic activity

Figure 11. Schematic diagram of the main tectonic plates, generated by Topinka, USGS/CVO, 1997. Taken from US Geological Survey website (a US government website). Click to enlarge.

Why would the land move? There are actually many reasons. For instance, many coastlines are in tectonically active areas, particularly those on the Pacific coast. Indeed, as can be seen from Figure 11, the edges of the Pacific Ocean are so active that they are often referred to as the “Pacific Ring of Fire”. Tectonic activity tends to be quite slow, i.e., of the order of a few mm/year. But, remember that is a similar rate to the tidal gauge trends.
When the tidal gauges in Figure 8 are compared to the tectonic map in Figure 11, it becomes apparent that a surprisingly high percentage of the tidal gauges are near plate boundaries. Many readers will be familiar with the tragedies caused by recent earthquakes in Christchurch, New Zealand (2010, 2011), Haiti (2010), or the 2011 earthquake near Japan, whose ensuing tsunami caused the Fukushima nuclear plant to break down. Tidal gauge records which have been subjected to such an event are unlikely to be reliable.

Fortunately, earthquakes and other dramatic tectonic land movements might be detectable in once-off jumps in tidal records. So, a close inspection of the records could overcome most of that problem.

A far more insidious problem is identifying the gradual movements (a few mm/yr) which are continuously occurring wherever tectonic plates are colliding. Earthquakes are relatively rare, but in between such events the plates continue to move – just very slowly. If a coastline is gradually rising or falling due to plates colliding, it would cause the tidal gauges to show an artificial “sea level” trend.

Only a few areas with tidal gauges seem to be far enough away from plate boundaries for it not to be a possible factor, e.g., northern Europe, eastern North America, central Pacific islands. Some of these areas are tectonically active anyway, e.g., the central Pacific islands of Hawaii (US) are volcanic in origin. And even regions which are not traditionally associated with tectonic activity can also show geological movement at relatively high rates, e.g., using a GPS study, Dokka et al., 2006 (Abstract; Google Scholar access) found that the land in southeast Louisiana (USA), including New Orleans and the larger Mississippi Delta, is naturally subsiding at a rate of -5.2 ± 0.9 mm/yr.

Unless the tectonic land movement at each station is accurately calculated, it is very difficult to estimate what the real sea level changes have been at these stations. Some work has been done in recent years to try and do this. One approach has been to place GPS detectors near tidal gauges, e.g., Wöppelmann et al., 2007 (Abstract; pre-print version). Another approach is to compare tidal gauge-based estimates to satellite-based estimates, e.g., Ostanciaux et al., 2012 (Abstract; Google Scholar access). However, it turns out that most of the sea-level studies have simply neglected the problem, often making the excuse that it is too hard to properly solve.

Some researchers try to remove stations which are in specifically earthquake-prone areas, but generally ignore the fact that non-earthquake prone areas in tectonically active areas may also be slowly moving. For example, Holgate, 2007 (Abstract; Google Scholar access) chose 9 long records, and excluded earthquake prone stations, but two of his nine stations were San Diego, California (US) and Honolulu, Hawaii (US), both tectonically active areas.

Recovery from the peak of the ice age

Figure 12. Estimates of the Northern Hemisphere ice coverage at the height of the Last Glacial Maximum compared to modern summers. Taken from NOAA NCDC's Paleo Slide Set: The Ice Ages. Click to enlarge.

20,000 years ago, the Earth was at the height of a glacial period, and glaciers are believed to have stretched as far south as France and the British Isles in Europe and New York in the US (see model estimates in Figure 12). It is believed that as these glaciers grew, their increasing weight slowly pushed the tectonic plates down into the underlying mantle at rates of a few mm or cm a year.
When these glaciers melted at the start of the Holocene, about 10,000-15,000 years ago, that weight was no longer there, and the plates would have started to “rebound”. A number of researchers have argued that this “post glacial rebound” is relatively slow, and is still taking place today.

The last glacial period is popularly referred to as “the ice age”, as in the popular children’s movie series “Ice Age“. However, in glaciological terms “ice age” refers to a geological period where there are extensive ice sheets in both the southern and northern hemispheres. Since there are currently ice sheets on Greenland (northern hemisphere) and Antarctica (southern hemisphere), as well as a number of mountain glaciers in both hemispheres, we technically are still in “an ice age”!
The current ice age is thought to have begun about 2.5 million years ago, and has alternated between periods of extensive glaciation, known as “glacial periods”, and periods of more modest glaciation, like we have today, known as “interglacial periods”, roughly every 100,000 years or so. The current interglacial period (known as the “Holocene”) started 10,000-15,000 years ago, and the previous interglacial (known as the “Eemian”) occurred about 115,000-130,000 years ago. The height of the last glacial period occurred about 20,000 years ago, and is known as the “Last Glacial Maximum”.


Figure 13. Schematic of the post-glacial rebound effect. Click to enlarge.

As can be seen from the schematic in Figure 13, the rebound could still be causing some areas to rise (making sea levels seem to “fall”) and other areas subside (making sea levels seem to “rise”), depending on where they lie on the moving plates.
Indeed, if we closely look back at the map of the “rising”/”falling” tide gauges in Figure 8, we can see that some areas which would have been under or near the ice sheets during the glacial era show mostly “falling” trends (e.g., Fennoscandia in northern Europe, Alaska in US), while neighbouring areas show mostly “rising” trends (e.g., the parts of northern Europe south of Fennoscandia, northeastern North America).

Some groups have tried to develop models of the rebounding land, so that sea level researchers can apply “Glacial Isostatic Adjustments” (GIA) to their data to correct for the effects. [An “isostatic” sea level change is a local change, as opposed to “eustatic” or global sea level changes]. The most popular one is that developed by Richard Peltier et al., and the current version is called ICE-5G – see Peltier, 2004 (Abstract; Google Scholar access) and Peltier’s website or the PMIP-2 website. However, the problem is that these models are just that – models. We might know roughly what is happening, but establishing exactly which areas are rising and falling, and by how much, is tricky. We can get some idea of this from the fact that Peltier’s model has gone through several different versions before the current one.

As a result, it is still unclear how accurate the models are, e.g., which parts (if any) of the British Isles are rising or falling, and is the Mediterranean Sea too far south to be affected or not? A number of groups have suggested that there are substantial inaccuracies in the current models, e.g., see the Ostanciaux et al., 2012 paper mentioned earlier. This means that the model adjustments which have been applied in sea level studies may have been inadequate (i.e., failed to remove all of the post glacial rebound effects), or even inappropriate (i.e., removed a “falling” trend from a region which was actually “rising”, or vice versa).

Coastal subsidence
Another systematic problem in tidal gauge analyses of sea levels is coastal subsidence. Coastlines have always been dynamic, and over the millennia they can subside or rise. In themselves, these natural trends could be sufficient to bias tidal gauge estimates of sea levels. But, human activity can also substantially reinforce these natural trends, e.g., land reclamation, groundwater extraction, etc.

Syvitski et al., 2009 (Abstract; Google Scholar access) recently carried out a study of 33 delta regions associated with large metropolitan areas around the world. They concluded that urban development was causing many of these areas to subside.

Syvitski et al., pointed out that trends at tidal gauges in delta regions depend on several factors:

  1. The rate at which land is building up in delta regions, due to sedimentation (a process known as “aggradation“). They found this rate typically varies from +1 to +50 mm/yr.
  2. Rates of natural compaction of the land in the region. When soil first forms or is deposited in an area, it is often loosely packed. But, over the years, it can settle, causing the land to compact. Syvitski et al. suggested that this rate is typically less than -3 mm/yr.
  3. Rates of accelerated compaction. If humans are extracting water, oil or gas from the area (“subsurface mining”), are altering soil drainage (e.g., irrigation or drainage of land), or just generally altering the local land use, this could dramatically speed up the natural compaction rate. Syvitski et al. mentioned that the Chao Phraya Delta has shown compaction of -50 to -150 mm/yr from groundwater withdrawal, while the Po Delta has subsided 3.7m in the 20th century, 81% of which has been attributed to methane mining in the area.
  4. Rates of vertical movement of the land surface. In addition to the tectonic activity and post-glacial rebound factors mentioned above, Syvitski et al. also noted other factors, such as long-term (millennial) geological subsidence of the land. They argued these rates were typically about 0 to -5 mm/yr.
  5. And, finally, global mean sea level trends, i.e., the bit we’re trying to calculate!
Syvitski et al. assumed that the last component had been reliably determined by the IPCC, and so for their study they used the IPCC’s estimate of a global sea level rise of +1.8 to +3.0 mm/yr. However, as we have seen throughout this section, the tidal gauge estimates the IPCC used to estimate global sea level trends are contaminated by local trends, such as tectonic activity, post-glacial rebound… and the coastal subsidence that Syvitski et al. identified!
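To see how these components combine into what a delta tide gauge actually records, here is an illustrative budget. The individual rates are representative values picked from the ranges quoted above, not measurements for any real delta, and the assumed eustatic trend is just one of the contested estimates discussed in this essay.

```python
# Illustrative relative sea-level budget for a hypothetical delta tide gauge (mm/yr).
# Positive = land surface rising; negative = land surface sinking.
aggradation            = +2.0   # sediment build-up
natural_compaction     = -2.0   # slow settling of young sediments
accelerated_compaction = -8.0   # e.g. groundwater or hydrocarbon extraction
other_vertical_motion  = -1.0   # tectonics, post-glacial rebound, long-term subsidence

eustatic_rise = +2.0            # assumed global mean sea-level trend (the unknown!)

land_motion = (aggradation + natural_compaction
               + accelerated_compaction + other_vertical_motion)

# What the gauge records is sea level *relative to the land* it is bolted to.
relative_trend = eustatic_rise - land_motion
print(f"Net land motion:                        {land_motion:+.1f} mm/yr")
print(f"Apparent 'sea level rise' at the gauge: {relative_trend:+.1f} mm/yr")
```

In this made-up example, local subsidence dominates the record, and the gauge would report a "sea level rise" several times larger than the assumed global trend.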


Figure 14. Various studies have found that land in the Mississippi delta (USA) has been subsiding for thousands of years. Photo of Mississippi River travelling through New Orleans taken from Wikimedia Commons. Click to enlarge.

The factors Syvitski et al. considered include natural processes which have been occurring independently of human activity. For instance, Törnqvist et al., 2008 (Abstract; Google Scholar access) calculate that the natural compaction of Holocene sediments which reached the Mississippi Delta (USA) after the melting of glaciers at the end of the Last Glacial Maximum has been resulting in subsidence of up to -5 mm/yr over the last 1200-1600 years.
They also include processes related to nearby human development. Syvitski et al. found that human engineering (e.g., construction of levees, redirection of rivers and the construction of modern dams) has made the rivers less muddy for many of the deltas they studied. While this might have positive aesthetic effects, it has reduced the rate of aggradation, meaning that the natural build-up of sediments in delta regions has been artificially reduced.

In many delta areas, there has been a lot of groundwater extraction to meet the water demands of the expanding populations who live there. This has led to considerable subsidence of the land, e.g., Holzer & Gabrysch, 1987 (Abstract; Google Scholar access). More purely commercial activities, such as oil or gas drilling, can also cause subsidence, e.g., Morton et al., 2002.

In other words, many delta regions have been steadily subsiding from both natural and human activity-related processes. This will have introduced an artificial “sea level rise” trend into the tidal gauge records for those areas, which is actually due to the local land subsiding. As these processes are occurring in areas across the world, it will mistakenly introduce a “global sea level rise” bias into estimates constructed from tidal gauges.

Changes in local sea levels
One factor which can significantly influence local sea levels is the weather. Therefore, if weather patterns change, this could also influence local sea levels.

Changes in wind patterns can be particularly influential. For instance, Ryan & Noble, 2006 (Open access) found a strong correlation between wind and sea level changes over an 18 year period at three tidal gauges on the west coast of the USA.

Another major factor is the atmospheric pressure at sea level (sometimes called “sea level pressure” or “barometric pressure”). The weight of the atmosphere pushing down on the oceans varies with the atmospheric pressure at sea level. So, we might expect that when this pressure increases, sea levels slightly fall, and vice versa. Indeed, Heyen et al., 1996 (Open access) found a strong correlation between atmospheric pressure and winter sea levels in the Baltic Sea, and Bergant et al., 2005 (Open access) found strong correlations between monthly sea levels and atmospheric pressures along the Adriatic coast, particularly in the winter. The exact relationship between sea levels and atmospheric pressures is still being debated, e.g., Mather et al., 2009 (Abstract; Google Scholar access), but it may be significant.
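A commonly used rule of thumb for this pressure effect is the "inverted barometer" approximation: local sea level falls by roughly 1 cm for every 1 hPa rise in sea-level pressure above its long-term mean. The sketch below assumes that approximate coefficient; as just noted, the exact relationship is still being debated, so treat this as an order-of-magnitude illustration only.

```python
# The commonly quoted "inverted barometer" approximation: local sea level falls by
# roughly 1 cm for every 1 hPa rise in sea-level pressure above its mean.
IB_MM_PER_HPA = -9.9  # approximate response, mm of sea level per hPa (assumed value)

def inverted_barometer_mm(pressure_hpa, mean_pressure_hpa=1013.25):
    """Approximate local sea-level response (mm) to a sea-level pressure anomaly."""
    return IB_MM_PER_HPA * (pressure_hpa - mean_pressure_hpa)

# Example: a deep winter low of 980 hPa sitting over a tide gauge.
print(f"{inverted_barometer_mm(980.0):+.0f} mm")  # roughly +330 mm, i.e. sea level raised
```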

In addition to atmospheric circulation affecting sea levels, changes in ocean circulation can also have effects. We saw in Section 2 that water densities (and hence volume) are quite variable throughout the oceans. This leads to thermohaline circulation patterns. But, the flipside of this is that changes in the thermohaline circulation patterns can alter local water densities, and hence local water volumes, i.e., local sea levels.

Ocean and atmospheric circulations are often strongly interconnected, most famously during the so-called El Niño events. Using satellite data (see Section 5), Nerem et al., 1999 (Abstract) found that during the 1997–1998 El Niño, global mean sea levels rose by 20 mm, and then fell by as much! But, regionally, these large sea level changes varied by different amounts. So, the effects of such events on local trends would vary from tidal gauge to tidal gauge.


Figure 15. Schematic of the influence of the monthly lunar cycle on tides. Image by KVDP and Surachil, based on an image by Richard Vooren and Paul Van den Keybus. Taken from Wikimedia Commons. Click to enlarge.

The interactions between the Earth’s rotation and the gravitational pull of the moon (and also the sun) are the main drivers of the tides, and hence the heights of the tides (i.e., the “sea level”) recorded at the tidal gauges.
Most of the variability in this gravitational pull occurs over time scales of less than a year. For example, at most places, high and low tides generally occur twice a day, and the main lunar cycle (“full moon” to “new moon”) only takes about a month (roughly 29.5 days). In fact, the word “month” is derived from the word “moon”. So, you might suppose that this wouldn’t affect annual trends. However, there do also seem to be lunar and solar cycles which take place over longer timescales, e.g., the 18.6 year lunar cycle. It is possible that such changes could significantly influence decadal tidal gauge trends, e.g., see Gratiot et al., 2008 (Abstract; Google Scholar access) or Currie, 1987 (Abstract).

Summary of tidal gauge data
Tidal gauge records are still the main data source for researchers analysing global sea level changes. Unfortunately, while they do offer relatively long records, they are severely limited by the fact that tidal gauges measure the relative height of the land to the local sea levels in the area.

In order to use tidal gauges to reliably estimate global sea level changes, researchers have to successfully separate the components of shifting land heights and local sea level variability from any global trends. These are not trivial problems, and attempts to solve them have usually involved making questionable assumptions.

Researchers have known of all of the above problems for decades, and the PSMSL even warn users of their data about most of these problems here. But, it seems that in a race to find evidence of the global sea level rises predicted by man-made global warming models, a number of researchers have underestimated how problematic the data is. Prof. Robert W. Stewart had even warned against this tendency in the late 1980s:

“The prospects of a forthcoming climate change induced by the effects of [increasing greenhouse gas concentrations] is receiving increasing attention, both in the scientific literature and in the popular media. It is now commonplace to learn from the media about “the future sea-level rise” without any expressed scepticism and often with considerable alarm. Much of the technical literature also takes future sea-level rise as established fact.

However, what are we to say if most sea-level changes are not associated with any very recent climate change, but are, in fact, the result of crustal movements? We had better get it right!” – Stewart, 1989 (Open access)

Unfortunately, his warning seems to have been largely ignored.

Despite the various problems with the tidal gauge data, it is possible that the various estimates of global sea level trends of 1-2 or maybe 2-3 mm/year might coincidentally be correct. But, it is likely that these estimates are strongly biased by local effects, and that actual global trends have been overestimated or underestimated. It is even possible that all (or most) of the apparent trends are local effects, and there have been no significant long term global trends in recent decades.

Whatever the case, until the above issues have been adequately resolved, tidal gauge-based estimates of “global mean sea level” trends should be treated with extreme caution.
 
5. Satellite estimates

Figure 16. Artistic rendering of the TOPEX/Poseidon and subsequent Jason satellite missions. Taken from Wikimedia Commons. Click to enlarge.

From Sections 3 & 4, it should be clear that attempting to extract genuine “global mean sea level trends” from the local trends of tidal gauges is highly problematic, and generally requires making a number of subjective assumptions and/or approximations. Satellite measurements, on the other hand, are not limited to local, coastal trends – they provide data from the entire oceans (well, almost – individual satellites generally have a “blind spot” they can’t measure due to their orbit path, e.g., near the poles). So, there has been considerable optimism that the results of various satellite missions since the early 1990s will provide more reliable estimates of global trends, e.g., see this 1990 report by UNESCO.
In August 1992, a US-French collaboration launched the TOPEX/Poseidon satellite altimeter. This satellite ran until 2006, but in 2001, a second satellite (“Jason-1”) was launched, which is still running. Several other satellite altimeters have also been launched, and the data from these have been used to estimate global mean sea level trends since 1993. Various different estimates from these different satellites are available from the AVISO website.


Figure 17. Global sea level trend estimates from the merged satellite datasets. Data taken from AVISO website. All datasets have had inverted barometer adjustments applied. Click to enlarge.

Composite estimates constructed by combining data from all of the satellites (see Figure 17) suggest there has been a global mean sea level trend of about +2.9 mm/yr (or +3.2 mm/yr, depending on the adjustments applied) over the entire satellite period (1993-present).
These trends are larger than the 20th century trends calculated from the tidal gauge estimates discussed in the previous sections, which were mostly in the range +1.0 to +2.0 mm/yr. Some researchers have argued that the higher trends from the satellite measurements prove that there has been an “acceleration” in sea level rise, e.g., Church & White, 2006 (Abstract; Google Scholar access) or Cazenave & Nerem, 2004 (Abstract; Google Scholar access).

Some of these researchers had been predicting an acceleration on the basis of their man-made global warming models, e.g., one of the authors of Church & White, 2006 had already been predicting an acceleration in Church et al., 1991 (Open access). So, they seem to believe that their predictions have been vindicated. This has also been used as another “proof” of man-made global warming.


Figure 18. Global sea level trend estimates from the four satellite datasets. Data taken from AVISO website. Of the optional adjustments, all datasets have had inverted barometer adjustments applied, but seasonality has been kept and no Glacial Isostatic Adjustment has been applied. AVISO no longer provide the raw data used by Morner, 2004. Click to enlarge.

Remarkably, however, as newer satellites have been introduced to replace the TOPEX/Poseidon estimates, they have shown less and less sea level rise each time (see Figure 18). This is the opposite of the claim that the alleged sea level rise is “accelerating” due to man-made global warming. Indeed, when Church & White, 2011 (Open access) attempted to update the Church & White, 2006 paper mentioned above, their reported “acceleration” had slightly decreased.
So, are the satellite estimates reliable? Well, in order to answer that, we have to learn a little bit about how they were actually constructed.

Unfortunately, satellite altimeters don’t actually measure sea levels directly. Instead, they measure the length of time it takes radar signals sent from the satellite to bounce off the sea surface and return. In general, the longer the signal takes, the further the satellite is from the sea surface. So, in theory, this measurement could be converted into a measure of the sea surface height, i.e., the mean sea level.


Figure 19. Gravity anomaly map from NASA's GRACE (Gravity Recovery And Climate Experiment) measurements. The gravitational pull felt by a satellite varies as it travels over the Earth's surface. Therefore, the distance of a satellite from the Earth can vary depending on location. Taken from Wikimedia Commons. Click to (slightly) enlarge.

However, the conversion is complicated, and a number of other factors need to be estimated and then taken into account. For instance, the distance of the satellite from the Earth’s surface varies slightly as it travels along its orbit, because the gravitational pull of the Earth is not exactly uniform – see the Wikipedia page on “geoid”, and the maps in Figure 19.
So, in order to convert a particular “satellite-sea surface distance” into a sea level measurement, the “satellite-Earth’s surface distance” also needs to be independently measured, e.g., using the DORIS system.

Another complexity is that the radar signals take slightly longer to travel through moist air than through dry air. So, the water vapour concentrations associated with a given satellite reading also need to be estimated, and accounted for.
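Schematically, the conversion works roughly as in the sketch below. All the numbers and the simplified single-correction form are illustrative assumptions, not real TOPEX/Poseidon values; real altimetry processing involves many more correction terms (ionosphere, dry troposphere, sea state bias, tides, and so on).

```python
# Schematic of the altimetry calculation (all numbers are illustrative assumptions).
C = 299_792_458.0  # speed of light in vacuum, m/s

def sea_surface_height_m(two_way_travel_time_s, satellite_altitude_m,
                         wet_troposphere_delay_m, other_corrections_m=0.0):
    """Sea-surface height above the reference surface, from one altimeter pulse."""
    # Raw range: half the round-trip time at the speed of light.
    raw_range = C * two_way_travel_time_s / 2.0
    # Water vapour slows the pulse, making the range look too long, so an estimated
    # wet-troposphere delay (plus any other path corrections) is subtracted.
    corrected_range = raw_range - wet_troposphere_delay_m - other_corrections_m
    # SSH = independently tracked satellite altitude minus the corrected range.
    return satellite_altitude_m - corrected_range

# Illustrative numbers only: ~1336 km orbit, ~8.9 ms round trip, ~0.2 m wet delay.
print(f"{sea_surface_height_m(8.912e-3, 1_336_000.0, 0.20):+.2f} m")
```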

As a result, satellite estimates of sea levels involve the use of complex models, approximations, other measurements and calculations. Unfortunately, this means that if there are problems in any of those stages, it could introduce artificial biases into the estimates, possibly making them unreliable… or even worse, wrong.


Figure 20. Raw satellite-based trends in global mean sea levels over the period 1992-2000, according to Morner, 2004. Click to enlarge.

Mörner, 2004 (Abstract; Google Scholar access) managed to track down a graph of the raw satellite trends from the TOPEX/Poseidon satellite up to 2000. When he looked at this graph (Figure 20), he didn’t see much of a “sea level rise”. Instead of the +2.8 or +3.1 mm/yr trends commonly reported, it appeared to him that sea levels had been essentially constant from 1993 to 1996. He agreed that from 1997 to 1999, there were considerable sea level changes. But, they comprised falls as well as rises, and were probably related to the unusual 1997-98 El Niño event.
Mörner, 2004 was a controversial paper, and several of the researchers involved with the TOPEX/Poseidon analysis objected to Mörner’s analysis, e.g., Nerem et al., 2007 (Abstract). However, surprisingly, these objections were not over his claim that the raw satellite data showed little trend. They agreed with Mörner that the original satellite data didn’t show much of a sea level rise. Instead, their objection was that he should have used their adjusted data. They felt the raw data was unreliable, and had developed a series of adjustments which they believed made the trends more realistic.

For example, Keihm et al., 2000 (Abstract; Google Scholar access) had decided that the TOPEX satellite was showing an instrumental negative drift of 1.0-1.5 mm/yr between October 1992 and December 1996. So, they adjusted the data by adding a positive trend of 1.0-1.5 mm/yr to that period. Chambers et al., 2003 (Abstract; Google Scholar access) decided that even more negative biases were introduced when the TOPEX satellite switched to its backup instrument in February 1999. So, they introduced more adjustments. This set of adjustments increased the apparent sea level rise from +1.7 mm/yr to +2.8 mm/yr. Neither set of adjustments affected the period January 1997-January 1999, but as Mörner had noted the raw data already showed significant variability for that period due to the 1997-98 El Niño event. Finally, they believe that an adjustment of +0.3 mm/yr is necessary to account for Peltier’s Glacial Isostatic Adjustments (see Section 4).
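To see how much difference adjustments of this size can make to a fitted trend, here is a purely illustrative sketch with synthetic data. None of the numbers are the real TOPEX values; the point is only that a drift correction confined to the early part of a short record can noticeably raise the overall trend.

```python
import numpy as np

# Illustrative only: a synthetic "raw" altimeter series with essentially no trend,
# to which a drift correction like the one described above (roughly +1.2 mm/yr
# applied up to the end of 1996) is added.
t = np.arange(1993.0, 2001.0, 1.0 / 12.0)     # monthly time axis, 1993-2000
rng = np.random.default_rng(1)
raw = rng.normal(0.0, 3.0, t.size)            # flat sea-level anomaly plus noise (mm)

correction = np.where(t < 1997.0,
                      1.2 * (t - t[0]),       # correction ramps up to the end of 1996
                      1.2 * (1997.0 - t[0]))  # accumulated offset then held constant
adjusted = raw + correction

print(f"Raw trend:      {np.polyfit(t, raw, 1)[0]:+.2f} mm/yr")
print(f"Adjusted trend: {np.polyfit(t, adjusted, 1)[0]:+.2f} mm/yr")
```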

It turns out that almost all of the +2.8 mm/yr (or +3.1 mm/yr if Peltier’s post-glacial rebound adjustments are applied) sea level rise in the 1993-present satellite estimates are due to adjustments! The raw data (which no longer seems to be in the public domain) doesn’t show much of a trend, after all.


Figure 21. Figure 1 of Mörner, 2008. Mörner argues that we should treat the unadjusted TOPEX/Poseidon trends of roughly 0 mm/yr as the actual instrumental record, rather than using the +3 mm/yr trends of the adjusted estimates. Click to enlarge.

Mörner, 2008 (Abstract) responded to his critics by pointing out that these adjustments were too subjective. For this reason, he recommended that we should stick to the unadjusted trends.
It is plausible that the unadjusted trends are unreliable, as Nerem et al., 2007 claimed. However, that doesn’t automatically mean that Nerem et al.’s adjustments are valid! If the adjustments are either inappropriate or inadequate, then the adjusted trends may be the unreliable ones. Or perhaps, both the adjusted and the unadjusted trends are unreliable…

If we are to rely on Nerem et al.’s claim that the adjustments are an improvement, we should look at their justification. Remarkably, it seems that their main justification for applying the adjustments has been that they make their estimates better match the tidal gauge estimates, even though the reason why the satellite measurements were being carried out was because the tidal gauge estimates were unreliable!

For instance, Mitchum, 1998 (Open access), and later Mitchum, 2000 (Abstract) found that the tidal gauges were showing a sea level rise of 1.0-1.5 mm/yr more than the TOPEX satellite. But, rather than taking that as evidence that the tidal gauge estimates were unreliable (as we discussed in Section 3 & 4), he concluded that the satellite was at fault! He decided there must be some sort of drift bias in the satellite estimates.

Motivated primarily by Mitchum’s conclusion, Keihm et al., 2000 (Abstract; Google Scholar access) actively tried to come up with something that could cause a “drift” in the satellites, and eventually decided that a temporary problem in the “TOPEX Microwave Radiometer path delay measurements”, which stopped in December 1996 could do that. So, they adjusted the data by adding a positive trend of 1.0-1.5 mm/yr for that period.

Chambers et al., 2003 (Abstract; Google Scholar access) decided that there could be a negative 7.3mm bias in February 1999, when the TOPEX satellite switched to its backup instrument. Again, to justify applying another positive adjustment (this time, nearly doubling the trend from +1.7 mm/yr to +2.8 mm/yr), they relied on a comparison with tidal gauge trends!

One of the main motivations for developing a satellite-based estimate of sea level trends was that it would be an independent estimate which wouldn’t be affected by the problems affecting the tidal gauge estimates. Indeed, Ostanciaux et al., 2012 (Abstract; Google Scholar access) explicitly assumed that the satellite estimates were independent of the tidal gauge estimates, in order to use them to identify problems in the tidal gauge estimates!

But, it turns out that the only satellite estimates which are actually independent of the problematic tidal gauge records are the raw, unadjusted estimates. As Mörner had pointed out, these estimates don’t show much of a sea level rise.

This effectively leaves us with two choices:

  1. We can choose to accept the unadjusted estimates as reliable (as Mörner recommended). In this case, global sea levels don’t seem to have shown much of a trend since 1993, after all.
  2. We can accept Nerem et al., 2007’s claim that the unadjusted estimates are unreliable. But, since we know from Section 3 & 4 that the tidal gauge estimates Nerem et al. used for justifying their adjustments are problematic, we can’t rely on the adjusted estimates either. In other words, we then don’t have any reliable satellite estimates of sea level trends!

6. Final remarks
Should people living by the coast worry about sea levels? Well, there is no evidence that increasing CO2 concentrations are causing anything unusual with regards to sea levels. But, there are still the natural hazards which have been around since long before the Industrial Revolution, e.g., tsunamis and storm surges.


Figure 22. There is some evidence that the devastating flood of 1607 in south Wales, UK, was due to a tsunami. Photo of woodcut engraving by Prof. Simon Haslett. Taken from Flickr under Creative Commons (CC BY-NC-SA 2.0). Click to enlarge.

Tsunamis are relatively rare, but their effects can be utterly devastating, as we witnessed after the tragic 2004 Indian Ocean tsunami. The devastation of tsunamis is likely to become an even greater problem in the future, as the world’s population increases, and more and more people live and vacation in coastal areas.
While most tsunamis occur in the Pacific Ocean (a tectonically active region), they can also occur in other oceans. For instance, Haslett and Bryant have suggested that the devastating floods in Wales, UK in 1607, illustrated in Figure 22, were due to a tsunami, e.g., see Haslett & Bryant, 2002.

As we discuss in our “Is man-made global warming causing more hurricanes?” essay, storm surges from hurricanes and other tropical cyclones have always been a serious problem for coastal-dwellers, and as populations increase, the problem will only become greater in the future.

So, we should be investing in improved storm protection for coastal dwellers, and more research into developing better tsunami detection and communication networks. But, this has nothing to do with CO2 or man-made global warming theory – tsunamis and storm surges are naturally occurring hazards.

Also, as we discussed in Section 4, the land in many urban areas near delta regions has been steadily subsiding, often due to human activity, e.g., groundwater extraction, irrigation. Unfortunately, this subsidence seems to be particularly pronounced in heavily urbanized delta regions, i.e., places where a lot of people live – see Syvitski et al., 2009 (Abstract; Google Scholar access).

So, we should also be concerned about the harmful effects of land subsidence, and ideally we should try to minimise them. But, we should remember that land subsidence has nothing to do with “man-made global warming”. People who believe we can somehow help stop land subsidence in at-risk areas by reducing our “carbon footprint” are mistaken.
 
Flipper, not sure why you are posting this to a Climate Change Denial thread - care to say? Is it because of the focus on coal power plants? :confused:

I did watch a good portion of this video and find nothing within it: there is no evidence, just assertions. Not persuasive. As you know I have been doing some reading due to your interest in the matter. I'm pretty much in the unconvinced camp at this point.

I did check out MetaBunk on the video and there is a thread on the video as it happens: Claim: Chemtrails are Coal Ash | Metabunk

Mick West is pretty scathing: "It's a nonsensical idea, almost reads like a hoax, or some kind of misguided attempt to indirectly get chemtrail enthusiasts to focus on coal power plants. It's such a stream of things that are wrong that it's hard to know where to start - or even if it's worth addressing at all.

One can point out numerous problems with the theory - like the fact that spraying a highly abrasive powder through the engines would destroy them, and the fact that the optical density of the trails would require more mass than planes could carry, and so must be water from the atmosphere. Then there's the fact that the trails persist and spread like contrails do, whereas a sprayed powder would quickly dissipate.

"But these are objections that have been ignored before, and the fundamental point here is they are trying to fit an explanation to a misunderstanding (contrail persistence). So it's somewhat pointless to address this silly theory, and it would be better if we could get back to addressing the persistence of contrails."

Please note the post after Mick West's post that indicates that the 'chemmies' are also questioning the video: "Members - there is a new video out from The HAARP Report, titled "Chemtrails are COAL ASH." Until the background sources on it are disclosed, I do not want it posted. [...] If you unlocked a secret and a solution here, good. Tell us where you came up with these facts please. [...] Here is my primary concern. This information isn't sourced. If activists start to watch/photo/try to gain entry to power plants they could be mistakenly taken as someone who is suspect for potentially interfering with the nation's grid. That as you know, is a serious offense. This worries me. We all want to find the source of the materials and operations serving the geoengineering aerosols, but this piece (video) is reckless in my opinion. He needs to show proof this is the smoking gun.
**As of 1PM MST 2/2/15, he has still not answered my questions."
 
I posted here because I did not think the video was scientific enough to post on your New World: Climate Change. Your research is right on here. Good Work and THANKS. I will reconsider posting on Metabunk. I agree what is being sprayed is not slag but a very sophisticated polymer. I have a great deal of respect for your conclusions because I think that you are trying to come to the truth in the matter. Do you care to post some of what you have been reading? Any way you have me thinking:)
 
I posted here because I did not think the video was scientific enough to post on your New World: Climate Change. Your research is right on here. Good Work and THANKS.
Still unclear how 'chemtrails' are relevant to Climate Change. Either thread seems off-topic but maybe I'm missing something. :confused:
I will reconsider posting on Metabunk.
It's an excellent site imo. Very rigorously upheld posting protocols. No trolling possible there. It's kept very 'clean' for the science.
I agree what is being sprayed is not slag but a very sophisticated polymer.
What evidence for a 'polymer'? Shall we see each other over on Metabunk?
I have a great deal of respect for your conclusions because I think that you are trying to come to the truth in the matter. Do you care to post some of what you have been reading? Any way you have me thinking:)
High praise - thank you - but you ascribe to me too much. On the face of it my bias is counter to the theory as I begin, so my 'quest' may not be so 'pure'.

IMO None of the theory hangs together - the planes, the delivery system - the whole of it. It has fantastical elements. The conspiracy aspect is especially a red-flag. There's been hoaxing, too - bogus pictures of the supposed delivery system, for example. Reads very much like some aspects of ufology, though I do not doubt the sincerity of most of the people subscribing to this view.

Anyway, Metabunk is a good place to start.
 
Thank you for the opportunity to explain myself:
Climate change means long-term changes in weather patterns. Since I have been watching persistent contrails turn the sky milky white for at least seven years, I would say that this is changing long-term weather patterns. The sunlight now looks like it is coming through a translucent window. We might get one, two, three, even four days before they start doing the grid pattern. When they do the grid pattern, by the end of that day the sky will be milky white, and then within three to four days we will get some form of precipitation. They fly all over the sky, four to six planes, sometimes side by side, sometimes one behind the other, but mostly different planes in different parts of the sky. How long can this go on before we can say this is changing the climate?

When I started investigating this, those who said it was not happening were saying that under certain conditions contrails could persist. Now that people are seeing that persistent contrails are the norm, they no longer say that. If the atmosphere has changed so that contrails are persistent, as some argue, then is this not climate change?

I too have a great many problems with the idea that there is a conspiracy. It makes no sense to me to turn the sky from the deep blue I remember to the milky blue sky that I now live under. Those in the conspiracy would have to live under that same dim sun. I cannot imagine anyone would want that.

Most people are like the weatherman on the weather channel. He was explaining the weather when he said he had taken the people who do the weather forecasting outside to see what the weather looked like. He was impressed by the beauty of the day, so he showed us, the watchers, a picture of the bright blue sky which had amazed him. In the centre of the picture was a huge contrail which he could not see. There was no conspiracy; he just could not see it.
I will do my best to find the research about polymers and any scientific evidence
 

I understand your thinking now.

What I see is more pollution. More planes. Different kinds of planes. But to the 'polymer' -

I will do my best to find the research about polymers and any scientific evidence

Metabunk continues to be a good source. I searched for 'polymer' on the site and came up with this - a rebuttal by Senior Member Jay Reynolds, who has a history following this story of the chemtrails. Strongly urge reading the entire two pages of the thread from Metabunk - I have only excerpted the parts that seem relevant to me. Will be interested in your opinion.

LINK: How did barium get into chemtrails? | Metabunk

TEXT: "This is an historical and factual examination of how the claim that chemtrails contain barium came about.
Historical: Chemtrail Central :: View topic - An Oral History of Chemtrails
June 15th, 2000
A.C. Griffith introduces his idea that "electroactive polymer fibers" are in contrails, which he attempts to support by quoting a DARPA document which refers to these fibers being used in robotics and image processing. He also issues a veiled threat to scientists and government employees whom he mistakenly believes are "spraying" fibers on them: "If the government told the people of the nature of the experiments, I believe the citizens would physically attack the government employees and scientists involved." : Electroactive Polymer Fibers in Military Projects - Research Forum

[...]

June 16th, 2000: Griffith uses the phrase, "The experiments are layered from the ground of the earth up to the ionosphere and beyond." This phrase is found within "The Spotlight" article: Chemtrails Said Part Of Top Secret Military Maneuvers Over America

"A member of the research group told The Spotlight that "an over-whelming array of ongoing military research and development and defense-related activity is layered from ground level into space,"
Electroactive Polymer Fibers in Military Projects - Research Forum

"July 6, 2000: Posted by Clifford Carnicom "on behalf of Griff" were this photo and text entitled "Enemy Radar View".
ENEMY RADAR VIEW in Barium Research Forum

"Rather than being the work of either A.C. Griffith or Carnicom, both the image and the text was taken from this 1996 "Popular Mechanics" article. : http://books.google.co.vi/books?id=...g=PA26#v=onepage&q="enemy radar view"&f=false

"In order to alter the original so that it served his purpose, Griffith has added the final sentence:" The Variable Terrain Radio Parabolic Equation (VTRPE) model has been tested and perfected after aerosol barium titanate salt mixture was released from military aircraft, forming chemical trails in the atmosphere across America. This was ONE experiment / project conducted with barium salt mixture in the atmosphere."

"At the same time, realizing that if people knew VTRPE was simply an equation used to approximate real-life situations of radar propagation, Griffith removed the phrase "A complex mathematical model that currently can run only on a supercomputer" from his version of the article. Griffith lifted a "Popular Mechanics" article, including an image, without attribution, and altered it, adding and deleting text. His goal was to impress readers with what he hoped they would accept as his "research". Within several days, I was able to uncover and expose the hoax for all to see.


"July 14th, 2000: Griffith introduces his idea that Barium is a component of contrails: "I have every reason to believe that a barium salt mixture has been released into the atmosphere - but not limited to barium salt. If I were not sure of what I'm telling you I would not post this document." However, Griffith proceeds to tell none of the reasons he believes this to be true.

[...]

"June 5, 2001: A.C. Griffith and several others created the following website called "Chemtrails Over America":
http://web.archive.org/web/20010812...quakker/Documents/Chemtrails_Over_America.htm

"The site claimed to have been produced by NSA and CIA associated people, and claimed that chemtrails were four projects:
1. Reduce solar radiation due to an ozone crisis
2. "The most secretive project" is the Radio Frequency Mission Planner(RFMP)
and the Variable Terrain Radio Parabolic Equation(VTRPE)
3. Weather control using Haarp
4. Biological detection and decontamination programs


"The website goes into its greatest detail describing the RFMP and the VTPRE. The site includes many images and text yet does not link to any of them. The authors did this to achieve the appearance of secret knowledge being revealed, which is what they had opened with. There is, however, no secret about either the RFMP or the VTPRE. In a nutshell, the RMP and VTPRE are computer modeling programs and apps which help military planners determine if they wll have successful communications between given sites under particular weather and terrain conditions.

[...]

"Conclusion: Griff and his compadres engaged in some levels of deception. They deliberately omitted the original sources for the material they claimed was secret to give the appearance of having special knowledge. They have never shown proof of their CIA and NSA connections, indeed most of the participants remained anonymous and never explained any qualifications at all.

"Though they claimed to have "gotten inside the chemtrail program", they never named any participants, locations, dates, or anything of real value. This begs the question of why they would say anything at all if no usable information had been gathered? The whole set of claims essentially informs the readers of very little, and appears designed to simply develop a sense in the audience that the persons telling the story have credibility.

"Now that a decade has passed since this gambit was floated, we find the original participants have again gone under ground. Their website is defunct, the persons are no longer participating, and what occurred to start the "Barium In Chemtrails" idea is probably unknown to 99% of people who make the claim at present. I hope that this article can help the current crop of believers understand how easy that a wild claim with no real basis can become a tenet of faith underlying a movement."

 
From the Metabunk thread linked above - the entire 2-page thread is a good exploration of the origins of the 'polymer' and Chemtrail idea imo. What do you think, flipper?

A.C. Griffith dead at 72. His son Charles Griffiths wrote: "AC Griffith was my father. His actual name was Arvon Calcote Griffiths. He was not really an expert on any government activities. I find these untrue and intentionally misleading claims to be offensive with his passing. He served in the USAF and worked for the US postal service, not the NSA. It's a lie. He simply had a way of storytelling that captured the imagination and seemed like he knew everything about the subject. Ask me anything you want to know about his life. Charles Griffiths 4 days ago."

Metabunk Senior Member Jay Reynolds writes: "I just got off the phone with Charles Griffiths, who confirmed that the above quote is from him and is truthful. He told me that anyone can contact him through his youtube channel and he will tell them whatever they need to know about his father. He has been aware for thirty years that his father was becoming more and more into conspiracy culture and saying things that weren't true. With his father's passing, the stories that were told are an embarrassment to the family, and he wants them to stop. He is in possession of all his father's military and work-related records which show that he was never associated in any way with the CIA or NSA, and as he stated on youtube, A.C was only an enlisted man and postal service employee.

"Charles told me that A.C. was an attention seeker who was able to convince people with stories and that he had heard these stories his whole life. When the stories weren't believed, A.C. became alienated from his family and increasingly depended on internet contacts for friendship. A.C.'s companion, Kimberly Dawley, is living in the home with a dozen cats and will not leave, the situation was descibed as "hoarding", and he believes that she is also posting using his ID. He is seeking a court order to straighten these things out.

"I hope that Charles Griffiths will be able to find some closure on this episode. For now, this seems to be the end of the trail for the claims of A.C. Griffith, who I credit for initiating the hoax that barium was part of "chemtrails". The idea has become firmly engrained in the myth, which should be astonishing to anyone reading this account, but also holds a very good lesson for the people that have believed it and those who promote this hoax need to take in the lesson.

"In particular, the followers of Clifford Carnicom need to know that Carnicom has based much of what he says directly on the stories that were told to him by A.C. Griffith. The lesson here is that Carnicom has been manipulated and has willingly carried on the hoax material given to him by A.C. Griffith to many many others."
 
What I see is more pollution. More planes. Different kinds of planes. But to the 'polymer' -
When aircraft increase in numbers by the thousands and more people are flying by the millions, then there is going to be more air pollution in the upper atmosphere. ANY pilot flying since the 1960s can tell you of the dramatic atmospheric changes that have occurred, especially haze and visibility reductions "in general".

Whole "micro climates" are being effected over hundreds or even thousands of miles concerning atmospheric pollution from the ground too! Fly into cities and areas with major pollution problems, and it is easy to understand Humans are killing and dramatically altering their living environments. There was a PBS NOVA program that covered this issue.

How are the governments going to reduce the population of our planet? I see no visible efforts to seriously make this a reality. One suggestion: families with high incomes having more than two children should be fined with very high taxes. Those that are too poor should have to give up their 3+ children for adoption. Sterilization is an option for those that won't adhere to "the plan". Those that want more than two children and can afford it can adopt those children from overpopulation.

What I said above has an Orwellian shadow that we are casting upon ourselves, but what are the other options to reduce population that are more humane? Starvation? Wars? Sterilization? Radicalized Governments?

Growth is an engine of our worldwide economies, so there is going to have to be a change in those systems too!

Global leadership is key, but I haven't seen ANY leadership that can do this kind of worldwide change. Can that even happen within our lifespans?

Since evolution has its own solutions and its own plan, I fear the worst for humankind. Technology can NOT prevent these problems with such a rapid rise in human populations, when there is NO LEADERSHIP GLOBALLY to allow for technology to work in the limited time we seem to have to prevent massive traumas to the entire planet.

As you or someone posted today, Grass Roots politics are often co-opted by money interests too. The Tea Party is a perfect example.
 
What about the temperature data tampering, and turning a 30-year cooling trend into global warming?

That's fraud.

They jacked up the global temperatures by 0.5 degrees.

Homogenisation is supposed to take heat out of raw temperature data to account for the heat island effect, not add 0.5°C to raw global temperature data.
 
Satellite images do show obvious trends of melting ice in both polar regions and Greenland. What's worse, the permafrost is also warming across major polar regions. Whether some of the old data has been altered by some conspiracy or "science reasons", I don't think it alters the fact that polar regions are warming.

The primary concern and question is:

What can humankind do to prevent major crises caused by climate change in the near term, whether humans are to blame or not?

I'm very interested in Ocean "weather" too. Change the currents and temperatures and water levels of the oceans, and we're going to be screwed too, imo.
 
The 'crisis' is man-made alright, it's made right here.

5. Paper 2. An assessment of the NASA GISS urbanization adjustment method
Summary of Paper 2
The NASA Goddard Institute for Space Studies (NASA GISS) is the only group using weather records to construct global temperature estimates that explicitly adjusts their data to correct for urbanization bias. In Paper 2, we carefully studied and analysed their adjustment method to see if it works. We found that it doesn’t!
We identified several serious flaws in their adjustment method which make their adjustments unreliable, inadequate and inappropriate. We found that their adjustments actually introduced about as many biases as they removed.

In our paper we offer several recommendations which might help overcome some of these flaws. However, for now, the NASA GISS global temperature estimates are just as unreliable after adjustments as before.

The NASA Goddard Institute for Space Studies

Figure 20. NASA GISS is located in New York City, in one of the Columbia University buildings. Photograph taken from their website. Click on image to enlarge.

The NASA Goddard Institute for Space Studies (NASA GISS) is one of the five groups that currently publish global temperature trend estimates from weather station records, i.e., they produce one of the curves we showed you at the start of this essay in Figure 1.

They are located in New York City (NY, USA) in one of the Columbia University buildings (Figure 20).

Their offices are located above the iconic New York diner “Tom’s Restaurant”, which featured prominently in the popular TV sitcom “Seinfeld” and was the inspiration for Suzanne Vega’s 1987 a cappella song “Tom’s Diner”, which was in turn used by the inventor of the MP3 digital music format when he was developing the MP3 (see here).

The Goddard Institute has played a very prominent role on both sides of the debate over man-made global warming theory. The founding director of the institute, Dr. Robert Jastrow, who ran the institute from 1961-1981, was a major critic of man-made global warming theory until his death in 2008, e.g., see this 2001 essay.

However, Jastrow’s successor, Dr. James Hansen, who ran the institute from 1981-2013, was (and still is) a very vocal supporter of man-made global warming theory. Indeed, in his 1988 testimony to U.S. Congress he claimed:

It is time to stop waffling so much and say that the evidence is pretty strong that the greenhouse effect is here. – NASA GISS director, Dr. James Hansen, in testimony to U.S. Congress, 23rd June 1988

This 1988 testimony is believed to have been very influential in making man-made global warming theory a public concern, e.g., see here. In addition, he was one of the main scientific advisors to Al Gore for the popular 2006 An Inconvenient Truth film.

When Hansen and his colleagues first started publishing their global temperature trend estimates in the early 1980s, they did not seem to have considered urbanization bias, e.g., Hansen et al., 1981 (Abstract; Google Scholar access). Later, they recognised that it was a problem that needed to be considered, but on the basis of their Hansen & Lebedeff, 1987 (Abstract; Google Scholar access) study, concluded that it was a fairly minor problem. This was one of the 9 sets of studies we mentioned in Section 4, i.e., one of the papers we reanalysed in Paper 1.

However, by the late 1990s, they seem to have decided that it was probably a problem they should take more seriously. They decided to develop a computer program which would automatically search through the weather station records and apply adjustments to remove any urbanization bias.

Since 1999, they have been running this program on their data before they construct their global temperature trend estimates – see Hansen et al., 1999 (Abstract; Google Scholar access).

In the years since Hansen et al., 1999, they have made some modifications to their computer program, which they describe in Hansen et al., 2001 (Abstract; Google Scholar access) and Hansen et al., 2010 (Abstract; Google Scholar access), and on their “GISTEMP” website. We discuss these modifications and their significance in Paper 2.

When the Goddard Institute ran their program on the roughly 6,000 station records that they use, they found that the program made quite a lot of adjustments. However, the net effect of the adjustments on their global temperature trend estimates was very small (roughly -0.05°C/century). As a result, their “urbanization bias-corrected” global temperature trend estimates were pretty much the same as the estimates of the other groups who didn’t apply any urbanization bias corrections (you can see this by looking back at Figure 1, at the start of the essay).

This seems to have convinced many people that the urbanization bias problem is fairly negligible.

In Paper 2, we decided to carefully analyse this computer program, and check to see if it actually worked. We found that it doesn’t, and actually introduces as many biases as it removes! But, before we discuss our analysis, it will probably be helpful to outline exactly what their computer program does.

The basic idea of NASA’s approach
The basic idea behind the Goddard Institute’s adjustments is as follows:

  1. Divide all of their weather stations into two groups: “urban stations” and “rural stations”. Since 2010, they have been doing this on the basis of the night-light intensity in the location of the station (using satellite data).
  2. For every station that the computer identifies as “urban”, the computer works out an individual urbanization bias adjustment, using the following method:
    • A “rural average” is calculated for the urban station by averaging together the trends of all of the rural neighbours in a 500 km radius (or 1000 km, if there aren’t enough within 500 km).
    • The difference between the urban station record and the rural average is then calculated, and assumed to be “the urbanization bias”.
    • The urbanization adjustment is calculated by approximating the difference using a linear fit. However, because urbanization is not a simple linear process, they use a two-part adjustment. In other words, they calculate two linear fits (see the middle panel of Figure 21) – “Leg 1” is the linear fit for the first part of the record, and “Leg 2” is the linear fit for the second part of the record. The point marking the transition between Leg 1 and Leg 2 is adjusted by the program in order to optimise the fits.
  3. This adjustment is added to the urban station’s record (Figure 21), and the record is then assumed to have been adequately corrected.

Figure 21. Example of how NASA adjust station records to account for urbanization bias. The values in the middle panel are added to the red record in the top panel to give the blue record in the bottom panel. Click on image to enlarge.
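
To make the procedure described above concrete, here is a minimal Python sketch of a two-leg adjustment of this kind. It is not the GISTEMP code: the neighbour handling, the brute-force knot search and the use of two independently fitted legs (rather than a constrained, joined broken line) are simplifications made for illustration only.

```python
# Simplified sketch of the two-leg urbanization adjustment described above.
# NOT the GISTEMP program: neighbour selection, the brute-force knot search and the
# use of two independently fitted legs (instead of a constrained broken line) are
# simplifications made for illustration. `years`, `urban` and each rural record are
# assumed to be 1-D NumPy arrays on the same annual grid.
import numpy as np

def rural_average(rural_records):
    """Average the rural neighbours' series (the "rural average" in the text)."""
    return np.mean(np.vstack(rural_records), axis=0)

def two_leg_adjustment(years, diff):
    """Fit two straight lines ("Leg 1" and "Leg 2") to diff, trying every interior knot."""
    best = None
    for k in range(2, len(years) - 2):                       # candidate transition points
        fit1 = np.polyfit(years[: k + 1], diff[: k + 1], 1)  # Leg 1
        fit2 = np.polyfit(years[k:], diff[k:], 1)            # Leg 2
        resid = (np.sum((np.polyval(fit1, years[: k + 1]) - diff[: k + 1]) ** 2)
                 + np.sum((np.polyval(fit2, years[k:]) - diff[k:]) ** 2))
        if best is None or resid < best[0]:
            best = (resid, k, fit1, fit2)
    _, k, fit1, fit2 = best
    return np.where(np.arange(len(years)) <= k,
                    np.polyval(fit1, years),
                    np.polyval(fit2, years))

def adjust_urban(years, urban, rural_records):
    """Return the 'urbanization-corrected' urban record."""
    adj = two_leg_adjustment(years, urban - rural_average(rural_records))
    # Add-in-reverse convention: reference the fitted difference to its end point, so
    # recent values are left unchanged and the early years are shifted (warmed) instead.
    return urban - (adj - adj[-1])
```

The final line implements the “add in reverse” convention discussed below: the fitted urban-minus-rural difference is referenced to its end point, so the most recent values are left unchanged and the early part of the record is shifted instead.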

Currently, the Goddard Institute use about 6,000 stations, of which they identify about half as “urban” and apply adjustments to them. For about 200 of the urban stations, they do not have enough rural neighbours for their computer program to work, and so these unadjusted urban stations are not included in their global temperature estimates.

A surprising decision that the Goddard Institute made when they were writing their program was that they add their adjustments in reverse to their urban station records, rather than simply subtracting them.

To understand what we mean by this, have a careful look at the adjustments in Figure 21. At the start of the unadjusted Phoenix record, the average annual temperature was roughly 20-21°C. At the end of the record, this had risen to 24-25°C. The Goddard Institute’s computer program calculated that about 3°C of this warming was urbanization bias.

However, rather than subtracting this 3°C from the modern biased temperatures, their program adds 3°C to the early unbiased temperatures. In effect, the program is “rewriting history”, and pretending that Phoenix has always been 3°C warmer.

Now, you might argue that this doesn’t really matter. After all, they’re not interested in what the actual temperature is at Phoenix, they’re just interested in what the trend is, i.e., whether it is getting warmer or colder. However, it is an unnecessarily confusing approach.

In addition, because urbanization bias continues to increase from year to year, this means that they have to keep on increasing their adjustments every year, meaning that “history is continuously being rewritten”. There doesn’t really seem to be any good reason for taking this approach. On the contrary, it seems to have led to a lot of unnecessary suspicion and cynicism amongst the public, e.g., here.
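
A tiny numerical check of this point (the values are purely illustrative): subtracting the estimated bias from the recent, biased years and adding it to the early years produce exactly the same trend; only the absolute “history” differs.

```python
# Tiny numerical check (values purely illustrative): both correction conventions give
# the same trend; only the absolute temperatures of the early years differ.
import numpy as np

years = np.arange(1900, 2001)
bias = 3.0 * (years - years[0]) / (years[-1] - years[0])   # urbanization bias growing to 3 °C
urban = 20.0 + bias                                        # flat "true" climate plus the bias

subtract_recent = urban - bias                 # conventional fix: cool the biased recent years
add_to_early = urban + (bias[-1] - bias)       # GISS-style fix: warm the early years instead

print(np.polyfit(years, subtract_recent, 1)[0])   # ~0 °C/yr
print(np.polyfit(years, add_to_early, 1)[0])      # ~0 °C/yr  (identical trend)
print(subtract_recent[0], add_to_early[0])        # 20.0 vs 23.0: different absolute "history"
```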

At any rate, the most important thing is figuring out how successful NASA’s adjustments are. You might have assumed that this would have been one of the first things that they would have tested. However, surprisingly, they don’t seem to have ever checked if their adjustments were actually removing the urbanization biases from their data!

Instead, it seems that once they had written their computer program, they assumed that it would work. As we will see below, this was an unwise assumption, because their method doesn’t work.

Most of NASA’s adjustments are nonsensical
Do you remember how each of NASA’s adjustments consists of two parts, and that each of the parts is a linear adjustment? We can define each of these parts in terms of its “slope”, which tells us whether the line goes “up” as we go from left to right (“positive slope”) or “down” as we go from left to right (“negative slope”).

If you are unfamiliar with the difference between negative and positive slopes, here is a 3 minute explanation:

According to NASA’s computer program, the more negative the slope is, the more urbanization bias there is. So, for a typical urban heat island we would expect the first part of the adjustment to be slightly negative, and then the second part to be even more negative.


Figure 22. Adjustments applied by NASA to the Piura (Peru) station record. Click on image to enlarge.

However, bizarrely, about half of the linear adjustments their computer program calculates have positive slopes, e.g., Figure 22. In other words, the program is calculating the “urbanization bias” to be due to “urban cooling“, and not “urban warming”!!!

We decided to group all of NASA’s urbanization adjustments into four different types, depending on the slope of each part:

  • Type 1: The slopes for both parts are negative, e.g., Figure 21. This is the normal expected adjustment, because it would remove urban warming bias. However, only about 15% of the adjustments were of this type!
  • Type 2: The slopes for both parts are positive, e.g., Figure 22. This is the opposite of what should be occurring! About 9% of the adjustments were of this type.
  • Type 3: The slope for the first part is negative, but the second part has a positive slope, e.g. Figure 23. About 39% of the adjustments were of this type.
  • Type 4: The slope for the first part is positive, but the second part has a negative slope. About 37% of the adjustments were of this type.

Figure 23. Adjustments applied by NASA to the Dublin Airport (Ireland) record. Click on image to enlarge.

Figure 23 shows a typical Type 3 adjustment. The first part of the adjustment removes a warming trend from the urban record, as we would expect. However, the second part introduces a warming trend, i.e., the opposite of what we should expect.

Type 4 adjustments are similar, but just the other way round, i.e., the first slope is positive and the second slope is negative. The vast majority of NASA’s adjustments (76%) are either Type 3 or Type 4. Since these two types (like the Type 2 adjustments) each include a “cooling bias” component, that means only 15% of NASA’s adjustments are the expected kind that simply remove urban warming bias.
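
The Type 1-4 bookkeeping is straightforward to express in code. The sketch below assumes each adjustment has already been reduced to the slopes of its two legs; the example slope pairs are made up purely for illustration.

```python
# Sketch of the Type 1-4 bookkeeping described above, assuming each adjustment has
# already been reduced to the slopes of its two legs (a negative slope removes warming).
# The example slope pairs are made up for illustration; zero slopes are not handled.
from collections import Counter

def adjustment_type(slope_leg1, slope_leg2):
    if slope_leg1 < 0 and slope_leg2 < 0:
        return 1   # both legs remove warming: the expected urban-warming correction
    if slope_leg1 > 0 and slope_leg2 > 0:
        return 2   # both legs add warming: an "urban cooling" correction
    if slope_leg1 < 0 and slope_leg2 > 0:
        return 3   # warming removed early on, then warming added
    return 4       # warming added early on, then warming removed

example_slopes = [(-0.2, -0.5), (0.1, 0.3), (-0.1, 0.2), (0.2, -0.4)]   # hypothetical pairs
counts = Counter(adjustment_type(s1, s2) for s1, s2 in example_slopes)
print({t: counts[t] / len(example_slopes) for t in (1, 2, 3, 4)})       # fraction of each type
```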

Hansen et al. point out that urbanization can sometimes lead to an artificial “cooling” bias, e.g.,

Anthropogenic effects can also cause a non-climatic cooling, for example, as a result of irrigation and planting of vegetation, but these effects are usually outweighed by urban warming. - Hansen et al., 1999 (Abstract; Google Scholar access)

On this basis, they seem to have decided that it doesn’t matter whether their computer program removes a warming trend from the data or introduces one! It seems they reckon if their program decides there is a “cooling bias”, then it must be right.

However, as we discuss in Paper 2, while it is true that there are some types of urban development which can introduce cooling under certain conditions, these “urban cooling” trends are very limited and rare. Urbanization bias is almost entirely a warming bias – that’s why we get Urban Heat Islands.

At any rate, they certainly shouldn’t be occurring for 85% of the urban stations, which is what NASA’s adjustment program calculates. Indeed, it contradicts Hansen et al., 1999’s own (correct) claim quoted above that “[urban cooling] effects are usually outweighed by urban warming”!

NASA’s net adjustments are physically unrealistic

Figure 24. The gridded average net effect of NASA’s urban adjustments on all their urban stations. The dotted lines indicate the error bars associated with the averages (2 S.E.). Click on image to enlarge.

As we discussed above, for almost every one of NASA’s adjustments that removes an urban warming trend, there is an equivalent adjustment that removes an “urban cooling” trend from another station.

As a result, the net effect of all the adjustments was very small, i.e., about -0.1°C/century (Figure 24). Since NASA only identified about half of the stations as being urban, the overall effect on their global temperature estimates was only half of that, i.e., about -0.05°C/century.

We can see why this result helped convince the other groups that they didn’t really need to worry about urbanization bias. They probably thought that if NASA had developed a program to explicitly correct for urbanization bias, but it only removes about 0.05°C/century, then it’s not a big deal.

Of course, the reason why the net adjustments were so small is that the individual adjustments were mostly nonsensical, as we saw above!

At any rate, even if their computer program were right about the net magnitude of the adjustments being small, the actual year-to-year variations of the net adjustments are physically unrealistic. For instance, we can see from Figure 24 that their program calculated that urbanization led to a net “cooling” during two periods – 1880s-1890s and 1930s-1960s. This is in itself nonsense.

However, worse still, there is almost no net adjustment for the period from the 1970s to present. This is arguably the period which has seen the most urban development of all. So, the fact that NASA’s adjustments peter out to nothing during this period beggars belief.

NASA’s adjustments don’t even work on the most heavily urbanized metropolises
As a simple test to see how reliable NASA’s adjustments are, we selected a sample of stations from the largest urban areas in the world today. On average, these stations should be strongly affected by urbanization bias. So, if NASA’s computer program is at all effective, then it should be removing a lot of warming from these stations.

We identified the most highly urbanized stations in terms of associated population and night-light brightness. We only selected stations with an associated population greater than 2 million and a very high night-light brightness according to two different satellite estimates. This gave us 116 stations from a total of 47 urban metropolises (a lot of the metropolises had more than one station).
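
As an illustration of this selection step, here is a small Python sketch. The field names and the brightness thresholds are assumptions made for the sketch, not the actual criteria or data format used in the paper.

```python
# Illustrative filter mirroring the selection described above. The field names and the
# brightness thresholds are assumptions made for this sketch, not the paper's actual
# criteria or data format.
VERY_BRIGHT_A = 60   # hypothetical cut-off for night-light product A
VERY_BRIGHT_B = 60   # hypothetical cut-off for night-light product B

def is_highly_urbanized(station):
    """station: dict with an associated population and brightness from two satellite products."""
    return (station["population"] > 2_000_000
            and station["brightness_a"] >= VERY_BRIGHT_A
            and station["brightness_b"] >= VERY_BRIGHT_B)

stations = [   # two made-up example stations
    {"name": "Metropolis A", "population": 9_500_000, "brightness_a": 63, "brightness_b": 62},
    {"name": "Small Town B", "population": 40_000, "brightness_a": 12, "brightness_b": 9},
]
sample = [s for s in stations if is_highly_urbanized(s)]
print([s["name"] for s in sample])   # only the large, bright metropolis is kept
```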


Figure 25. Top panel shows the gridded average adjustments applied by NASA to the stations in the most heavily urbanized metropolises. The bottom panel shows the population growth associated with those areas. Click on image to enlarge.

The average adjustments NASA’s computer program applied to this sample are shown in Figure 25.

We can see that, as expected, the program applied quite heavy adjustments to the early part of the records, removing about 0.8°C/century of a warming trend from the period from 1895-1980.

So far, so good… However, instead of these adjustments increasing for the 1980s, 1990s and 2000s, the adjustments actually dramatically decreased for the post-1980 period!!! By the 1990s, the net adjustments were essentially zero.

The bottom panel of Figure 25 shows the total population growth of the 47 metropolises. Although population isn’t an exact measure of urbanization, there is no justification in the population data for NASA’s sudden 1980 decrease in adjustments.

Their computer program failed the test.

Some of the flaws in NASA’s computer program
In our paper, we identified several major flaws in NASA’s adjustment methods:

  • Their rural identification is unreliable, and a lot of the stations they identify as “rural” are actually urban
  • If the rural records end before the urban records, NASA’s program stops adjusting the urban records. But, rather than deleting the bits of the record it couldn’t adjust, it keeps the rest of the record unadjusted! As we mentioned in Section 2, there is a severe shortage of long rural records (we will elaborate on this in Section 6). So, this is a very frequent problem. This is a doubly insidious problem. Not only are the urban records included unadjusted for those periods, but because there aren’t enough rural stations to carry out the adjustments, that also means that they aren’t being included in the regional trends for that area. That is, there are no non-urban stations in the area to dilute the urbanization bias for the region in any way.
  • NASA explicitly assume that the rural records have no non-climatic biases. This is a serious problem. For instance, if a rural station underwent a station move which accidentally increased the temperature, then this would increase the trend of the “rural average”, and would fool the computer program into thinking the urbanization bias at that station was less than it actually was. It could even fool the program into thinking the urbanization bias was a cooling bias.
We discuss the significance of these (and other) flaws in Paper 2, but the bottom line is that the adjustment method implemented in their computer program is seriously flawed.

Conclusion: NASA’s adjustments don’t work.
The “urbanization bias” corrections their computer program calculates are unrealistic, unreliable, inadequate, and often just plain inappropriate!

Their program introduces at least as many biases as it removes.

The main problem seems to be that once they developed their computer program, they seem to have just assumed it would work, and started using it, without proper testing.

If their computer program were a commercial software package from a proper software company (e.g., Microsoft or Adobe), this would never have happened. As soon as customers started noticing that the program doesn’t actually work, the company would have been inundated with complaints. However, because the developers of the computer program (the Goddard Institute) are the only “customer”, it seems they simply never noticed that it doesn’t work.

The urbanization bias problem is a very challenging one. NASA seem to have severely underestimated just how challenging it is.
 

Here are a few comments from Kevin Cowtan:
  • I only picked one point from Booker's article. He makes a load of arguments, most of which have been debunked many times before. The Paraguay one was new, interesting, and rather more complex than the rest.
  • We still don't know why there appear to be synchronized breaks across the Paraguay stations. Berkeley list station moves in 1971 for Puerto Casado and San Juan, but we don't have a documented reason for the rest. That's an interesting question for further research.
  • Trying to do this kind of work at the speed of the news cycle is hard. The video would be much better if we worked on it for a week. But it would be far less relevant.
  • One of the things we really hope to achieve with the MOOC is to equip anyone to be able to test claims like this for themselves.

LINK: Kevin Cowtan Debunks Christopher Booker's Temperature Conspiracy Theory
TEXT: "In The Telegraph, Christopher Booker accused climate scientists of falsifying the global surface temperature data, claiming trends have been "falsified" through a "wholesale corruption of proper science." Booker's argument focuses on adjustments made to raw data from temperature stations in Paraguay. In the video below, Kevin Cowtan examines the data and explains why the adjustments in question are clearly justified and necessary, revealing the baselessness of Booker's conspiracy theory."


NOAA Paraguay data
TEXT: "Published on Jan 26, 2015: A quick response to an article by Christopher Booker in the Telegraph."


Understanding adjustments to temperature data
LINK: Understanding adjustments to temperature data | Climate Etc.
 
It's a short video - just over 10 minutes - lots of relevant perspective, but the observations at 4:00 on Wikipedia are interesting -

Astroturf and manipulation of media messages | Sharyl Attkisson | TEDxUniversityofNevada

TEXT: "Published on Feb 6, 2015: In this eye-opening talk, veteran investigative journalist Sharyl Attkisson shows how astroturf, or fake grassroots movements funded by political, corporate, or other special interests very effectively manipulate and distort media messages."
 