Author Topic: Pathological Science  (Read 576638 times)

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72281
    • View Profile
Re: Pathological Science
« Reply #200 on: December 08, 2009, 01:48:54 PM »
BBG:

The works you aggregate here are growing into a valuable reference for literate people looking to become informed.


Body-by-Guinness

  • Guest
A Smoking Gun
« Reply #201 on: December 08, 2009, 06:19:52 PM »
A long piece with a lot of tables and formatting too intensive to replicate here. It traces how temperature data from Australia were aggregated into various data sets, demonstrating very dubious methodology and a failure to abide by already-lax standards when homogenizing disparate temp data:

http://wattsupwiththat.com/2009/12/08/the-smoking-gun-at-darwin-zero/

ETA: Another source digging even deeper into the Australian data. This stuff is a little dense, but incredibly damning:

http://joannenova.com.au/2009/12/smoking-guns-across-australia-wheres-the-warming/
« Last Edit: December 08, 2009, 08:27:58 PM by Body-by-Guinness »

Body-by-Guinness

  • Guest
Some Historical Perspective
« Reply #202 on: December 09, 2009, 04:56:28 PM »
Graphics-heavy piece that amusingly demonstrates how meaningless hockey sticks are on a geologic time scale:

http://www.foresight.org/nanodot/?p=3553

Rarick

  • Guest
Re: Pathological Science
« Reply #203 on: December 10, 2009, 02:40:15 AM »
As I continue LOOKING at this, climatology increasingly seems to be on a par with, or a mix of, astrology and Scientology. The argument against has no problem presenting data and facts pulled from actual thermometers. Those who argue for warming constantly rely on computer models and on what "everybody knows."

Those who argue against are looking at data sets on several scales, all of which show that this is not a new warming trend, that there was a larger one during the medieval warm period, and a still larger one predating civilization. Given those factors, I would consider it arrogance on our part to believe we have had that kind of effect on the climate.

The place I would agree with the Greens is that we do need to be more efficient and careful, because we don't know. Finding Out The Hard Way (tm) is usually too expensive. It would be a good idea to continue our traditional "improvement over time" path through the Progress (tm) concept.

Body-by-Guinness

  • Guest
Re: Pathological Science
« Reply #204 on: December 10, 2009, 06:55:00 AM »
I agree with the improvement-over-time-is-prudent argument, but note that a lot of greenies do their damnedest to make sure the sort of wealth needed to make prudent choices isn't created in the first place. A lot of the payments to developing countries currently being argued for in Denmark make little sense, as other UN transfers of wealth to kleptocracies have very little to show for themselves. This habit extends into the nuclear energy arena, where greenies work diligently to make sure the costs associated with new plants are very high. Areas where the greenies hold sway, like ethanol production or foisting small cars on folks, are essentially boondoggles that eat wealth while returning very little. As such, I'd argue future efforts have to be green not only in the feel-good sense but also in a fiscal sense.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72281
    • View Profile
Re: Pathological Science
« Reply #205 on: December 10, 2009, 07:10:40 AM »
"Finding Out The Hard Way (tm) is usually too expensive."

What's up with the (tm) on a common phrase?

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
Re: Pathological Science
« Reply #206 on: December 10, 2009, 07:23:30 AM »
Ok, I get that ethanol made from corn is possible because of gov't funding and not economically viable on its own. What about using sawdust or other organic waste rather than food? Doesn't this work for Brazil?

Body-by-Guinness

  • Guest
Re: Pathological Science
« Reply #207 on: December 10, 2009, 09:07:18 AM »
I'm down with that, GM, assuming it's economically feasible to retool to make it work. Lotta mechanics I know, however, claim ethanol eats gaskets and such in older cars not built to use that solvent as fuel. Think those sorts of unintended consequences ought to be kept in mind when trying gee-whiz schemes.

Lotta folks in the dry county in KY where I do a lot of work know what to do with the ethanol they produce. Alas, the government isn't down with that use.

Body-by-Guinness

  • Guest
What's Wrong with Being Homo? I
« Reply #208 on: December 10, 2009, 05:06:27 PM »
One of the major areas of inquiry emerging as the CRU leak data is analyzed involves the issue of "homogenization," where disparate, putatively proximate data sets are hammered into a cohesive dataset, despite the fact that parts of that set weren't cohesive in the first place. I posted a couple of pieces earlier that look at a well-illustrated example of this occurring in Australia; a statistician I am growing very fond of now picks up the story and looks at the methodological implications of homogenizing this data.

Homogenization of temperature series: Part I
Published by Briggs at 11:08 am under Climatology

Introduction

Time to get technical.

First, surf over to Willis Eschenbach’s gorgeous piece of statistical detective work on how GHCN “homogenized” temperature data for Darwin, Australia. Pay particular attention to his Figures 7 & 8. Take your time reading his piece: it is essential.

There is vast confusion on data homogenization procedures. This article attempts to make these subjects clearer. I pay particular attention to the goals of homogenization, its pitfalls, and most especially, the resulting uncertainties. The uncertainty we have in our eventual estimates of temperature is grossly underestimated. I will come to the, by now, non-shocking conclusion that too many people are too certain about too many things.

My experience has been that anything over 800 words doesn’t get read. There’s a lot of meat here, and it can’t all be squeezed into one 800-word sausage skin. So I have linked the sausage into a multi-day post with the hope that more people will get through it.

Homogenization goals

After reading Eschenbach, you now understand that, at a surrounding location—and usually not a point—there exists, through time, temperature data from different sources. At a loosely determined geographical spot over time, the data instrumentation might have changed, the locations of instruments could be different, there could be more than one source of data, or there could be other changes. The main point is that there are lots of pieces of data that some desire to stitch together to make one whole.

Why?

I mean that seriously. Why stitch the data together when it is perfectly useful if it is kept separate? By stitching, you introduce error, and if you aren’t careful to carry that error forward, the end result will be that you are far too certain of yourself. And that condition—unwarranted certainty—is where we find ourselves today.

Let’s first fix an exact location on Earth. Suppose this to be the precise center of Darwin, Australia: we’d note the specific latitude and longitude to be sure we are at just one spot. Also suppose we want to know the daily average temperature for that spot (calculated by averaging the 24 hourly values), which we use to calculate the average yearly temperature (the mean of those 365.25 daily values), which we want to track through time. All set?
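
As a minimal sketch of the averaging just described, in Python with numpy (all values invented for illustration):

    import numpy as np

    # 24 invented hourly readings (deg C) for one day at our fixed spot
    hourly = np.array([22.1, 21.8, 21.5, 21.3, 21.2, 21.4, 22.0, 23.1,
                       24.5, 26.0, 27.2, 28.1, 28.8, 29.2, 29.3, 29.0,
                       28.2, 27.1, 25.9, 24.8, 24.0, 23.3, 22.8, 22.4])
    daily_avg = hourly.mean()          # one daily average temperature

    # a year of such daily averages collapses to one annual mean the same way:
    #   annual_avg = np.mean(daily_values)   # ~365 entries
    print(round(float(daily_avg), 2))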

Scenario 1: fixed spot, urban growth

The most difficult scenario first: our thermometer is located at our precise spot and never moves, nor does it change characteristics (always the same, say, mercury bulb), and it always works (its measurement error is trivial and ignorable). But the spot itself changes because of urban growth. Whereas once the thermometer was in an open field, later a pub opens adjacent to it, and then comes a parking lot, and then a whole city around the pub.

In this case, we would have an unbroken series of temperature measurements that would probably—probably!—show an increase starting at the time the pub construction began. Should we “correct” or “homogenize” that series to account for the possible urban heat island effect?

No.

At least, not if our goal was to determine the real average temperature at our spot. Our thermometer works fine, so the temperatures it measures are the temperatures that are experienced. Our series is the actual, genuine, God-love-you temperature at that spot. There is, therefore, nothing to correct. When you walk outside the pub to relieve yourself, you might be bathed in warmer air because you are in a city than if you were in an open field, but you aren’t in an open field, you are where you are and you must experience the actual temperature of where you live. Do I make myself clear? Good. Memorize this.

Scenario 2: fixed spot, longing for the fields

But what if our goal was to estimate what the temperature would have been if no city existed; that is, if we want to guess the temperature as if our thermometer was still in an open field? Strange goal, but one shared by many. They want to know the influence of humans on the temperature of the long-lost field—while simultaneously ignoring the influence of humans based on the new city. That is, they want to know how humans living anywhere but the spot’s city might have influenced the temperature of the long-lost field.

It’s not that this new goal is not quantifiable—it is; we can always compute probabilities for counterfactuals like this—but its meaning is more nuanced and difficult to grasp than our old goal. It would not do for us to forget these nuances.

One way to guess would be to go to the nearest field to our spot and measure the temperature there, while also measuring it at our spot. We could use our nearby field as a direct substitute for our spot. That is, we just relabel the nearby field as our spot. Is this cheating? Yes, unless you attach the uncertainty of this switcheroo to the newly labeled temperature. Because the nearby field is not our spot, there will be some error in using it as a replacement: that error should always accompany the resulting temperature data.

Or we could use the nearby field’s data as input to a statistical model. That model also takes as input our spot’s readings. To be clear: the nearby field and the spot’s readings are fed into a correction model that spits out an unverifiable, counterfactual guess of what the temperature would be if there were no city in our spot.
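
As a rough sketch of what such a correction model might look like, here is a simple linear regression in Python/numpy. This is illustrative only, not the GHCN procedure, and every number in it is invented:

    import numpy as np

    # invented overlap period: annual means at the nearby field and at our spot
    field = np.array([21.4, 21.6, 21.3, 21.8, 21.5, 21.9, 21.7, 22.0])
    spot  = np.array([22.3, 22.6, 22.2, 22.9, 22.5, 23.0, 22.8, 23.2])

    # fit spot = a + b * field; the residual scatter is the irreducible
    # error of using the field as a stand-in for the spot
    b, a = np.polyfit(field, spot, 1)
    resid = spot - (a + b * field)
    sigma = resid.std(ddof=2)   # must travel with every "corrected" value

    # counterfactual guess for a year with only a field reading
    guess = a + b * 21.6
    print(f"{guess:.2f} +/- ~{2 * sigma:.2f} (rough predictive spread)")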

Tomorrow

Counterfactuals, a definition of what error and uncertainty means, an intermission on globally averaged temperature calculations, and Scenario 3: different spots, fixed flora and fauna.

http://wmbriggs.com/blog/?p=1459

Body-by-Guinness

  • Guest
What's Wrong with Being Homo? II
« Reply #209 on: December 10, 2009, 05:13:47 PM »
Homogenization of temperature series: Part II
Published under Climatology, Good Stats, Philosophy

Be sure to see Part I
Aside: counterfactuals

A counterfactual is a statement saying what would be the case if its conditional were true. Like, “Germany would have won WWII if Hitler did not invade Russia.” Or, “The temperature at our spot would be X if no city existed.” Counterfactuals do not make statements about what really is, but only about what might have been, had something that wasn’t true been true.

They are sometimes practical. Credit card firms face counterfactuals each time they deny a loan and say, “This person will default if we issue him a card.” Since the decision to issue a card is based on some model or other decision process, the company can never directly verify whether its model is skillful, because they will never issue the card to find out whether or not its holder defaults. In short, counterfactuals can be interesting, but they cannot change what physically happened.

However, probability can handle counterfactuals, so it is not a mistake to seek their quantification. That is, we can easily assign a probability to the Hitler, credit card, or temperature question (given additional information about models, etc.).

Asking what the temperature would be at our spot had there not been a city is certainly a counterfactual. Another is to ask what the temperature of the field would have been given there was a city. This also is a strange question to ask.

Why would we want to know what the temperature of a non-existent city would have been? Usually, to ask how much more humans who don’t live in the city at this moment might have influenced the temperature in the city now. Confusing? The idea is if we had a long series in one spot, surrounded by a city that was constant in size and makeup, we could tell if there were a trend in that series, a trend that was caused by factors not directly associated with our city (but was related to, say, the rest of the Earth’s population).

But since the city around our spot has changed, if we want to estimate this external influence, we have to guess what the temperature would have been if either the city was always there or always wasn’t. Either way, we are guessing a counterfactual.

The thing to take away is that the guess is complicated and surrounded by many uncertainties. It is certainly not as clear cut as we normally hear. Importantly, just as with the credit card example, we can never verify whether our temperature guess is accurate or not.

Intermission: uncertainty bounds and global average temperature

This guess would—should!—have a plus and minus attached to it, some guidance of how certain we are of the guess. Technically, we want the predictive uncertainty of the guess, and not the parametric uncertainty. The predictive uncertainty tells us the plus and minus bounds in the units of actual temperature. Parametric uncertainty states those bounds in terms of the parameters of the statistical model. Near as I can tell (which means I might be wrong), GHCN and, inter alia, Mann use parametric uncertainty to state their results: the gist being that they are, in the end, too confident of themselves.

(See this post for a distinction between the two; the predictive uncertainty is always larger than the parametric, usually by two to ten times as much. Also see this marvelous collection of class notes.)
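
To see the distinction concretely, here is a small numpy illustration on a synthetic series: the same straight-line fit yields a narrower parametric (fit) interval and a wider predictive interval for a new value.

    import numpy as np

    # synthetic 30-year series: weak trend plus noise
    rng = np.random.default_rng(1)
    t = np.arange(30.0)
    y = 0.01 * t + rng.normal(0.0, 0.3, t.size)

    X = np.column_stack([np.ones_like(t), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    s2 = np.sum((y - X @ beta) ** 2) / (t.size - 2)
    XtX_inv = np.linalg.inv(X.T @ X)

    x0 = np.array([1.0, 30.0])                  # the next, unseen year
    var_fit  = s2 * (x0 @ XtX_inv @ x0)         # parametric (fit) variance
    var_pred = s2 * (1 + x0 @ XtX_inv @ x0)     # predictive variance: wider
    print(f"parametric +/- {2 * np.sqrt(var_fit):.2f}, "
          f"predictive +/- {2 * np.sqrt(var_pred):.2f}")
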
OK. We have our guess of what the temperature might have been had the city not been there (or if the city was always there), and we have said that that guess should come attached with plus/minus bounds of its uncertainty. These bounds should be super-glued to the guess, and coated with kryptonite so that even Superman couldn’t detach them.

Alas, they are usually tied loosely with cheap string from a dollar store. The bounds fall off at the lightest touch. This is bad news.

It is bad because our guess of the temperature is then given to others who use it to compute, among other things, the global average temperature (GAT). The GAT is itself a conglomeration of measurements from sites all over (a very small—and changing—portion) of the globe. Sometimes the GAT is a straight average, sometimes not, but the resulting GAT is itself uncertain.

Even if we ignored the plus/minus bounds from our guessed temperatures, and also ignored it from all the other spots that go into the GAT, the act of calculating the GAT ensures that it must carry its own plus/minus bounds—which should always be stated (and such that they are with respect to the predictive, and not parametric uncertainty).

But if the bounds from our guessed temperature aren’t attached, then the eventual bounds of the GAT will be far, far, too narrow. The gist: we will be way too certain of ourselves.
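
A toy numpy example of the propagation being described: if each input value carries a plus/minus, the average must carry one too. The station values and sigmas are invented, and the independence of errors assumed below is itself optimistic.

    import numpy as np

    temps  = np.array([14.2, 15.1, 13.8, 14.7])   # invented station values
    sigmas = np.array([0.3, 0.5, 0.4, 0.6])       # their plus/minus bounds

    avg = temps.mean()
    # propagate the inputs' bounds, assuming independent errors
    avg_sigma = np.sqrt(np.sum(sigmas ** 2)) / temps.size
    print(f"average = {avg:.2f} +/- {2 * avg_sigma:.2f}")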

We haven’t even started on why the GAT is such a poor estimate for the global average temperature. We’ll come to these objections another day, but for now remember two admonitions. No thing experiences the GAT; physical objects can only experience the temperature of where they are. Since the GAT contains a large (but not large enough) number of stations, any individual station—as Dick Lindzen is always reminding us—is, at best, only weakly correlated with the GAT.

But enough of this, save we should remember that these admonitions hold whatever homogenization scenario we are in.

Next time

More scenarios!

http://wmbriggs.com/blog/?p=1462

Rarick

  • Guest
Re: Pathological Science
« Reply #210 on: December 11, 2009, 02:41:41 AM »
"Finding Out The Hard Way (tm) is usually too expensive."

What's up with the (tm) on a common phrase?

Overused enough, it may as well be an over-advertised brand name?

Body-by-Guinness

  • Guest
Context for the "Trick"
« Reply #211 on: December 11, 2009, 06:35:50 AM »
More damning deconstruction of Climategate data:

IPCC and the “Trick”

Much recent attention has been paid to the email about the “trick” and the effort to “hide the decline”. Climate scientists have complained that this email has been taken “out of context”. In this case, I’m not sure that it’s in their interests that this email be placed in context because the context leads right back to a meeting of IPCC authors in Tanzania, raising serious questions about the role of IPCC itself in “hiding the decline” in the Briffa reconstruction.

Relevant Climategate correspondence in the period (September-October 1999) leading up to the trick email is incomplete, but, in context, is highly revealing. There was a meeting of IPCC lead authors between Sept 1-3, 1999 to consider the “zero-order draft” of the Third Assessment Report. The emails provide clear evidence that IPCC had already decided to include a proxy diagram reconstructing temperature for the past 1000 years and that a version of the proxy diagram was presented at the Tanzania meeting showing the late twentieth century decline. I now have a copy of the proxy diagram presented at this meeting (see below).

The emails show that the late 20th century decline in the Briffa reconstruction was perceived by IPCC as “diluting the message”, that “everyone in the room at IPCC” thought that the Briffa decline was a “problem” and a “potential distraction/detraction”, that this was then the “most important issue” in chapter 2 of the IPCC report and that there was “pressure” on Briffa and other authors to show a “nice tidy story” of “unprecedented warming in a thousand years or more”.

The chronology in today’s posts shows that the version of the Briffa reconstruction shown in the subsequent proxy diagram in the IPCC “First Order Draft” (October 27, 1999), presumably prepared under the direction of IPCC section author Mann, deleted the inconvenient portion (post-1960) of the Briffa reconstruction, together with other modifications that had the effect of not “diluting the message”. About two weeks later (Nov 16, 1999) came the now infamous Jones email reporting the use of “Mike’s Nature trick” to “hide the decline” in a forthcoming WMO (World Meteorological Organization) report. Jones’ methodology is different from the IPCC methodology. Jones’ trick has been described in previous posts. Today, I’ll describe both the context of the IPCC version of the “trick” and progress to date in reverse engineering the IPCC trick.

IPCC Lead Authors’ Meeting, Sept 1999

IPCC Lead Authors met in Arusha, Tanzania from September 1 to 3, 1999 (see Houghton, 929985154.txt and 0938018124.txt), at which the final version of the “zero-order” draft of the Third Assessment Report was presented and discussed. The “First-Order Draft” was sent out to reviewers two months later (end of October 1999).

By this time, IPCC was already structuring the Summary for Policy-makers and a proxy diagram showing temperature history over the past 1000 years was a “clear favourite”.

A proxy diagram of temperature change is a clear favourite for the Policy Makers summary. (Folland, Sep 22, 1999, in 0938031546.txt)

This desire already placed “pressure” on the authors to “present a nice tidy story” about “unprecedented warming in a thousand years”:

I know there is pressure to present a nice tidy story as regards ‘apparent unprecedented warming in a thousand years or more in the proxy data’ …(Briffa, Sep 22, 1999, 0938031546.txt)

The “zero-order” draft (their Figure 2.3.3a as shown below) showed a version of the Briffa reconstruction with little variation and a noticeable decline in the late 20th century.



Figure 1. IPCC Third Assessment Report Zero-Order Draft Figure 2.3.3a Comparison of millennial Northern Hemisphere (NH) temperature reconstructions from different investigators (Briffa et al, 1998; Jones et al, 1998; Mann et al, 1998;1999a)… All the series were filtered with a 40 year Gaussian filter. The problematic Briffa reconstruction is the yellow series.

No minutes of this meeting are available, but Climategate correspondence on Sep 22-23, 1999 provides some contemporary information about the meeting. Mann noted that “everyone in the room at IPCC was in agreement that the [decline in the Briffa reconstruction] was a problem”:

Keith’s series… differs in large part in exactly the opposite direction that Phil’s does from ours. This is the problem we all picked up on (everyone in the room at IPCC was in agreement that this was a problem and a potential distraction/detraction from the reasonably concensus viewpoint we’d like to show w/ the Jones et al and Mann et al series. (Mann, Sep 22, 1999, 0938018124.txt)

IPCC Chapter Author Folland of the U.K. Hadley Center wrote to Mann, Jones and Briffa that the proxy diagram was a “clear favourite” for the Summary for Policy-makers, but that the existing presentation showing the decline of the Briffa reconstruction “dilutes the message rather significantly”:

A proxy diagram of temperature change is a clear favourite for the Policy Makers summary. But the current diagram with the tree ring only data [i.e. the Briffa reconstruction] somewhat contradicts the multiproxy curve and dilutes the message rather significantly… This is probably the most important issue to resolve in Chapter 2 at present.. (Folland, Sep 22, 1999, in 0938031546.txt)

After telling the section authors about the stone in his shoe, Folland added that he only “wanted the truth”.

Climategate Letters, Sep 22-23, 1999

The Climategate Letters contain a flurry of correspondence between Mann, Briffa, Jones and Folland (copy to Tom Karl of NOAA) on Sep 22-23, 1999, shedding light on how the authors responded to the stone in IPCC’s shoe. By this time, it appears that each of the three authors (Jones, Mann and Briffa) had experimented with different approaches to the “problem” of the decline.

Jones appears to have floated the idea of using two different diagrams – one without the inconvenient Briffa reconstruction (presumably in the Summary for Policy-makers) and one with the Briffa reconstruction (presumably in the relevant chapter). Jones said that this might make it “somewhat awkward for the reader trying to put them into context”, with it being unclear whether Jones viewed this as an advantage or disadvantage:

If we go as is suggested then there would be two diagrams – one simpler one with just Mann et al and Jones et al and in another section Briffa et al. This might make it somewhat awkward for the reader trying to put them into context. (Jones, Sep 22, 1999 Jones 093801949)

Another approach is perhaps evidenced in programming changes a week earlier (Sep 13-14, 1999), in which programs in the osborn-tree6/mann/oldprog directory appear to show efforts to “correct” the calibration of the Briffa reconstruction, which may or may not be relevant to the eventual methodology to “hide the decline”.

The correspondence implies (though this is at present not proven) that IPCC section author Mann’s first reaction to the “problem” was to totally delete the Briffa reconstruction from the proxy diagram, as the correspondence of September 22 seems to have been precipitated by Briffa being unhappy at an (unseen) version of the proxy diagram in which his reconstruction had been deleted.

Briffa’s lengthy email of Sep. 22, 1999 (0938031546.txt) should be read in full. Briffa was keenly aware of the pressure to present a “nice tidy story” of “unprecedented warming”, but was worried about the proxy evidence:

I know there is pressure to present a nice tidy story as regards ‘apparent unprecedented warming in a thousand years or more in the proxy data’ but in reality the situation is not quite so simple… [There are] some unexpected changes in response that do not match the recent warming. I do not think it wise that this issue be ignored in the chapter. (Briffa, Sep 22, 1999, 0938031546.txt)

He continued:

For the record, I do believe that the proxy data do show unusually warm conditions in recent decades. I am not sure that this unusual warming is so clear in the summer responsive data. I believe that the recent warmth was probably matched about 1000 years ago. I do not believe that global mean annual temperatures have simply cooled progressively over thousands of years as Mike appears to and I contend that that there is strong evidence for major changes in climate over the Holocene (not Milankovich) that require explanation and that could represent part of the current or future background variability of our climate. (Briffa, Sep 22, 1999, 0938031546.txt)

Thus, when Mann arrived at work on Sep 22, 1999, he observed that he had walked into a “hornet’s nest” (Mann, Sep 22, 1999, 0938018124.txt). In an effort to resolve the dispute, Mann said that (subject to the agreement of Chapter Authors Karl and Folland) he would add back Briffa’s reconstruction, but pointed out that this would present a “conundrum”:

So if Chris[Folland] and Tom [Karl] are ok with this, I would be happy to add Keith’s series. That having been said, it does raise a conundrum: We demonstrate … that the major discrepancies between Phil’s and our series can be explained in terms of spatial sampling/latitudinal emphasis … But …Keith’s series… differs in large part in exactly the opposite direction that Phil’s does from ours. This is the problem we all picked up on (everyone in the room at IPCC was in agreement that this was a problem and a potential distraction/detraction from the reasonably concensus viewpoint we’d like to show w/ the Jones et al and Mann et al series. (Mann Sep 22, 0938018124.txt)

Mann went on to say that the skeptics would have a “field day” if the declining Briffa reconstruction were shown and that he’d “hate to be the one” to give them “fodder”:

So, if we show Keith’s series in this plot, we have to comment that “something else” is responsible for the discrepancies in this case.…Otherwise, the skeptics have an field day casting doubt on our ability to understand the factors that influence these estimates and, thus, can undermine faith in the paleoestimates. I don’t think that doubt is scientifically justified, and I’d hate to be the one to have to give it fodder! (Mann Sep 22, 0938018124.txt)

By the following day, matters seem to have settled down, with Briffa apologizing to Mann for his temporary pangs of conscience. On Oct 5, 1999, Osborn (on behalf of Briffa) sent Mann a revised version of the Briffa reconstruction with more “low-frequency” variability (Osborn, Oct 5, 1999, 0939154709.txt); up to 1960, this version is identical to the digital version archived at NCDC for Briffa et al (JGR 2001). (The post-1960 values of this version were not “shown” in the version archived at NCDC; they were deleted.)

As discussed below, this version had an even larger late-20th century decline than the version shown at the Tanzania Lead Authors’ meeting. Nonetheless, the First Order Draft (Oct 27, 1999) sent out a few weeks later contained a new version of the proxy diagram (Figure 2.25), a version which contains the main elements of the eventual Third Assessment Report proxy diagram (Figure 2.21). Two weeks later came Jones’ now infamous “trick” email (0942777075.txt).

The IPCC Trick

Mann’s IPCC trick is related to Jones’ trick, but different. (The Jones trick has been explained in previous CA posts and consists of replacing the tree ring data with temperature data after 1960 – thereby hiding the decline – and then showing the smoothed graph as a proxy reconstruction.) While some elements of the IPCC Trick can be identified with considerable certainty, other elements are still somewhat unclear.
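
For readers who want the mechanics, here is a toy Python/numpy sketch of the splice-then-smooth operation as described above. The series are invented and a simple moving average stands in for the actual filter, so this is an illustration, not McIntyre's code:

    import numpy as np

    years = np.arange(1900, 2000)
    # invented "proxy" that tracks until 1960 and then declines
    proxy = np.where(years < 1960, 0.1 * np.sin((years - 1900) / 8.0),
                     -0.02 * (years - 1960))
    # invented "instrumental" record rising through the century
    instr = 0.012 * (years - 1940)

    merged = proxy.copy()
    merged[years >= 1960] = instr[years >= 1960]   # the splice

    # after smoothing, the proxy's post-1960 decline is gone from the curve
    smoothed = np.convolve(merged, np.ones(11) / 11.0, mode="same")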

The diagram below shows the IPCC version of the Briffa reconstruction (digitized from the IPCC 2001) compared to the actual Briffa data from the Climategate email of October 5, 1999, smoothed using the methodology that the caption to the IPCC figure says was used (a 40 year Hamming filter with end-point padding with the mean of the closing 20 years).
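
Read literally, the caption's stated method can be sketched in Python/numpy as follows; this is one assumption-laden reading of that caption, not a replication of the IPCC graphic:

    import numpy as np

    def hamming_smooth(y, width=40, pad=20):
        # 40-year Hamming weights, normalized to sum to one
        w = np.hamming(width)
        w = w / w.sum()
        # pad each end with the mean of the opening/closing `pad` years
        left  = np.full(width, y[:pad].mean())
        right = np.full(width, y[-pad:].mean())
        padded = np.concatenate([left, y, right])
        # convolve and trim back to the original length
        return np.convolve(padded, w, mode="same")[width:-width]

    # usage: smoothed = hamming_smooth(annual_values)  # 1-D array of annual data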



Figure 3. Versions of the Briffa Reconstruction in controversy, comparing the original data smoothed according to the reported methodology to a digitization of the IPCC version.

Clearly, there are a number of important differences between the version sent to Mann and the version that appeared in the IPCC report. The most obvious is, of course, that the decline in the Briffa reconstruction has, for the most part, been deleted from the IPCC proxy diagram. However, there are some other frustrating inconsistencies and puzzles that are all too familiar.

There are some more technical inconsistencies that I’ll record for specialist readers. It is very unlikely that the IPCC caption is correct in stating that a 40-year Hamming filter was used. Based on comparisons of the MBH reconstruction and Jones reconstruction, as well as the Briffa reconstruction, to versions constructed from raw data, it appears that a Butterworth filter was used – a filter frequently used in Mann’s subsequent work (a detail that, in addition, bears on the authorship of the graphic itself).
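
For comparison, a low-pass Butterworth smooth of the kind hypothesized here can be sketched with scipy; the order and cutoff below are guesses chosen to mimic a roughly 40-year smooth, not recovered parameters:

    import numpy as np
    from scipy.signal import butter, filtfilt

    # stand-in annual anomalies; any 1-D series of a few hundred points works
    rng = np.random.default_rng(0)
    series = np.cumsum(rng.normal(0.0, 0.1, 200))

    # 4th-order low-pass Butterworth. For annual data the Nyquist frequency
    # is 0.5 cycles/year, so a 1/40 cycles/year cutoff gives Wn = 0.05.
    b, a = butter(4, 0.05)
    smoothed = filtfilt(b, a, series)   # zero phase: filtered forward and back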

Second, the IPCC caption stated that “boundary constraints [were] imposed by padding the series with its mean values during the first and last 25 years.” Again, this doesn’t seem to reconcile with efforts to replicate the IPCC version from raw data. It appears far more likely to me that each of the temperature series has been padded with instrumental temperatures rather than the mean values of the last 25 years.
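
The two padding choices at issue can be written out side by side. These helper functions are illustrative, and the 25-year window is taken from the caption's wording:

    import numpy as np

    def pad_mean(y, n=25):
        # the caption's stated method: pad each end with that end's n-year mean
        return np.concatenate([np.full(n, y[:n].mean()), y,
                               np.full(n, y[-n:].mean())])

    def pad_instrumental(y, instr_after, n=25):
        # the suspected method: continue the series with instrumental values,
        # which pulls the smoothed endpoint toward the instrumental trend
        return np.concatenate([np.full(n, y[:n].mean()), y, instr_after[:n]])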

Finally, there are puzzling changes in scale. The underlying annual data for the Jones and Briffa reconstructions are expressed in deg C (basis 1961-1990) and should scale simply to the smoothed version in the IPCC version, but don’t quite. This may partly derive from errors introduced in digitization, but is a loose end in present replication efforts.

The final IPCC diagram (2.21) is shown below. In this rendering, the Briffa reconstruction is obviously no longer “a problem and a potential distraction/detraction” and does not “dilute the message”. Mann has not given any “fodder” to the skeptics, who obviously did not have a “field day” with the decline.



IPCC Third Assessment Report Figure 2.21: Comparison of warm-season (Jones et al., 1998) and annual mean (Mann et al., 1998, 1999) multi-proxy-based and warm season tree-ring-based (Briffa, 2000) millennial Northern Hemisphere temperature reconstructions. The recent instrumental annual mean Northern Hemisphere temperature record to 1999 is shown for comparison. Also shown is an extra-tropical sampling of the Mann et al. (1999) temperature pattern reconstructions more directly comparable in its latitudinal sampling to the Jones et al. series. The self-consistently estimated two standard error limits (shaded region) for the smoothed Mann et al. (1999) series are shown. The horizontal zero line denotes the 1961 to 1990 reference period mean temperature. All series were smoothed with a 40-year Hamming-weights lowpass filter, with boundary constraints imposed by padding the series with its mean values during the first and last 25 years.

Contrary to claims by various climate scientists, the IPCC Third Assessment Report did not disclose the deletion of the post-1960 values. Nor did it discuss the “divergence problem”. Yes, there had been previous discussion of the problem in the peer-reviewed literature (Briffa et al 1998) – a point made over and over by Gavin Schmidt and others. But it was not made in the IPCC Third Assessment Report. Not only was the deletion of the declining values not reported or disclosed in the IPCC Third Assessment Report, the hiding of the decline was made particularly artful because the potentially dangling 1960 endpoint of the Briffa reconstruction was hidden under other lines in the spaghetti graph as shown in the following blow-up:



Figure. Blow-up of IPCC Third Assessment Report Fig 2-21.

To my knowledge, no one noticed or reported this truncation until my Climate Audit post in 2005 here. The deletion of the decline was repeated in the 2007 Assessment Report First Order and Second Order Drafts, once again without any disclosure. No dendrochronologist recorded any objection in the Review Comments to either draft. As a reviewer of the Second Order Draft, I asked the IPCC in the strongest possible terms to show the decline reported at CA here:

Show the Briffa et al reconstruction through to its end; don’t stop in 1960. Then comment and deal with the “divergence problem” if you need to. Don’t cover up the divergence by truncating this graphic. This was done in IPCC TAR; this was misleading. (Reviewer’s comment ID #: 309-18)

They refused, stating that this would be “inappropriate”, though a short discussion on the divergence was added – a discussion that was itself never presented to external peer reviewers.

Returning to the original issue: climate scientists say that the “trick” is now being taken out of context. The Climategate Letters show clearly that the relevant context is the IPCC Lead Authors’ meeting in Tanzania in September 1999 at which the decline in the Briffa reconstruction was perceived by IPCC as “diluting the message”, as a “problem”, as a “potential distraction/detraction”. A stone in their shoe.

http://climateaudit.org/2009/12/10/ipcc-and-the-trick/#more-9483

Body-by-Guinness

  • Guest
Pictures are Worth a Lotta Words
« Reply #212 on: December 11, 2009, 09:58:09 AM »
Second post:

An amusing little graphic that tells a tale in more ways than one:


Body-by-Guinness

  • Guest
High Sticking
« Reply #213 on: December 12, 2009, 01:03:28 PM »
Video that puts hockey sticks in perspective:

http://www.youtube.com/watch?v=DFbUVBYIPlI&feature=player_embedded

Body-by-Guinness

  • Guest
Build Your Own Hockey Stick
« Reply #214 on: December 12, 2009, 03:06:55 PM »
2nd post:

Iowahawk does a very detailed job of teaching all interested how they can build their own hockey stick. Anyone seeking to understand the statistical method--not to mention pitfalls and massage points--behind AGW math should check this out:

http://iowahawk.typepad.com/iowahawk/2009/12/fables-of-the-reconstruction.html

Body-by-Guinness

  • Guest
What's Wrong with Being Homo? III
« Reply #215 on: December 12, 2009, 09:56:10 PM »
Homogenization of temperature series: Part III
Published under Statistics

Be sure to see: Part I, Part II

We still have work to do. This is not simple stuff, and if we try and opt for the easy way out, then we are guaranteed to make a mistake. Stick with me.

Scenario 3: different spots, fixed flora and fauna

We started with a fixed spot, which we’ll keep as an idealization. Let’s call our spot A: it sits at a precise latitude and longitude and never changes.

Suppose we have temperature measurements at B, a nearby location, but these stop at some time in the past. Those at A began about the time those at B stopped: a little before, or the exact time B stopped, or a little after. We’ll deal with all three of these situations, and with the word nearby.

But first a point of logic, oft forgotten: B is not A. That is, by definition, B is at a different location than A. The temperatures at B might mimic closely those at A; but still, B is not A.

Usually, of course, temperatures at two different spots are different. The closer B is to A, usually, the more correlated those temperatures are: and by that, I mean, the more they move in tandem.

Very well. Suppose that we are interested in composing a record for A since the beginning of the series at B. Is it necessary to do this?

No.

I’m sorry to be obvious once more, but we do not have a complete record at A, nor at B. This is tough luck. We can—and should—just examine the series at B and the series of A and make whatever decisions we need based on those. After all, we know the values of those series (assuming the data is measured without error: more on that later). We can tell if they went up or down or whatever.

But what if we insist on guessing the missing values of A (or B)? Why insist? Well, the old desire of quantifying a trend for an arbitrary length of time: arbitrary because we have to pick, ad hoc, a starting date. Additional uncertainty is attached to this decision: and we all know how easy it is to cook numbers by picking a favorable starting point.

However, it can be done, but there are three facts which must be remembered: (1) the uncertainty with picking an arbitrary starting point; (2) any method will result in attaching uncertainty bounds to the missing values, and these must remain attached to the values; and (3) the resulting trend estimate, itself the output from a model which takes as input those missing values, will have uncertainty bounds—these will necessarily be larger than if there were no missing data at A. Both uncertainty bounds must be of the predictive and not parametric kind, as we discussed before.
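
A quick synthetic demonstration of point (1), start-date sensitivity, in Python/numpy; the series is random noise around a small built-in trend:

    import numpy as np

    rng = np.random.default_rng(2)
    years = np.arange(1900, 2000)
    temps = 0.005 * (years - 1900) + rng.normal(0.0, 0.25, years.size)

    # the fitted trend depends heavily on the (arbitrary) starting year
    for start in (1900, 1940, 1975):
        sel = years >= start
        slope = np.polyfit(years[sel], temps[sel], 1)[0]
        print(f"trend from {start}: {10 * slope:+.3f} deg/decade")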

Again, near as I can tell, carrying the uncertainty forward was not done in any of the major series. What that means is described in our old refrain: everybody is too certain of themselves.

How to guess A’s missing values? The easiest thing is to substitute B’s values for A, a tempting procedure if B is close to A. Because B is not A, we cannot do this without carrying forward the uncertainty that accompanies these substitutions. That means invoking a probability (statistical) model.

If B and A overlap for a period, we can model A’s values as a function of B’s. We can then use the values of B to guess the missing values of A. You’re tired of me saying this, but if this is done, we must carry forward the predictive uncertainty of the guesses into the different model that will be used to assess if there is a trend in A.
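
Sketching that procedure in numpy: overlap regression, gap filling, and the predictive spread that must travel with each guess. All values here are invented:

    import numpy as np

    # overlap period at both stations (invented annual means, deg C)
    A_over = np.array([15.2, 15.5, 15.1, 15.8, 15.4, 15.9])
    B_over = np.array([14.8, 15.2, 14.7, 15.5, 15.0, 15.6])

    slope, intercept = np.polyfit(B_over, A_over, 1)
    resid = A_over - (intercept + slope * B_over)
    s = resid.std(ddof=2)                  # residual scatter of the model

    B_gap = np.array([14.9, 15.1, 15.3])   # B's values where A is missing
    A_guess = intercept + slope * B_gap    # guesses, not observations
    # each guess must carry roughly +/- 2s into any later trend model
    print(A_guess.round(2), f"+/- ~{2 * s:.2f}")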

An Objection

“But hold on a minute, Briggs! Aren’t you always telling us that we don’t need to smooth time series, and isn’t fitting a trend model to A just another form of smoothing? What are you trying to pull!?”

Amen, brother skeptic. A model to assess trend—all those straight-line regressions you see on temperature plots—is smoothing a time series, a procedure that we have learned is forbidden.

“Not always forbidden. You said that if we wanted to use the trend model to forecast, we could do that.”

And so we can: about which, more in a second.

There is no point in asking if the temperature at A has increased (since some arbitrary date). We can just look at the data and tell with certainty whether or not now is hotter than then (again, barring measurement error and assuming all the values of A are actual and not guesses).

“Hold on. What if I want to know what the size of the trend was? How many degrees per century, or whatever.”

It’s the same. Look at the temperature now, subtract the temperature then, and divide by the number of years between to get the year-by-year average increase.

“What about the uncertainty of that increase?”

There is no uncertainty, unless you have used made-up numbers for A.

“Huh?”

Look. The point is that we have a temperature series in front of us. Something caused those values. There might have existed some forcing which added a constant amount of heat per year, plus or minus a little. Or there might have existed an infinite number of other forcing mechanisms, some of which were not always present, or were only present in varying degrees of strength. We just don’t know.

The straight-line estimate implies that the constant forcing is true, the one and only certain explanation of what caused the temperature to take the values it did. We can—even with guessed values of A, as long as those guessed values have their attached uncertainties—quantify the uncertainty in the linear trend assuming it is true.

“But how do we know the linear trend is true?”

We don’t. The only way we can gather evidence for that view is to skillfully forecast new values of A; values that were in no way used to assess the model.

“In other words, even if everybody played by the book and carried with them the predictive uncertainty bounds as you suggested, they are still assuming the linear trend model is true. And there is more uncertainty in guessing that it is true. Is that right?”

You bet. And since we don’t know the linear model is true, it means—once more!—that too many people are too certain of too many things.

Still to come

Wrap up Scenario 3, Teleconnections, Scenario 4 on different instruments and measurement error, and yet more on why people are too sure of themselves.

http://wmbriggs.com/blog/?p=1469

Body-by-Guinness

  • Guest
AGW Causes what it Doesn't Cause
« Reply #216 on: December 13, 2009, 01:00:39 PM »
Compendium of contradictory AGW panic mongering:

[youtube]http://www.youtube.com/watch?v=KLxicwiBQ7Q[/youtube]

Body-by-Guinness

  • Guest
One Continent's Temp Derived from a Single Station
« Reply #217 on: December 13, 2009, 06:44:47 PM »
An examination of Antarctic temp data that demonstrates some serious cherry-picking. I've appended the conclusion to the post, but the piece is table-heavy so I've only posted this link:

http://noconsensus.wordpress.com/2009/12/13/ghcn-antarctic-warming-eight-times-actual/

Ok, so for the regulars, you know I’ve maintained my calmness quite well. However, it’s not easy. I’m sick to death of advocate scientists pretending there are only minimal problems in the temperature record. Currently the ‘homogenized’ value added version of GHCN has a trend that is EIGHT times higher than actual for the ENTIRE ANTARCTIC CONTINENT. So I wonder if we can now, spend some of the ‘BILLIONS OF DOLLARS’ on cleaning up the temperature record!!! It’s no coincidence that AGW scientists aren’t demanding this be done in my opinion either.

Which of these records is used in CRU, GISS, NOAA — hell if I know (nobody else does either because at least CRU won’t say) but it’s pretty clear none of this data should be used in this condition.

It’s time the GOOD scientists demand GOOD TEMPERATURE DATA. It’s time the world embarked on a real project for gathering the true warming data rather than this kludged mess. It’s past time that the whole thing was done in an open and transparent way. The whole experience with GHCN this weekend felt like looking through a box of old socks.
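
For anyone wanting to reproduce this kind of comparison, the trend arithmetic itself is trivial. A minimal numpy sketch follows; loading the raw and adjusted GHCN series is left to the reader, since the file formats vary:

    import numpy as np

    def decadal_trend(years, temps):
        # least-squares slope, converted to degrees per decade
        return 10.0 * np.polyfit(years, temps, 1)[0]

    # hypothetical usage, once raw and adjusted series for a station are loaded:
    #   ratio = decadal_trend(yrs, adjusted) / decadal_trend(yrs, raw)
    # a ratio near 8 would reproduce the discrepancy complained of above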

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72281
    • View Profile
Re: Pathological Science
« Reply #218 on: December 14, 2009, 06:57:42 AM »
BBG et al:

Please break down for me in SUPER simplistic terms the case for and against the melting glaciers line of thought:

Thank you,
Marc

Body-by-Guinness

  • Guest
Re: Pathological Science
« Reply #219 on: December 14, 2009, 10:49:48 AM »
Simplest:

Somewhere glaciers are melting. Other places they are growing. To wit:

http://wattsupwiththat.com/2009/09/15/arctic-sea-ice-melt-appears-to-have-turned-the-corner-for-2009/

Most of the breathless stuff is pretty anecdotal. I hear it a lot from my coworkers: I visited that glacier when I was 10 and it has receded since then. However, meaningful observations of a lot of this stuff over geologically significant periods of time are pretty hard to come by, while it's a lot easier to observe things when it's warm and sunny--when they are at their minimum--than when it's cold and snowy--when they are at their max.

Note the degree of glacier retreat since the last ice age seen here:



Presumably whoever you are dealing with doesn't argue for a return to that glacial extent, and hence is smart enough to realize that not all change connotes something manmade and/or bad.

ETA more links:

http://wattsupwiththat.com/2009/12/14/a-look-at-sea-ice-compared-to-this-date-in-2007/#more-14159

http://wattsupwiththat.com/2009/12/09/hockey-stick-observed-in-noaa-ice-core-data/

http://wattsupwiththat.com/2009/11/02/oh-no-not-this-kilimanjaro-rubbish-again/

http://wattsupwiththat.com/2009/10/08/antarcticas-ice-story-has-been-put-on-ice/

http://wattsupwiththat.com/2009/08/10/new-study-casts-doubt-on-himalayan-glaciers-melting/

http://wattsupwiththat.com/2008/10/14/alaska-glaciers-on-the-rebound/

http://wattsupwiththat.com/2008/01/22/surprise-theres-an-active-volcano-under-antarctic-ice/

Et al. . . .
http://wattsupwiththat.com/2008/11/04/mount-shastas-glaciers-proxy-for-what/
« Last Edit: December 14, 2009, 11:28:40 AM by Body-by-Guinness »

DougMacG

  • Power User
  • ***
  • Posts: 19447
    • View Profile
Pathological Science, Glacial Melting
« Reply #220 on: December 14, 2009, 12:50:50 PM »
No simple answer except to note that reports come with an agenda. They note the places and periods of ice loss. Net ice loss is expected though, with or without man, as BBG noted, because we are still coming out of an ice age and that is the trend (there was no hockey stick). If the last decade was the warmest or second warmest in our short recorded history, even though no warming occurred during the decade, then continued ice loss is a result, with or without the puny contribution from man.

When ice masses were increasing on Antarctica, that was attributed to global warming as well. Warming supposedly caused excess snowfall in winter, more than could melt in summer.

We saw a similar argument for water levels in the great lakes.  In fact it is amazing how stable they are considering the huge amounts of water that go in and are evaporated every minute, every year.  Other factors there may tell the story, such as usage of water taken from the incoming stream sources.  

After Climategate it is really hard to know temps or changes etc. with any real accuracy. At best I think we see only the smallest of samples, tweaked results, and desired, catastrophic conclusions. Still, we are only seeing temperature variances of tenths of a degree Celsius over a millennium. Those who say they can notice a difference in local weather from their childhood deceive themselves. Forecast here for tonight is -7 F (-22 C). A century ago, that would be -22 C and a fraction. You won't convince me that a person or polar bear can tell the difference. A glacier may know the difference over time, but it is still a sample of the planet.

As Kilimanjaro and Himalayan reports indicate, perhaps it was unique weather patterns of the past that allowed that glacier to last unusually long as the rest of the remnants of the ice age around it have disappeared.

What's missing in this, though, is man's tiny part in it. The melting-glaciers argument implies that the melting is caused by us. The burden has to go back to those who make that implication. What would the temp be if not for man-made CO2? The answer is that man's contribution, probably between 0.1% and 2.0% of the warming, is well within the existing margin of error, so the glaciers would be melting anyway.

Also flawed are any projections forward, because of the flawed assumptions in the model(s). Will the next decade be warmer or colder than the last 10 years? We don't even know that, except to know that the models were wrong for the last decade and roughly 5 of the last 10 decades if today's models had been used then.
-----
In my last 'debate' on this topic with a liberal friend, he posed that: the earth is warming (and man is causing it) or it's not. I insisted on changing the debate to: the science is settled on this or it's not. It didn't take many of the studies posted on this forum to 'prove' that the science is not settled, which leaves all other questions unanswered.

Water vapor is another emission, rarely mentioned, from burning hydrocarbons. Water vapor is a major greenhouse gas, but cloud cover has offsetting characteristics.

And no one has proven whether CO2 causes warming or if warmer air simply holds more CO2, which is also true.
-----
If I ever see a comprehensive global ice-mass-loss chart, not selected places and periods, I would like to compare it with a chart of ice loss without man's involvement, or ice loss after implementing Kyoto / Copenhagen / economic collapse or whatever else they are proposing to do.
« Last Edit: December 14, 2009, 01:00:21 PM by DougMacG »

Body-by-Guinness

  • Guest
CRU Web Data Removed
« Reply #221 on: December 14, 2009, 01:08:25 PM »
2nd post:

CRU web data has been taken down. Though it could be a server issue, speculation is that it's been removed because of the intense scrutiny it's been receiving of late. More here:

http://wattsupwiththat.com/2009/12/14/whats-going-on-cru-takes-down-briffa-tree-ring-data-and-more/

Body-by-Guinness

  • Guest
Panic Monger of Many Hats
« Reply #222 on: December 14, 2009, 01:13:09 PM »
3rd post. For those who incessantly mewl that anyone who disputes AGW is in the pocket of big oil, here's what truly tainted associations really look like. Let's see if the whiners have the rigor and consistency to apply the standard they harp on to folks on their side of the aisle:

A busy man
Posted by Richard Monday, December 14, 2009 IPCC, Pachauri
Seek and ye will find. Our friendly part-time chairman of the IPCC, Dr Rajendra Kumar Pachauri, is quite a remarkable man. As well as his onerous post with the UN's IPCC, it seems he has a considerable number of other interests.

Dr Pachauri's main day job is, of course, Director-General of The Energy Research Institute (TERI) - which he has held since April 2001, having become its Director and head in 1981 when it was the Tata Energy Research Institute.

Intriguingly, for such an upstanding public servant, he is also a strategic advisor to the private equity investment firm Pegasus Capital Advisors LP, a post he took up in February of this year. However, this is by no means Dr Pachauri's only foray into the world of finance. In December 2007, he became a member of the Senior Advisory Board of Siderian ventures based in San Francisco.

This is a venture capital business owned by the Dutch multinational business incubator and operator in sustainable technology, Tendris Holding, itself part-owned by electronics giant Philips, which acquired a minority interest in January 2009 in order to "explore new business opportunities in the area of sustainability." As a member of the Senior Advisory Board of Siderian, Dr Pachauri is expected to provide the Fund and its portfolio companies "with access, standing and industry exposure at the highest level."

In June 2008, Dr Pachauri became a member of the Board of the Nordic bank Glitnir, which that year launched the The Sustainable Future Fund, Iceland, a new savings account "designed to help the environment." Then, the fund was expected to accumulate up to €4 million within a few years, thus becoming one of the largest private funds supporting research into sustainable development. That same month of June 2008, Dr Pachauri also became Chairman of the Indochina Sustainable Infrastructure Fund. Under its CEO Rick Mayo Smith, it was looking to raise at least $100 billion from the private sector.

April 2008 was also a busy month, when Dr Pachauri joined the Board of the Credit Suisse Research Institute, Zurich and became a member of the Advisory Group for the Rockefeller Foundation, USA. Then, in May he became a member of the Board of the International Risk Governance Council in Geneva. This, despite its name, is primarily concerned with the promotion of bioenergy, drawing funding from electricity giants EON and EDF. He also became Chairman and Member of the Advisory Group at the Asian Development Bank that May.

However, Dr Pachauri also keeps some ties with his roots. In his capacity as a former railway engineer, he is a member of the Policy Advisory Panel for the French national railway system, SNCF and has been since April 2007. Long before that, Dr Pachauri became the President of the Asian Energy Institute, a position he took on in 1992.

One of his most interesting - and possibly contentious - positions, however, is as "scientific advisor" to GloriOil Limited. This is a company he set up himself in late 2005 - two years after he had become chairman of the IPCC. He is described as its "founder". It was set up in Houston, Texas, to exploit patented processes developed by TERI - of which he is Director-General - known as "microbial enhanced oil recovery" (MEOR) technology, designed to improve the production of mature oilfields. When it comes to "Big Oil", Dr Pachauri is very much part of the industry.

Running with the hare and hunting with the hounds, though, he also serves as Director of the Institute for Global Environmental Strategies, Japan. And, crucially, he is a Member of the External Advisory Board of Chicago Climate Exchange, Inc. This exchange is North America's only cap and trade system for all six greenhouse gases, with global affiliates and projects, brokering carbon credits worldwide.

Yet Dr Pachauri is also a member of FEOP (Far East Oil Price) Advisory Board, managed by the Oil Trade Associates, Singapore, from April 1997 onwards. This is part of the so-called ForwardMarketCurve - a "ground breaking, all-broker methodology for achieving robust and accurate price discovery in forward commodity markets" - especially, as the name would imply, oil and petroleum products.

Very much in an allied field, he served as a Member of the International Advisory Board of Toyota Motors until 31 March 2009. Now he is a Member of the Climate Change Advisory Board of Deutsche Bank AG. He served as Chairman of the International Association for Energy Economics from 1989 to 1990.

He served as an Independent Director of NTPC Ltd (National Thermal Power Corp), from 30 January 2006 to January 2009. This is the largest power generation company in India. Before that, he served as non-official Part-time Director of NTPC Ltd, from August 2002 to August 2005. He was also a Director of the Indian Oil Corporation - India's largest commercial enterprise - until 28 August 2003. Dr Pachauri then served as a Director of Gail India Ltd, India's largest natural gas transportation company, from August 2003 to 26 October 2004.

Dr Pachauri serves as Member of National Environmental Council, Government of India under the Chairmanship of the Prime Minister of India. He also serves as a member of the lobbying organisations, the International Solar Energy Society, the World Resources Institute and the World Energy Council. He has been member of the Economic Advisory Council to the Prime Minister of India since July 2001 and also serves as Member of the Oil Industry Restructuring Group, for the Ministry of Petroleum and Natural Gas, Government of India.

Dr Pachauri also served as Director of Consulting and Applied Research Division at the Administrative Staff College of India, Hyderabad. He served as Visiting Professor, Resource Economics at the College of Mineral and Energy Resources, West Virginia University. He was also a Senior Visiting Fellow of Resource Systems Institute, East - West Center, USA. He is a Visiting Research Fellow at The World Bank, Washington, DC and McCluskey Fellow at the Yale School of Forestry & Environmental Studies, Yale University.

One wonders how he finds the time to save the planet. James Delingpole wonders too.

http://eureferendum.blogspot.com/2009/12/busy-man.html

Body-by-Guinness

  • Guest
Hokey Sticks
« Reply #223 on: December 14, 2009, 06:59:47 PM »
4th post or something. More Australian temp records that don't show gross warming until they are homogenized or otherwise aggregated:

http://joannenova.com.au/2009/12/smoking-guns-across-australia-wheres-the-warming/

Body-by-Guinness

  • Guest
Simply Dismissed Concerns
« Reply #224 on: December 14, 2009, 07:05:15 PM »
And a 5th post. It's not like the CRU issues were only noticed recently. Check out this missive written nearly 5 years ago:

January 17, 2005
Chris Landsea Leaves IPCC

Posted to Author: Others | Climate Change | Science Policy: General
This is an open letter to the community from Chris Landsea.

Dear colleagues,

After some prolonged deliberation, I have decided to withdraw from participating in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). I am withdrawing because I have come to view the part of the IPCC to which my expertise is relevant as having become politicized. In addition, when I have raised my concerns to the IPCC leadership, their response was simply to dismiss my concerns.

With this open letter to the community, I wish to explain the basis for my decision and bring awareness to what I view as a problem in the IPCC process. The IPCC is a group of climate researchers from around the world that every few years summarize how climate is changing and how it may be altered in the future due to manmade global warming. I had served both as an author for the Observations chapter and a Reviewer for the 2nd Assessment Report in 1995 and the 3rd Assessment Report in 2001, primarily on the topic of tropical cyclones (hurricanes and typhoons). My work on hurricanes, and tropical cyclones more generally, has been widely cited by the IPCC. For the upcoming AR4, I was asked several weeks ago by the Observations chapter Lead Author - Dr. Kevin Trenberth - to provide the writeup for Atlantic hurricanes. As I had in the past, I agreed to assist the IPCC in what I thought was to be an important, and politically-neutral determination of what is happening with our climate.

Shortly after Dr. Trenberth requested that I draft the Atlantic hurricane section for the AR4's Observations chapter, Dr. Trenberth participated in a press conference organized by scientists at Harvard on the topic "Experts to warn global warming likely to continue spurring more outbreaks of intense hurricane activity" along with other media interviews on the topic. The result of this media interaction was widespread coverage that directly attributed the very busy 2004 Atlantic hurricane season to anthropogenic greenhouse gas warming occurring today. Listening to and reading transcripts of this press conference and media interviews, it is apparent that Dr. Trenberth was being accurately quoted and summarized in such statements and was not being misrepresented in the media. These media sessions have the potential to result in a widespread perception that global warming has made recent hurricane activity much more severe.

I found it a bit perplexing that the participants in the Harvard press conference had come to the conclusion that global warming was impacting hurricane activity today. To my knowledge, none of the participants in that press conference had performed any research on hurricane variability, nor were they reporting on any new work in the field. All previous and current research in the area of hurricane variability has shown no reliable, long-term trend up in the frequency or intensity of tropical cyclones, either in the Atlantic or any other basin. The IPCC assessments in 1995 and 2001 also concluded that there was no global warming signal found in the hurricane record.

Moreover, the evidence is quite strong and supported by the most recent credible studies that any impact in the future from global warming upon hurricanes will likely be quite small. The latest results from the Geophysical Fluid Dynamics Laboratory (Knutson and Tuleya, Journal of Climate, 2004) suggest that by around 2080, hurricanes may have winds and rainfall about 5% more intense than today. It has been proposed that even this tiny change may be an exaggeration as to what may happen by the end of the 21st Century (Michaels, Knappenberger, and Landsea, Journal of Climate, 2005, submitted).

It is beyond me why my colleagues would utilize the media to push an unsupported agenda that recent hurricane activity has been due to global warming. Given Dr. Trenberth’s role as the IPCC’s Lead Author responsible for preparing the text on hurricanes, his public statements so far outside of current scientific understanding led me to the concern that it would be very difficult for the IPCC process to proceed objectively with regards to the assessment of hurricane activity. My view is that when people identify themselves as being associated with the IPCC and then make pronouncements far outside current scientific understandings, this will harm the credibility of climate change science and will in the longer term diminish our role in public policy.

My concerns go beyond the actions of Dr. Trenberth and his colleagues to how he and other IPCC officials responded to my concerns. I did caution Dr. Trenberth before the media event and provided him a summary of the current understanding within the hurricane research community. I was disappointed when the IPCC leadership dismissed my concerns when I brought up the misrepresentation of climate science while invoking the authority of the IPCC. Specifically, the IPCC leadership said that Dr. Trenberth was speaking as an individual even though he was introduced in the press conference as an IPCC lead author; I was told that the media was exaggerating or misrepresenting his words, even though the audio from the press conference and interview tells a different story (available on the web directly); and that Dr. Trenberth was accurately reflecting conclusions from the TAR, even though it is quite clear that the TAR stated that there was no connection between global warming and hurricane activity. The IPCC leadership saw nothing to be concerned with in Dr. Trenberth's unfounded pronouncements to the media, despite the supposedly impartial and important role he must undertake as a Lead Author on the upcoming AR4.

It is certainly true that "individual scientists can do what they wish in their own rights", as one of the folks in the IPCC leadership suggested. Differing conclusions and robust debates are certainly crucial to progress in climate science. However, this case is not an honest scientific discussion conducted at a meeting of climate researchers. Instead, a scientist with an important role in the IPCC, representing himself as a Lead Author for the IPCC, has used that position to promulgate to the media and general public his own opinion that the busy 2004 hurricane season was caused by global warming, which is in direct opposition to published research in the field and is counter to conclusions in the TAR. This becomes problematic when I am then asked to provide the draft about observed hurricane activity variations for the AR4 with, ironically, Dr. Trenberth as the Lead Author for this chapter. Because of Dr. Trenberth's pronouncements, the IPCC process on our assessment of these crucial extreme events in our climate system has been subverted and compromised, its neutrality lost. While no one can "tell" scientists what to say or not say (nor am I suggesting that), the IPCC did select Dr. Trenberth as a Lead Author and entrusted him to carry out this duty from a non-biased, neutral point of view. When scientists hold press conferences and speak with the media, much care is needed not to reflect poorly upon the IPCC. It is of more than passing interest to note that Dr. Trenberth, while eager to share his views on global warming and hurricanes with the media, declined to do so at the Climate Variability and Change Conference in January where he made several presentations. Perhaps he was concerned that such speculation – though worthy in his mind of public pronouncements – would not stand up to the scrutiny of fellow climate scientists.

I personally cannot in good faith continue to contribute to a process that I view as both being motivated by pre-conceived agendas and being scientifically unsound. As the IPCC leadership has seen no wrong in Dr. Trenberth's actions and has retained him as a Lead Author for the AR4, I have decided to no longer participate in the IPCC AR4.

Sincerely, Chris Landsea

Attached is the correspondence between myself and key members of the IPCC FAR (download file).
Posted on January 17, 2005 11:39 AM

http://sciencepolicy.colorado.edu/prometheus/archives/science_policy_general/000318chris_landsea_leaves.html

Body-by-Guinness

  • Guest
What's Wrong with Being Homo? IV
« Reply #225 on: December 14, 2009, 08:33:39 PM »
6th post. A record?

Homogenization of temperature series: Part IV

Published by Briggs at 7:16 pm under Bad Stats, Climatology

How much patience do you have left? On and on and on about the fundamentals, and not one word about whether I believe the GHCN Darwin adjustment, as revealed by Eschenbach, is right! OK, one word: no.

There is enough shouting about this around the rest of the ‘net that you don’t need to hear more from me. What is necessary, and why I am spending so much time on this, is a serious examination of the nature of climate change evidence, particularly with regard to temperature reconstructions and homogenizations. So let’s take our time.

Scenario 3: continued

We last learned that if B and A overlap for a period of time, we can model A’s values as a function of B’s. More importantly, we learned the severe limitations and high uncertainty of this approach. If you haven’t read Part III, do so now.

If B and A do not overlap, but we have other stations C, D, E, etc., that do, even if these are far removed from A, we can use them to model A’s values. These stations will be more or less predictive depending on how correlated they are with A (I’m using the word correlated in its plain English sense).

But even if we have dozens of other stations with which to model A, the resulting predictions of A’s missing values must still come attached with healthy predictive error bounds. These bounds must, upon the pain of ignominy, be carried forward in any application that uses A’s values. “Any”, of course, includes estimates of global mean temperature (GMT) or trends at A (trends, we learned last time, are another name for assumed-to-be-true statistical models).

So far as I can tell (with the usual caveat), nobody does this: nobody, that is, carries the error bounds forward. It’s true that the older, classical statistical methods used by Mann et al. do not make carrying error simple, but when we’re talking about billions of dollars, maybe trillions, and the disruption of lives the world over, it’s a good idea not to opt for simplicity when more ideal methods are available.
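To make “carrying the error bounds forward” concrete, here is a minimal sketch on synthetic data (the station values, the linear model, and every number are invented for illustration):

[code]
# Sketch: infill station A from station B, keeping honest predictive bounds.
# All data is synthetic; only the principle (carry the error forward) matters.
import numpy as np

rng = np.random.default_rng(0)
b_overlap = rng.normal(25.0, 2.0, 120)                        # 120 months at B
a_overlap = 0.9 * b_overlap + 1.5 + rng.normal(0, 0.8, 120)   # A, same months

# Model A as a linear function of B over the overlap period.
slope, intercept = np.polyfit(b_overlap, a_overlap, 1)
resid = a_overlap - (slope * b_overlap + intercept)
sigma = resid.std(ddof=2)            # residual scatter of the fitted model

# Predict one missing month at A from B's reading, WITH an error bound.
b_new = 27.3
a_hat = slope * b_new + intercept
print(f"A ~ {a_hat:.2f} +/- {2 * sigma:.2f} C (rough 95% bound)")
# A full prediction interval is wider still (it adds parameter uncertainty),
# and any GMT or trend computed from a_hat must carry this bound along.
[/code]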

Need I say what the result of the simplistic approach is?

Yes, I do. Too much certainty!

An incidental: For a while, some meteorologists/climatologists searched the world for teleconnections. They would pick an A and then search B, C, D, …, for a station with the highest correlation to A. A station in Peoria might have a high correlation with one in Tibet, for example. These statistical tea leaves were much peered over. The results were not entirely useless—some planetary-scale features will show up, well, all over the planet—but it was too easy to find something that wasn’t there.
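How easy? A hedged sketch (pure noise in, impressive-looking correlation out) of the multiple-comparisons trap behind spurious teleconnections:

[code]
# Sketch: search enough unrelated "stations" and one will correlate with A
# by luck alone. All series below are pure noise.
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=50)                    # 50 years of noise at station A
candidates = rng.normal(size=(500, 50))    # 500 unrelated stations, also noise

corrs = [np.corrcoef(a, c)[0, 1] for c in candidates]
best = max(corrs, key=abs)
print(f"best |r| among 500 random stations: {abs(best):.2f}")  # typically ~0.45
[/code]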

Scenario 4: missing values, measurement error, and changes in instrumentation

Occasionally, values at A will go missing. Thermometers break, people who record temperatures go on vacation, accidents happen. These missing values can be guessed at in exactly the same way as outlined in Scenario 3. Which is to say, they are modeled. And with models comes uncertainty, etc., etc. Enough of that.

Sometimes instruments do not pop off all at once, but degrade slowly. They work fine for a while but become miscalibrated in some manner. That is, at some locations the temperatures (and other meteorological variables) are measured with error. If we catch this error, we can quantify it, which means we can apply a model to the observed values to “correct” them.

But did you catch the word model? That’s right: more uncertainty, more error bounds, which must always, etc., etc., etc.

What’s worse is that we suspect there are many times we do not catch the measurement error, and we glibly use the observed values as if they were 100% accurate. Like a cook with the flu using day-old fish, we can’t smell the rank odor, but hope the sauce will save us. The sauces here are the data uses, like GMT or trend estimates, that rely on the mistaken observations. (Strained metaphor, anybody? Leave me alone. You get the idea.)

A fishy smell

Now, miscalibration and measurement error are certainly less common the more recent the observations. What is bizarre is that, in the revelations so far, the “corrections” and “homogenizations” are more strongly applied to the most recent values, id est, those values in which we have the most confidence! The older, historical observations, about which we know a hell of a lot less, are hardly touched, or not adjusted at all.

Why is that?

But, wait! Don’t answer yet! Because you also get this fine empirical fact, absolutely free: The instruments used in the days of yore were many times poorer than their modern-day equivalents: they were less accurate, had slower response times, etc. Which means, of course, that they are less trustworthy. Yet, it appears, these are the most trusted in the homogenizations.

So now answer our question: why are the modern values adjusted (upwards!) more than the historical ones?

The grand finale.

If you answered “It’s the urbanization, stupid!”, then you have admitted that you did not read, or did not understand, Part I.

As others have been saying, there is evidence that some people have been diddling with the numbers, cajoling them so that they conform to certain pre-conceived views.

Maybe this is not so, and it is instead true that everybody was scrupulously honest. At the very least, then, a certain CRU has some fast talking to do.

But even if they manage to give a proper account of themselves, they must concede that there are alternate explanations for the data, such as provided in this guide. And while they might downplay the concerns outlined here, they must admit that the uncertainties are greater than what have been so far publicly stated.

Which is all we skeptics ever wanted.

http://wmbriggs.com/blog/?p=1477

DougMacG

  • Power User
  • ***
  • Posts: 19447
    • View Profile
Pathological Science, Ice Melting
« Reply #226 on: December 14, 2009, 10:30:26 PM »
Two follow-up points that I think weaken the melting ice masses argument. One is a post from 2006, via UPI and the BBC, about a Dutch-British research team analyzing radar altimetry data gathered by the European Space Agency's ERS-2 satellite, which discovered that in spite of Arctic ice mass loss, the Arctic Ocean's level is falling. That is not what they expected, and they have no idea why.  http://www.physorg.com/news69600618.html

Second is the conflicting info coming out of Antarctica.  I suppose it depends on where you look and when. This report indicates that Antarctica contains 90% of the world's land based ice and at least at that writing it was gaining ice mass: http://www.ecoworld.com/climate/antarcticas-ice-mass-is-it-really-losing-ice-gaining-ice-or-both.html

“In the March 25 2008 issue of EOS, there was a News item by Marco Tedesco titled “Updated 2008 Surface snowmelt Trends in Antarctica” (subscribers only). It reports the following:

Surface snowmelt in Antarctica in 2008, as derived from spaceborne passive microwave observations at 19.35 gigahertz, was 40% below the average of the period 1987–2007. The melting index (MI, a measure of where melting occurred and for how long) in 2008 was the second-smallest value in the 1987–2008 period, with 3,465,625 square kilometers times days (km2 × days) against the average value of 8,407,531 km2 × days (Figure 1a). Melt extent (ME, the extent of the area subject to melting) in 2008 set a new minimum with 297,500 square kilometers, against an average value of approximately 861,812 square kilometers.”

This evidence suggests that Antarctica, where 90% of the land based ice in the world resides, is increasing in mass. And this fact is ignored or downplayed in virtually every mainstream report available today, and indeed the mainstream press continues to imply that Antarctica is melting at an alarming rate. But on balance, the ice mass in Antarctica is not melting, it is probably getting bigger.

Other links: http://www.physorg.com/news4180.html   East Antarctic Ice Sheet Gains Mass and Slows Sea Level Rise, Study Finds

http://climatesci.org/2008/04/07/recent-data-on-surface-snowmelt-in-antarctica/  Recent Data On Surface Snowmelt In Antarctica

This site: http://www.skepticalscience.com/antarctica-gaining-ice.htm intending to debunk skeptics finds a 3 year trend of land ice decreasing and a 30 year trend of sea ice increasing.

My advice, Crafty, unfortunately, is not to look for simple answers to climate questions.

Body-by-Guinness

  • Guest
Budgeting Error & Homogenizing Fiction, I
« Reply #227 on: December 15, 2009, 06:57:39 AM »
Brilliant piece using Al Gore's gravity metaphor to demonstrate just how little the AGW crowd knows about budgeting error:


How Not To Create A Historic Global Temp Index
Published by AJStrata at 11:53 am under All General Discussions, CRU Climategate, Global Warming

I was having this minor debate on the fallacies of alarmists’ science in the comment section of Air Vent and decided I needed to post on this because the infant climate science (and it is in its infancy) is making some fundamental mistakes.

The momentum that built up behind the man-made global warming fad (and it is nothing more than an unproven hypothesis surrounded by a silly fan club) has not allowed the basic approach to be tested or challenged. You had a movement build up around an idea, which launched the idea into ‘established fact’ before the idea was validated. It probably will never be validated because the methodology is flawed to its core – as I will explain in painful detail.

To create a global temperature index for the past 30 years – and then project that back to the 1880's (when global temp records began) and then project it back centuries before that – is not trivial. And in my opinion the current approach is just plain wrong.

I take this position as someone who works ‘in space’ – where we have complex and interrelated models of all sorts of physical processes. And yet we have to keep refining the models to fit the data to do what we do. Climate science naively (and ignorantly in my mind) does just the opposite; it keeps adjusting the data (for no good reason) until they get the result they want!

For this comparative exercise Al Gore, a genius in his own mind, provides the perfect analogy – gravity. Yes Al, it’s there. But we still can’t predict how a body will travel through the atmosphere or space to an accuracy that is stable beyond a few seconds (for the atmosphere) or days (for Earth orbits). Our window of certainty is not months, seasons, years, decades, centuries or millennia. And yet gravity is very well understood and simple mathematically.

Add to that the fact our measurement systems for space systems blow away those being used by alarmists, who claim a science fiction level of accuracy in measurement and prediction. Maybe that is why they have a cult following instead of scientific proof?

In Earth’s orbit a satellite is pretty much free of atmosphere and can fly for 10-20 years. But it cannot keep its position (for GEO birds) or we cannot know its position (for GEO, MEO and LEO birds) without constantly taking measurements and correcting the models. (Sorry, geeked out there for a moment: LEO = Low Earth Orbit; MEO = Mid Earth Orbit; GEO = Geosynchronous Earth Orbit, which stays above one spot on Earth at very high altitude.)

While gravity is well understood, it is not the only factor working on the satellite’s flight path. After nearly half a century of exploring space we have unraveled some of the factors (sunshine pushing on the satellite, heat escaping causing a small thrust, an atmosphere which expands and contracts daily and seasonally, solar flares, etc). We cannot accurately model these beyond a few days. After about 7 days these forces build up enough change in the orbit in completely random ways that we have to remeasure the orbit and compute another prediction.

Gravity is simple, but we cannot predict out beyond a week with any accuracy.

For satellite orbits it would make no sense at all to ‘adjust’ the data to fit a curve as the alarmists do for temperature. If a data point is bad it is either consumed inside a sea of good data points or rejected because we have a sea of good data points to use. If there are sufficient data points you don’t adjust the data – bottom line. Either you have enough data to draw a conclusion or you don’t. You don’t make up data to fill your need either.

If the rocket scientists can only predict the path of an object orbiting the globe for 7 days, what sane person thinks a hodge-podge of randomly accurate and aging sensors around the globe can measure a global index, let alone predict the future or unravel the past? It cannot. But what ‘scientists’ do to the data to pretend they can is downright silly.

They make adjustments, homogenize stations, or fill in grids with pretend stations. A total unscientific joke. The measurement is the measurement. It has a fixed accuracy and uncertainty. Each station has a unique accuracy and uncertainty due to its siting, technology and the accuracy with which its readings are made each day. (Geeked out again: if there is a drift in the time when you read the sensor, or that sensor’s reference to UTC (the worldwide time reference) is unknown or dynamic, then you increase the error bars on the measurement.) If the sensor is sited wrong or has problems, you extend the error bars. You DON’T adjust the data!

(Sorry, geeked out again. Error bars are the range in which the real world value could be. If I measure an orbit to an accuracy of +/- 50 meters, then I know whatever number comes out of my system is not exact. Reality is in that 100 meter range around the value. Statistically, all I know is the satellite is in that range; the value I compute only centers the box it can be found in.)

You cannot adjust data to remove error or increase accuracy on a single sensor! If you follow a regular regime of calibrating the sensor against a known source, you can remove BIAS. But that is all. What we can do with sensor nets, when we combine them, is remove some error by comparing measurements that overlap in time and space. But that again is something totally different from what the global scientists think they are doing.

Another example: moving stations. When a temperature station is moved it should simply become a new station at that point in time, with a new set of siting errors (and a new accuracy if the sensor is upgraded). It has a different time window than its previous incarnation – it is a new data set. When I see crap like this I realize these people are just not up to this kind of complex analysis.

Before: [graph: raw temperature records from three neighboring stations]

After: [graph: the single “homogenized” virtual station produced from them]

You don’t ‘homogenize’ neighboring stations into a mythical (and fictional) virtual station. That is just clueless! And there is no need to. When that happens, start a new data set. Those stations measured real temperatures, as shown in the top graph. They are three independent data sets with fixed attributes for the locale. Whatever that mess is in the bottom graph, it is nothing more than shoddy modeling. It destroys the historic record and replaces it with someone’s poor mathematical skills or scientific understanding.
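A minimal sketch of the bookkeeping being argued for here (the class names, move date, and error figures are hypothetical, not any agency's actual schema):

[code]
# Sketch: when a station moves, close the old series and open a new one
# with its own siting errors -- never splice the two with an "adjustment".
from dataclasses import dataclass, field

@dataclass
class StationSeries:
    series_id: str
    site: str                # siting description; drives the error bars
    sensor_error_c: float    # fixed accuracy of this incarnation, in deg C
    readings: list = field(default_factory=list)   # (date, temp_c) pairs

def move_station(old: StationSeries, new_site: str,
                 new_error_c: float) -> StationSeries:
    """The move starts a brand-new data set; the old one stays untouched."""
    return StationSeries(old.series_id + "_v2", new_site, new_error_c)

v1 = StationSeries("example_stn", "post office roof", 0.5)
v1.readings.append(("1941-01-01", 29.1))
v2 = move_station(v1, "airfield", 0.3)     # hypothetical relocation
v2.readings.append(("1941-02-01", 28.7))
# Trends are computed per series, each with its own error bars; there is
# nothing to homogenize into a fictional virtual station.
[/code]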

I mean think of what that graph says in my world. If I had measurements of the moon’s position in the night sky from these three points I could reproduce the Moon’s orbit. But what happens in that second ‘adjusted’ graph is silly. I would be changing the measured position of the Moon for two ‘adjusted’ stations to make it closer to the first station – while not moving the two stations physically! They would produce a lunar vector similar to the others, but did I really move the Moon? Of course not; all I did was insert a lot of error. Now my calculation of the Moon’s position over that period does not reflect reality (or the established gravitational model). The question is, does it fit someone’s half-cocked new theory of gravity – as yet unproven!

Basically, what alarmists needed to do was not adjust data; they needed to create a thermal atmosphere model which would take into account siting characteristics both local and large. This would include distance from large bodies of water, altitude, latitude, etc. A three dimensional model that would explain why various stations have their unique siting profiles and temperature records. It would explain why temperatures near oceans fluctuate less than at stations 100-200 miles inland. It would show how a global average increase of 1°C would result in a .6°C increase at high latitudes or altitudes. It would EXPLAIN the data variations in the measurements.

But we don’t have this model. Alarmists cannot explain with accuracy why stations 10 miles apart show different temperature profiles each and every day of the year. So they pretend to know how to ‘adjust’ the data and their groupies applaud them for their brilliance. Yet the result, like my Moon example, is they simply lost sight of reality.

Another irresponsible gimmick is creating mythical stations in grids without measurements. As we all know, the temperature in a town or city 20 miles away can be totally different from that in our home town. Just bring up a local weather map. As weather fronts move through, the dynamics over a region are dramatic. These changes happen all year, at any time of day. 20 miles down the road things can be totally different.

Yet the CRU and others create fictional stations 750 kilometers away from the nearest data point – as if that makes any sense at all. There is no data in these regions – don’t make up data and call it truth! No data means no measurement.

In my world we can interpolate a trend forward to fill measurement gaps. For example we don’t measure each point on the orbit, we measure a couple of times a day to get a set of points on the orbit from which we can derive an accurate orbit curve. Because gravity is so damn simple we have incredibly high confidence in those computed positions in the measurement gaps. But as I said, they decay over time (5-7 days). If we don’t remeasure, the errors increase with time.


Body-by-Guinness

  • Guest
Budgeting Error & Homogenizing Fiction, II
« Reply #228 on: December 15, 2009, 06:58:05 AM »
Global climate is nowhere near as simple as gravity. Its predictability decays rapidly over distance and time, just as ballistic flight through the atmosphere is unpredictable over distance and time, no matter how predictable gravity is. If they wanted to validate that 3-D model I proposed, they would predict temperatures in regions without measurements and then go measure them to see if their model was right. You don’t mix models and raw data – that is just wrong (though that is the essence of Michael Mann’s “Nature Trick”).

Another disturbing problem with climate science is identification of error and uncertainty. In the alarmists’ world there is no degradation over distance or time – which makes their results pure science fiction. If they had a reviewed, verified and defendable error budget they could move from fiction to science. They would also understand why their conclusions are standing on seriously shaky ground.

OK, going full geek here. An error budget shows how much error is added to the final computed number at each stage of its processing from the base measurement. For the global warming problem this budget covers everything from the point a temperature is measured at a station to the point a global annual index is derived, and it must contain the following error steps:

1. Measurement error: all the errors associated with taking the raw measurement. This includes the sensor accuracy and biases (if unknown or measured these become noise), siting induced errors, time of measurement induced errors (one must take the measurement the same time every day to within a certain tolerance to create a consistent historic or annual record).

2. Local geographic error: A sensor measurement like temperature is only good for a certain distance. The farther you move from it the more the accuracy degrades as the error increases. No one has demonstrated the distance a single station’s measurement can be considered valid. At this stage we have a raw station temperature set from specified times of day.

3. Station integration error: when you take data from two or more stations to create a regional index, you must integrate the first two error sources described above and carry that to the integrated station level for a region or grid. In some systems combining sensors can increase accuracy. Land temperature sensor nets are not one of those kinds of systems. There are too many factors due to siting and distance (the temp decay problem) to increase accuracy. To do that you would need to have sensors located geographically close (under 5 miles I would estimate) to actually remove sensor and siting errors. At this stage we have a local regional data set (more than one station).

4. Day-to-Month integration errors: Temperatures are taken daily at fixed times and then integrated to make a daytime and nighttime index. Then these daily indices are integrated into monthly indices. The error from the daily computations must then be integrated and added onto the monthly index.

5. Month-to-year integration errors: The AGW alarmists need to create a historic record, so they look at a yearly index (CRU actually looks at the 4 seasons first, then integrates). Whatever the methodology, there will be additional error introduced to create an annual index for a geographic region. At this stage we have a local regional data set for a single year.

6. Large geographic integration errors: Finally, you integrate mid-sized regions from step 3 above into data sets for countries or hemispheres or the globe. Again, we are compounding the errors from the previous steps – some offset, some don’t. They all have to be accounted for – no hand waving!

Each local region going into step 3 has its own unique set of errors due to the nature of the errors defined in steps 1 & 2. From step 3 on you have a homogenized set of local regions, which have errors integrated in a consistent manner as we move from daily measurements to monthly and annual (steps 4 & 5). Finally we have a consistent method to capture additional errors as we integrate up to cover the globe (step 6).
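A minimal sketch of what writing such a budget down forces you to confront (every magnitude below is an invented placeholder, not a real station figure): independent terms combine in quadrature, and correlated terms must be carried at full value.

[code]
# Sketch: propagate an error budget from a single reading up to an annual
# regional/global index. All magnitudes are made-up placeholders.
import math

budget_c = {
    "1. measurement (sensor + siting + time-of-obs)": 0.50,
    "2. local geographic representativeness":         0.40,
    "3. station-to-region integration":               0.30,
    "4. daily-to-monthly integration":                0.15,
    "5. monthly-to-annual integration":               0.10,
    "6. region-to-globe integration":                 0.25,
}

# Independent error terms combine as a root sum of squares (RSS).
total = math.sqrt(sum(v * v for v in budget_c.values()))
for step, err in budget_c.items():
    print(f"{step:<50s} +/- {err:.2f} C")
print(f"{'TOTAL (RSS, independence assumed)':<50s} +/- {total:.2f} C")
# Correlated terms (e.g., a network-wide time-of-observation change) do not
# cancel this way and must be added at full value instead.
[/code]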

A defendable error budget is an obvious requirement for any number spewed out by the alarmists. Without it the numbers mean nothing. In my business we use these budgets to fly rockets (atmosphere) and spacecraft safely. We use them to understand when we need to remeasure and recompute a new predict. If space programs did not have a handle on this we could not fly through Al Gore’s gravity field and Earth’s atmosphere. The fact is, for launch and ascent, because the error in position can increase so quickly, we measure and adjust the guidance at incredible rates to make it into orbit safely.

We have to. We cannot adjust the data, we have to adjust to the data.

Now what I presented above is just the errors in making a measurement today for one year. What happens when you go back in time? Well you have to recompute the error budget for each station for each year. What you should see (if done correctly) is rapidly increasing error bars as the technological accuracy is lost as we go back in time. The errors in step 1 would start to increase by orders of magnitude.

But if you look at the silly claims of NCDC, GISS and CRU you see very small changes in uncertainty going back in time – proof positive they screwed up their error budget. One of my first posts on Climategate was on errors in climate estimates over time, and I used space exploration again as the example.

I used the accuracy known as ‘image resolution’ as the example everyone can relate to (more pixels, more detail; less error or blur). I used two pictures of Mars to demonstrate the state-of-the-art capabilities of humankind separated by ~50 years. First was an image taken in 1956 from the Mt Wilson Observatory:

[image: 1956 Mt Wilson Observatory photograph of Mars]

Second was taken by the Hubble Space Telescope in 2001:

[image: 2001 Hubble Space Telescope photograph of Mars]
We can all see the effect going back in time has on accuracy and error. In 1880, when the global temperature record began, humans were drawing Mars, not photographing it.

The CRU data dump made public a very interesting document. It was an early attempt at an error budget, though it does not show the steps, just their initial estimate of a bottom line. Also interestingly enough, they computed it for 1969. The following graph is from that document and proves (per CRU) that the current temperature reconstructions are way too imprecise to confirm the warming claims of the IPCC and other alarmists (click to enlarge):

[graph: CRU-computed sampling error for 1969, in °C]
The title of this graph indicates this is the CRU-computed sampling (measurement) error, in °C, for 1969. It clearly shows much of the global temperatures for 1969 are +/- 1°C or more. Which means that until our current temperature rises well above 1°C over that computed for 1969, we are statistically experiencing the same temps as back then. And we know these error estimates will have to grow as we go back another 80 years to the 1880’s, let alone even farther back in time.

I don’t even think the CRU estimates are right and complete, but I do know they alone disprove the IPCC claim that there has been a 0.8°C rise in temperature over the last century, mostly due to human activities. The data cannot make that determination.
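As a sanity check on that claim, here is the arithmetic as a sketch, under an assumption that is mine and not CRU's: treat the +/- 1°C on the map as a 2-sigma bound and assume a similar error for the recent epoch.

[code]
# Sketch: is a 0.8 C rise resolvable between two epochs that each carry
# roughly +/- 1 C of sampling error? (Treating +/- 1 C as 2-sigma is my
# assumption for illustration, not a figure from CRU.)
import math

sigma_1969 = 1.0 / 2      # 1-sigma for the 1969 epoch
sigma_now = 1.0 / 2       # assumed similar error for the recent epoch
sigma_diff = math.sqrt(sigma_1969**2 + sigma_now**2)  # error of a difference

rise = 0.8
print(f"claimed rise: {rise:.1f} C; 2-sigma on the difference: "
      f"{2 * sigma_diff:.2f} C")
# 0.8 < 1.41, so under these assumptions the rise sits inside the noise band.
[/code]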

Prior to 1880 there are no real global temperature records, so scientists tried to find proxies. One good proxy is ice cores, which capture the chemical composition of the snow and ice going back thousands and thousands of years. Chemical signatures are very accurately tied to temperature since these are physical processes. No surprise, but the ice cores show no significant warming today. Instead, these ice cores show many warmer periods in the history of humankind. (Update: WUWT has more ice core perspective. – end update)

Therefore Mann and Jones and other alarmists went to a much less reliable measure of historic temperature – tree rings. Tree rings are affected by a lot of factors, the least of which is temperature (after a certain minimum has been attained to activate its growth processes). Tree growth depends on sunshine, nutrients, water and the number of days above the optimal temperature. A tree ring should show the same growth under 30 days of 40°F temps with plenty of moisture (afternoon showers) and nutrients as it would under 30 days of 55°F temps and the same conditions. Trees are not thermometers.

Using a living organism to measure temperature is dodgy compared to the physical chemistry used with ice cores. The error bars on a tree ring mapped to a temperature range (and it can only be mapped to a range, not a value) are huge. But the alarmists don’t do proper science; they run statistics until they get the answer they like, then throw out the error bars as if they are meaningless. There is no way for tree rings to define any historic temperature value. Therefore claims that the MWP or Roman Period were a degree or two warmer or colder based on trees are all bunk.

This post is too long and too geeky already, but I need to note that there are few people in the world capable of discerning which scientific argument is more sound. No journalist or politician can discern whether I am right or wrong. Al, give it up baby. You did not invent the internet (I know those who did) and you have a 3rd grade grasp of science (and I have two 4th graders who can prove it). I suggest you stay away from any debates outside talking to journalists. They never know when you drop one of your classic whoppers of ignorance.

The sad fact is the science behind man-made global warming is not good science. It is rather pathetic actually. I work with premier scientists and in fact review their missions for feasibility to return the results advertised. I would fail this mess without a second thought.

In addition, you cannot leave the verification of man-made global warming (AGW) in the hands of those whose careers and credibility rest on AGW being proven to be true. When you do, you get those questionable ‘adjustments’ that turn raw temp data (processed to at least step 5 in the error budget above) into something completely different. For example, here is my version of a classic graph now making its rounds on the internet (click to enlarge):

[graph: raw (blue) vs. adjusted (red) Northern Australia temperatures overlaid on IPCC model projections]
What it shows with the blue dots and lines is the raw temperature data for Northern Australia. This overlays perfectly with what the UN IPCC’s Global Climate Models predict the temperature record for the region would be without AGW (the blue region in the underlying graph). The red dots and lines are what is produced by the alarmists’ ‘adjustments’, and unsurprisingly these line up with the Global Climate Models’ predictions if there is AGW (the reddish region).

Alarmists adjust temp data that magically proves alarmists’ theories, based on alarmists’ models. Impressed? I’m not. I tend more towards disgusted.

The science of global warming is a mess. They have no error budget that proves they can detect the warming they say they have detected. Their tree data is applied wrong, by assuming a temp value when all you can estimate is a range (and the recent divergence of tree rings from current temps just proves trees are lousy indicators of temperature anyway). The alarmists have made all sorts of bogus and indefensible site adjustments and station combinations, while regularly making up stations from thin air to alter (or hide) the real temperature record.

Instead of explaining the data, they adjust the data to meet their explanations. Global climate research has not made it to the professional level of scientific endeavor we see in more established areas of science. If their science were so settled, the supporters could answer these challenges without lifting a finger. But they cannot; instead they play PR games and smear their opponents. Houston, they have a problem.

http://strata-sphere.com/blog/index.php/archives/11824

Body-by-Guinness

  • Guest
Ice Cores Show CO2 Levels Rise Subsequent to Warming
« Reply #229 on: December 15, 2009, 07:39:50 AM »
2nd post:

Carbon rises 800 years after temperatures
Ice cores reveal that CO2 levels rise and fall hundreds of years after temperatures change

[graph: ice core temperature and CO2 records, with CO2 trailing temperature]

In 1985, ice cores extracted from Greenland revealed temperatures and CO2 levels going back 150,000 years. Temperature and CO2 seemed locked together. It was a turning point—the “greenhouse effect” captured attention. But, in 1999 it became clear that carbon dioxide rose and fell after temperatures did. By 2003, we had better data showing the lag was 800 ± 200 years. CO2 was in the back seat.
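For what it's worth, estimating such a lag is mechanically simple once both series are on a common timescale: slide one series against the other and keep the offset with the highest correlation. A sketch on synthetic data (the 800-year lag below is built into the fake series, not derived from real cores):

[code]
# Sketch: recover a known lag from synthetic temperature/CO2 series by
# maximizing lagged correlation. Real ice-core work must also handle uneven
# sampling and gas-age vs. ice-age dating uncertainty, ignored here.
import numpy as np

rng = np.random.default_rng(2)
step_years = 100
temp = np.cumsum(rng.normal(size=3000))        # fake temperature history
co2 = np.roll(temp, 8) + rng.normal(0, 0.5, temp.size)  # CO2 trails by 8 steps

def lagged_r(x, y, k):
    """Correlation of y against x when y is assumed to lag x by k samples."""
    return np.corrcoef(x[:-k], y[k:])[0, 1] if k else np.corrcoef(x, y)[0, 1]

best_k = max(range(0, 20), key=lambda k: lagged_r(temp, co2, k))
print(f"best-fit lag: {best_k * step_years} years")   # 800 on this fake data
[/code]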

AGW replies: There is roughly an 800-year lag. But even if CO2 doesn’t start the warming trend, it amplifies it.

Skeptics say: If CO2 was a major driver, temperatures would rise indefinitely in a runaway greenhouse effect. This hasn’t happened in 500 million years, so either a mystery factor stops the runaway greenhouse effect, or CO2 is a minor force, and the models are missing the dominant driver.

Amplification is speculation. It’s a theory with no evidence that it matters in the real world.

Conclusion:

1. Ice cores don’t prove what caused past warming or cooling. The simplest explanation is that when temperatures rise, more carbon dioxide enters the atmosphere (because as oceans warm they release more CO2).

2. Something else is causing the warming.

Al Gore’s movie was made in 2005. His words about the ice cores were, “it’s complicated.” The lag calls everything about cause and effect into question. There is no way any honest investigation could ignore something so central.

Source: Carbon Dioxide Information Analysis Center http://cdiac.ornl.gov (See references at the bottom also).
A complete set of expanded full size graphs and print quality images is available from my Vostok Page.
Extra notes, references, and discussion about this page

The media blackout on “the lag” continues

The lag in the ice cores is old news to skeptics, but most people in the public still have no idea. This is page 5 of the HTML version of The Skeptics Handbook (the first booklet). I should have posted it long ago. The graph series and data are compelling. It’s one of the most basic features of climate science evidence, and yet it is so misused. Even tonight, I did a radio interview for NewstalkZB, New Zealand, and the pro-climate scare spokesman still referred to both the fraudulent Hockey Stick Graph and the Vostok Ice Cores as if they helped his case.

Between 1999 and 2003 a series of peer reviewed papers in the highest journals came out showing that carbon rises hundreds of years after temperature, and not before. What amazes me is that fully six years after Caillon et al. published their definitive paper in 2003, people still think the ice cores are evidence supporting the scare campaign. “The climate is the most important problem we face,” yet somehow not a single government department, popular science magazine or education department thought it was worth doing a close up of the graph and explaining to the public that there was a definitive, uncontested long lag, and that carbon always followed temperature.

The Al Gore style version (of which there are hundreds on-line, see below) hides the lag by compressing 420,000 years into one picture. If the public had known that temperatures lead carbon dioxide, Al Gore would not have been able to get away with using it the way he did.

[image: a compressed 420,000-year Al Gore-style graph of temperature and CO2, which hides the lag]

In 2008, I marveled that with billions of dollars available to agencies and education campaigns, no one had graphed the lag as a close up. Why did it take an unfunded science communicator to get the data and graph it “as a hobby project”? I wanted to see that long lag; I wanted to be able to point at a graph and explain the lag to all the people who had no idea.

If you want to explore the thousands of years of those famous ice cores, the Vostok page has the full set of graphs, and this page right here is the place to comment and ask questions.

References

Petit et al. 1999 — as the world cools into an ice age, the delay is several thousand years.
Fischer et al. 1999 — described a lag of 600 ± 400 years as the world warms.
Monnin et al. 2001 — Dome Concordia; found a delay of 800 ± 600 years on warming from the recent ice age.
Mudelsee 2001 — over the full 420,000-year Vostok history, CO2 lags by 1,300 ± 1,000 years.
Caillon et al. 2003 — analysed the Vostok data and found a lag of 800 ± 200 years.

http://joannenova.com.au/2009/12/carbon-rises-800-years-after-temperatures/

Body-by-Guinness

  • Guest
Russia States CRU Used Select Climate Data
« Reply #230 on: December 16, 2009, 04:08:01 PM »
This is coming out of a source that translates Russian newspapers and so should be confirmed before too much hay is made, but if true it demonstrates further, serious data massaging.

Russia affected by Climategate

A discussion of the November 2009 Climatic Research Unit e-mail hacking incident, referred to by some sources as "Climategate," continues against the backdrop of the abortive UN Climate Conference in Copenhagen (COP15) discussing alternative agreements to replace the 1997 Kyoto Protocol that aimed to combat global warming.

The incident involved an e-mail server used by the Climatic Research Unit (CRU) at the University of East Anglia (UEA) in Norwich, East England. Unknown persons stole and anonymously disseminated thousands of e-mails and other documents dealing with the global-warming issue, written over the course of 13 years.

Controversy arose after various allegations were made including that climate scientists colluded to withhold scientific evidence and manipulated data to make the case for global warming appear stronger than it is.

ETA: Anthony Watts has more details here:

http://wattsupwiththat.com/2009/12/16/russian-iea-claims-cru-tampered-with-climate-data-cherrypicked-warmest-stations/

Climategate has already affected Russia. On Tuesday, the Moscow-based Institute of Economic Analysis (IEA) issued a report claiming that the Hadley Center for Climate Change based at the headquarters of the British Meteorological Office in Exeter (Devon, England) had probably tampered with Russian-climate data.

The IEA believes that Russian meteorological-station data did not substantiate the anthropogenic global-warming theory.

Analysts say Russian meteorological stations cover most of the country's territory, and that the Hadley Center had used data submitted by only 25% of such stations in its reports.

Over 40% of Russian territory was not included in global-temperature calculations for reasons other than a lack of meteorological stations and observations.

The data of stations located in areas not listed in the Hadley Climate Research Unit Temperature UK (HadCRUT) survey often does not show any substantial warming in the late 20th century and the early 21st century.

The HadCRUT database includes specific stations providing incomplete data that highlight the global-warming process, rather than stations facilitating uninterrupted observations.

On the whole, climatologists use the incomplete findings of meteorological stations far more often than those providing complete observations.

IEA analysts say climatologists use the data of stations located in large populated centers that are influenced by the urban-warming effect more frequently than the correct data of remote stations.

The scale of global warming was exaggerated due to temperature distortions for Russia, which accounts for 12.5% of the world's land mass. The IEA said it was necessary to recalculate all global-temperature data in order to assess the scale of such exaggeration.

Global-temperature data will have to be modified if similar climate-data procedures have been used on other national data, because the calculations used by COP15 analysts, including financial calculations, are based on HadCRUT research.

RIA Novosti is not responsible for the content of outside sources.

ICECAP NOTE: recall it is in the former Soviet Union that the CRU, NOAA and NASA data show the greatest warming. As this story implies, there was no obvious reason for these data centers to be selective about stations. It implies the stations often selected were urban ones and those with incomplete data, providing more opportunity for mischief.

http://icecap.us/images/uploads/BOMBSHELL.pdf

http://icecap.us/
« Last Edit: December 16, 2009, 04:18:33 PM by Body-by-Guinness »

Body-by-Guinness

  • Guest
Torture the Data Long Enough & it will Confess
« Reply #231 on: December 16, 2009, 04:37:01 PM »
2nd post.

Climategate: Something’s Rotten in Denmark … and East Anglia, Asheville, and New York City (PJM Exclusive)
Posted By Joseph D'Aleo On December 15, 2009 @ 1:39 pm In . Column1 02, Science, Science & Technology, US News, World News | 59 Comments

The familiar phrase was spoken by Marcellus in Shakespeare’s Hamlet — first performed around 1600, at the start of the Little Ice Age. “Something is rotten in the state of Denmark” is the exact quote. It recognizes that fish rots from the head down, and it means that all is not well at the top of the political hierarchy. Shakespeare proved to be Nostradamus. Four centuries later — at the start of what could be a new Little Ice Age — the rotting fish is Copenhagen.

The smell in the air may be from the leftover caviar at the banquet tables, or perhaps from the exhaust of 140 private jets and 1200 limousines commissioned by the attendees when they discovered there was to be no global warming evident in Copenhagen. (In fact, the cold will deepen and give way to snow before they leave, an extension of the Gore Effect.)

But the metaphorical stench comes from the well-financed bad science and bad policy, promulgated by the UN, and the complicity of the so-called world leaders, thinking of themselves as modern-day King Canutes (the Viking king of Denmark, England, and Norway — who ironically ruled during the Medieval Warm Period this very group has tried to deny). His flatterers thought his powers “so great, he could command the tides of the sea to go back.”

Unlike the warmists and the compliant media, Canute knew otherwise, and indeed the tide kept rising. Nature will do what nature always did — change.

It’s the data, stupid

If we torture the data long enough, it will confess. (Ronald Coase [1], Nobel Prize for Economic Sciences, 1991)

The Climategate whistleblower proved what those of us dealing with data for decades know to be the case — namely, data was being manipulated. The IPCC and the scientists it supports have worked to remove the pesky Medieval Warm Period, the Little Ice Age, and the period emailer Tom Wigley referred to as the “warm 1940s blip,” and to pump up the recent warm cycle.

Attention has focused on the emails dealing with Michael Mann’s hockey stick and other proxy attempts, most notably those of Keith Briffa. Briffa was conflicted [2] in this whole process, noting he “[tried] hard to balance the needs of the IPCC with science, which were not always the same,” and that he knew “ … there is pressure to present a nice tidy story as regards ‘apparent unprecedented warming in a thousand years or more in the proxy data.’”

As Steve McIntyre has blogged [3]:

Much recent attention has been paid to the email about the “trick [4]” and the effort to “hide the decline.” Climate scientists have complained that this email has been taken “out of context.” In this case, I’m not sure that it’s in their interests that this email be placed in context because the context leads right back to … the role of IPCC itself in “hiding the decline” in the Briffa reconstruction.

In the area of data, I am more concerned about the coordinated effort to manipulate instrumental data (that was appended onto the proxy data truncated in 1960 when the trees showed a decline — the so called “divergence problem”) to produce an exaggerated warming that would point to man’s influence. I will be the first to admit that man does have some climate effect — but the effect is localized. Up to half the warming since 1900 is due to land use changes and urbanization, confirmed most recently [5] by Georgia Tech’s Brian Stone (2009), Anthony Watts (2009), Roger Pielke Sr., and many others. The rest of the warming is also man-made — but the men are at the CRU, at NOAA’s NCDC, and NASA’s GISS, the grant-fed universities and computer labs.

Programmer Ian “Harry” Harris, in the Harry_Read_Me.txt file [6], commented about:

[The] hopeless state of their (CRU) data base. … No uniform data integrity, it’s just a catalogue of issues that continues to grow as they’re found. … I am very sorry to report that the rest of the databases seem to be in nearly as poor a state as Australia was. There are hundreds if not thousands of pairs of dummy stations, one with no WMO and one with, usually overlapping and with the same station name and very similar coordinates. I know it could be old and new stations, but why such large overlaps if that’s the case? Aarrggghhh! There truly is no end in sight. …

This whole project is SUCH A MESS. No wonder I needed therapy!!

I am seriously close to giving up, again. The history of this is so complex that I can’t get far enough into it before my head hurts and I have to stop. Each parameter has a tortuous history of manual and semi-automated interventions such that I simply cannot just go back to early versions and run the updateprog. I could be throwing away all kinds of corrections – to lat/lons, to WMOs (yes!), and more. So what the hell can I do about all these duplicate stations?

Climategate has sparked a flurry of examinations of the global data sets — not only at CRU, but in nations worldwide and at the global data centers at NOAA and NASA. Though the Hadley Centre implied their data was in agreement with other data sets and thus trustworthy, the truth is other data centers are complicit in the data manipulation fraud.

The New Zealand Climate Coalition [7] had long solicited data from New Zealand’s National Institute of Water & Atmospheric Research (NIWA), which is responsible for New Zealand’s National Climate Database. For years the data was not released, despite many requests to NIWA’s Dr. Jim Salinger — who came from CRU. With Dr. Salinger’s departure from NIWA, the data was released and showed quite a different story than the manipulated data. The raw data showed a warming of just 0.06C per century since records started in 1850. This compared to a warming of 0.92C per century in NIWA’s (CRU’s) adjusted data.
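For reference, the "per century" figures in comparisons like this are just least-squares slopes scaled to 100 years; a sketch on synthetic data (the series below is invented and is not NIWA's record):

[code]
# Sketch: compute a warming trend in deg C per century from annual means.
# The synthetic series has a built-in trend of +0.06 C/century plus noise.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1850, 2010)
temps = 12.0 + 0.0006 * (years - 1850) + rng.normal(0, 0.4, years.size)

slope_per_year = np.polyfit(years, temps, 1)[0]
print(f"fitted trend: {100 * slope_per_year:+.2f} C per century")
# Recovers roughly the built-in +0.06, give or take the noise; the point of
# the raw-vs-adjusted comparison is how adjustment moves this one number.
[/code]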

Willis Eschenbach, in a guest post on Anthony Watts’ blog, found a smoking gun at Darwin station in Australia. Raw data from NOAA (from their GHCN, Global Historical Climate Network, that compiled data that NASA and Hadley work with) showed a cooling of 0.7C. After NOAA “homogenized” the data for Darwin, that changed dramatically. In Willis’ words:

YIKES! Before getting homogenized, temperatures in Darwin were falling at 0.7 Celsius per century … but after the homogenization, they were warming at 1.2 Celsius per century. And the adjustment that they made was over two degrees per century … when those guys “adjust,” they don’t mess around.

He found similar discrepancies [8] in the Nordic countries. And that same kind of unexplainable NOAA GHCN adjustment was made to U.S. stations.

In this story [9], see how Central Park data was manipulated in inconsistent ways. The original U.S. Historical Climate Network (USHCN) data showed a cooling to adjust for urban heat island effect — but the global version of Central Park (NOAA GHCN again) inexplicably warmed Central Park by 4F. The difference between the two U.S. adjusted and global adjusted databases, both produced by NOAA NCDC, reached an unbelievable 11F for Julys and 7F annually! Gradually and without notice, NOAA began slowly backing off the urban heat island adjustment in the USHCN data in 1999 and eliminated it entirely in 2007.

Anthony Watts, in his surfacestations.org [10] volunteer project “Is the U.S. Surface Temperature Record Reliable? [10]”, found that of the 1000-plus temperature recording stations he had surveyed (a 1221-station network), 89% rated poor to very poor – according to the government’s own criteria for siting the stations.

Perhaps one of the biggest issues with the global data is station dropout after 1990. Over 6000 stations were active in the mid-1990s. Just over 1000 are in use today. The stations that dropped out were mainly rural and at higher latitudes and altitudes — all cooler stations. This alone should account for part of the assessed warming. China had 100 stations in 1950, over 400 in 1960, then only 25 by 1990. This changing distribution makes any assessment of accurate change impossible.
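A hedged sketch of why that kind of dropout matters (all numbers below are invented): if the stations that disappear are disproportionately the cold ones, a naive mean of absolute temperatures jumps even though no individual station warmed.

[code]
# Sketch: losing cold (rural, high-latitude, high-altitude) stations warms a
# naive average even when every station's own temperature is unchanged.
import numpy as np

rng = np.random.default_rng(3)
cold = rng.normal(-5.0, 2.0, 4000)     # cooler stations
warm = rng.normal(12.0, 2.0, 2000)     # warmer (often urban) stations

before = np.concatenate([cold, warm])              # ~6000 stations, mid-1990s
after = np.concatenate([cold[:200], warm[:800]])   # ~1000 survivors, skewed

print(f"naive mean before dropout: {before.mean():5.1f} C")   # about 0.7 C
print(f"naive mean after dropout:  {after.mean():5.1f} C")    # about 8.6 C
# Anomaly-based methods exist precisely to blunt this effect; the sketch only
# shows why the surviving-station mix matters to any raw-mean index.
[/code]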

No urbanization adjustment is made for either NOAA or CRU’s global data, based on flawed papers by Wang (1990), Jones (1990), and Peterson (2003). The Jones and Wang papers were shown [11] to be based on fabricated China data. Ironically, in 2008 Jones found [12] that contamination by urbanization in China was a very non-trivial 1C per century — but that did not cause the data centers to begin adjusting, as that would have eliminated global warming.

Continent after continent, researchers are seeing no warming in the unprocessed data (see one thorough analysis here [13]).

Just as the Medieval Warm Period made it difficult to explain why the last century of warming could not be natural (which the hockey stick team attempted to make go away), so did the warm spike in the 1930s and 1940s. In each of the databases, the land data from that period was adjusted down. And Wigley [14] suggested that sea surface temperatures could likewise be “corrected” down by 0.15C, making the results look warmer yet still plausible.

Wigley also noted [15]:

Land warming since 1980 has been twice the ocean warming — and skeptics might claim that this proves that urban warming is real and important.

NOAA complied in July 2009 — removing the satellite input from the global sea surface temperature assessment (the most complete data in terms of coverage), which resulted in an instant jump of 0.24C in ocean temperatures.

Is NASA in the clear? No. They work with the same base GHCN data, plus data from Antarctica (SCAR). To their credit, they attempt to consider urbanization — though Steve McIntyre showed [16] they have poor population data and adjust cities warmer as often as they do colder. They also constantly fiddle with the data. John Goetz [17] showed that 20% of the historical record was modified 16 times in the 2 1/2 years ending in 2007.

When you hear press releases from NOAA, NASA, or Hadley claiming a month, year, or decade ranks among the warmest ever recorded, keep in mind: they have tortured the data, and it has confessed.

Article printed from Pajamas Media: http://pajamasmedia.com

URL to article: http://pajamasmedia.com/blog/climategate-somethings-rotten-in-denmark-and-east-anglia-asheville-and-new-york-city-pjm-exclusive/

URLs in this post:

[1] Ronald Coase: http://en.wikipedia.org/wiki/Ronald_Coase
[2] Briffa was conflicted: http://www.eastangliaemails.com/emails.php?eid=1039&filename=0938031546.txt
[3] blogged: http://climateaudit.org/2009/12/10/ipcc-and-the-trick/
[4] trick: http://www.eastangliaemails.com/emails.php?eid=154
[5] confirmed most recently: http://www.gatech.edu/newsroom/release.html?nid=47354
[6] Harry_Read_Me.txt file,: http://pajamasmedia.com/blog/climategates-harry_read_me-txt-we-all-really-should/
[7] New Zealand Climate Coalition: http://icecap.us/images/uploads/global_warming_nz_pdf.pdf
[8] similar discrepancies: http://wattsupwiththat.com/2009/11/29/when-results-go-bad/
[9] this story: http://icecap.us/images/uploads/Central_Park_Temperatures_Two.pdf
[10] surfacestations.org: http://www.heartland.org/books/PDFs/SurfaceStations.pdf
[11] shown: http://icecap.us/index.php/go/icing-the-hype/climate_scientists_implicated_in_research_fraud/
[12] found: http://www.warwickhughes.com/blog/?p=204
[13] here: http://chiefio.wordpress.com/2009/11/03/ghcn-the-global-analysis/
[14] Wigley: http://www.eastangliaemails.com/emails.php?eid=1016&filename=1254108338.txt
[15] noted: http://www.eastangliaemails.com/emails.php?eid=1067&filename=1257546975.txt
[16] showed: http://icecap.us/images/uploads/US_AND_GLOBAL_TEMP_ISSUES.pdf
[17] John Goetz: http://wattsupwiththat.com/2008/04/08/rewriting-history-time-and-time-again/

Body-by-Guinness

  • Guest
Ice Cores & Temps
« Reply #232 on: December 17, 2009, 06:46:39 AM »
Video comparing temps to "greenhouse gas" levels, both derived from ice core data.

[youtube]http://www.youtube.com/watch?v=z685n4RMx6Y&feature=player_embedded[/youtube]

Body-by-Guinness

  • Guest
1850's Adjustment Error
« Reply #233 on: December 18, 2009, 08:07:51 AM »
Interesting blog post where a gent goes back through the CRU adjustment algorithms for extrapolating global temp data from the few stations recording info in the 1850s. Note the adjustment error, and keep in mind the various William Briggs posts about homogenizing data and the inappropriate confidence that process instills.

http://www.jgc.org/blog/2009/12/adjusting-for-coverage-bias-and.html
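
To get a feel for the coverage problem the post tackles: with only a handful of stations, mostly clustered in Europe, the "global" mean depends on how the sparse coverage is weighted. A minimal sketch (station latitudes and anomalies invented for illustration) contrasting a naive mean with a crude area weighting of the kind real analyses use:

import math

# Hypothetical 1850s-style network: (latitude, temperature anomaly in C)
stations = [(52.0, 0.3), (48.0, 0.2), (51.0, 0.4),   # European cluster
            (40.0, 0.1), (-34.0, -0.2)]              # two lonely outliers

naive = sum(t for _, t in stations) / len(stations)

# Weight each station by the cosine of its latitude (the area of its
# latitude band), a stand-in for proper grid-cell weighting.
w = [math.cos(math.radians(lat)) for lat, _ in stations]
weighted = sum(wi * t for wi, (_, t) in zip(w, stations)) / sum(w)

print(f"naive mean:    {naive:+.3f} C")     # +0.160 C
print(f"weighted mean: {weighted:+.3f} C")  # about +0.137 C

The gap between defensible weightings is coverage uncertainty, and it dwarfs the precision implied by a single homogenized number.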

Body-by-Guinness

  • Guest
Subverting the Publication Process
« Reply #234 on: December 20, 2009, 04:45:56 AM »
Telling chronology wherein AGW zealots conspire to derail and belittle a paper they didn't like, and finally to publish a response to it before the original even appeared:

http://www.americanthinker.com/2009/12/a_climatology_conspiracy.html

Body-by-Guinness

  • Guest
An IPCC Scientist Summarizes
« Reply #235 on: December 27, 2009, 08:44:56 AM »
Dec 17, 2009
Fact-based climate debate
By Lee C. Gerhard, IPCC Expert Reviewer

It is crucial that scientists are factually accurate when they do speak out, that they ignore media hype and maintain a clinical detachment from social or other agendas. There are facts and data that are ignored in the maelstrom of social and economic agendas swirling about Copenhagen.

Greenhouse gases and their effects are well-known. Here are some of the things we know:

• The most effective greenhouse gas is water vapor, comprising approximately 95 percent of the total greenhouse effect.

• Carbon dioxide concentration has been continually rising for nearly 100 years. It continues to rise, but carbon dioxide concentrations at present are near the lowest in geologic history.

• Temperature change correlation with carbon dioxide levels is not statistically significant.

• There are no data that definitively relate carbon dioxide levels to temperature changes.

• The greenhouse effect of carbon dioxide logarithmically declines with increasing concentration. At present levels, any additional carbon dioxide can have very little effect.
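
The logarithmic falloff in that last point is conventionally captured by the simplified expression dF = 5.35 ln(C/C0) W/m2 (Myhre et al., 1998), the same approximation the IPCC uses. A minimal sketch in Python, for illustration only:

import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate radiative forcing (W/m^2) vs. a 280 ppm baseline,
    using the standard simplified expression dF = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

for c in (280, 380, 560, 1120):
    print(f"{c:5d} ppm -> {co2_forcing(c):5.2f} W/m^2")

# Each successive doubling adds the same ~3.7 W/m^2; the per-ppm
# effect shrinks steadily as the concentration climbs.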

We also know a lot about Earth temperature changes:

• Global temperature changes naturally all of the time, in both directions and at many scales of intensity.

• The warmest year in the U.S. in the last century was 1934, not 1998. The U.S. has the best and most extensive temperature records in the world.

• Global temperature peaked in 1998 on the current 60-80 year cycle, and has been episodically declining ever since. This cooling absolutely falsifies claims that human carbon dioxide emissions are a controlling factor in Earth temperature.

• Voluminous historic records demonstrate the Medieval Climate Optimum (MCO) was real and that the “hockey stick” graphic that attempted to deny that fact was at best bad science. The MCO was considerably warmer than the end of the 20th century.

• During the last 100 years, temperature has both risen and fallen, including the present cooling. All the changes in temperature of the last 100 years are in normal historic ranges, both in absolute value and, most importantly, rate of change.

Contrary to many public statements:

• Effects of temperature change are absolutely independent of the cause of the temperature change.

• Global hurricane, cyclonic and major storm activity is near 30-year lows. Any increase in the cost of storm damage is a product of increasing population density and property value inflation in vulnerable areas such as shorelines, not of any increase in frequency or severity of storms.

• Polar bears have survived and thrived over periods of extreme cold and extreme warmth over hundreds of thousands of years - extremes far in excess of modern temperature changes.

• The 2009 minimum Arctic ice extent was significantly larger than the previous two years. The 2009 Antarctic maximum ice extent was significantly above the 30-year average. There are only 30 years of records.

• Rate and magnitude of sea level changes observed during the last 100 years are within normal historical ranges. Current sea level rise is tiny and, at most, justifies a prediction of perhaps ten centimeters rise in this century.

The present climate debate is a classic conflict between data and computer programs. The computer programs are the source of concern over climate change and global warming, not the data. Data are measurements. Computer programs are artificial constructs.

Public announcements use a great deal of hyperbole and inflammatory language. For instance, the word “ever” is misused by media and in public pronouncements alike. It does not mean “in the last 20 years,” or “the last 70 years.” “Ever” means the last 4.5 billion years.

For example, some argue that the Arctic is melting, with the warmest-ever temperatures. One should ask, “How long is ever?” The answer is since 1979. And then ask, “Is it still warming?” The answer is unequivocally “No.” Earth temperatures are cooling. Similarly, the word “unprecedented” cannot be legitimately used to describe any climate change in the last 8,000 years.

There is not an unlimited supply of liquid fuels. At some point, sooner or later, global oil production will decline, and transportation costs will become insurmountable if we do not develop alternative energy sources. However, those alternative energy sources do not now exist.

A legislated reduction in energy use or significant increase in cost will severely harm the global economy and force a reduction in the standard of living in the United States. It is time we spent the research dollars to invent an order-of-magnitude better solar converter and an order-of-magnitude better battery. Once we learn how to store electrical energy, we can electrify transportation. But these are separate issues. Energy conversion is not related to climate change science.

I have been a reviewer of the last two IPCC reports, one of the several thousand scientists who purportedly are supporters of the IPCC view that humans control global temperature. Nothing could be further from the truth. Many of us try to bring better and more current science to the IPCC, but we usually fail. Recently we found out why. The whistleblower release of e-mails and files from the Climate Research Unit at East Anglia University has demonstrated scientific malfeasance and a sickening violation of scientific ethics.

If a game of Russian roulette with the environment is going on, as Adrian Melott contends, the question is how we will feed all the people when the cold of the inevitable Little Ice Age returns. It will return. We just don’t know when. Read more here.

http://icecap.us/index.php/go/new-and-cool/fact_based_climate_debate/

Body-by-Guinness

  • Guest
Simple Physics & Complex Systems
« Reply #236 on: December 28, 2009, 12:45:37 PM »
The Unbearable Complexity of Climate

Guest Post by Willis Eschenbach

Figure 1. The Experimental Setup

I keep reading statements in various places about how it is indisputable “simple physics” that if we increase the amount of atmospheric CO2, it will inevitably warm the planet. Here’s a typical example:

In the hyperbolic language that has infested the debate, researchers have been accused of everything from ditching the scientific method to participating in a vast conspiracy. But the basic concept of the greenhouse effect is a matter of simple physics and chemistry, and it has been part of the scientific dialog for roughly a century.

Here’s another:

The important thing is that we know how greenhouse gases affect climate. It was even predicted a hundred years ago by Arrhenius. It is simple physics.

Unfortunately, while the physics is simple, the climate is far from simple. It is one of the more complex systems that we have ever studied. The climate is a terawatt-scale, planetary-sized heat engine. It is driven by both terrestrial and extraterrestrial forcings, a number of which are unknown, and many of which are poorly understood and/or difficult to measure. It is inherently chaotic and turbulent, two conditions for which we have few mathematical tools.

The climate is comprised of five major subsystems — atmosphere, ocean, cryosphere, lithosphere, and biosphere. All of these subsystems are imperfectly understood. Each of these subsystems has its own known and unknown internal and external forcings, feedbacks, resonances, and cyclical variations. In addition, each subsystem affects all of the other subsystems through a variety of known and unknown forcings and feedbacks.

Then there is the problem of scale. Climate has crucially important processes at physical scales from the molecular to the planetary, and at temporal scales from milliseconds to millennia.

As a result of this almost unimaginable complexity, simple physics is simply inadequate to predict the effect of a change in one of the hundreds and hundreds of things that affect the climate. I will give two examples of why “simple physics” doesn’t work with the climate — a river, and a block of steel. I’ll start with a thought experiment with the block of steel.

Suppose that I want to find out how temperature affects solids. I take a 75 kg block of steel, and I put the bottom end of it in a bucket of hot water. I duct tape a thermometer to the top end in the best experimental fashion, and I start recording how the temperature changes with time. At first, nothing happens. So I wait. And soon, the temperature of the other end of the block of steel starts rising. Hey, simple physics, right?

To verify my results, I try the experiment with a block of copper. I get the same result, the end of the block that’s not in the hot water soon begins to warm up. I try it with a block of glass, same thing. My tentative conclusion is that simple physics says that if you heat one end of a solid, the other end will eventually heat up as well.
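
What "simple physics" predicts here really is simple: one-dimensional heat diffusion. A minimal sketch in Python, with rough illustrative constants (not measurements of any actual block), reproduces the far end warming:

# Explicit finite-difference solution of the 1-D heat equation
#   dT/dt = alpha * d2T/dx2
# for a bar held at 80 C at one end (constants are illustrative).
alpha = 1.2e-5              # thermal diffusivity, m^2/s (roughly steel)
n, dx, dt = 50, 0.01, 2.0   # 50 cells of 1 cm; 2 s steps (alpha*dt/dx**2 < 0.5, stable)

T = [20.0] * n              # bar starts at room temperature, deg C
T[0] = 80.0                 # near end sits in the hot-water bucket

for step in range(20000):   # about 11 hours of simulated time
    T = [T[0]] + [
        T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
        for i in range(1, n - 1)
    ] + [T[-1] + alpha * dt / dx**2 * (T[-2] - T[-1])]   # insulated far end

print(f"far end after ~11 h: {T[-1]:.1f} C")   # well above room temperature

No feedbacks, no regulation: the far end simply warms. That is the intuition the next step of the experiment demolishes.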

So I look around for a final test. Not seeing anything obvious, I have a flash of insight. I weigh about 75 kg. So I sit with my feet in the bucket of hot water, put the thermometer in my mouth, and wait for my head to heat up. This experimental setup is shown in Figure 1 above.

After all, simple physics is my guideline, I know what’s going to happen, I just have to wait.

And wait … and wait …

As our thought experiment shows, simple physics may simply not work when applied to a complex system. The problem is that there are feedback mechanisms that negate the effect of the hot water on my cold toes. My body has a preferential temperature which is not set by the external forcings.

For a more nuanced view of what is happening, let’s consider the second example, a river. Again, a thought experiment.

I take a sheet of plywood, and I cover it with some earth. I tilt it up so it slopes from one edge to the other. For our thought experiment, we’ll imagine that this is a hill that goes down to the ocean.

I place a steel ball at the top edge of the earth-covered plywood, and I watch what happens. It rolls, as simple physics predicts, straight down to the lower edge. I try it with a wooden ball, and get the same result. I figure maybe it’s because of the shape of the object.

So I make a small wooden sled, and put it on the plywood. Again, it slides straight down to the ocean. I try it with a miniature steel sled, same result. It goes directly downhill to the ocean as well. Simple physics, understood by Isaac Newton.

As a final test, I take a hose and I start running some water down from the top edge of my hill to make a model river. To my surprise, although the model river starts straight down the hill, it soon starts to wander. Before long, it has formed a meandering stream, which changes its course with time. Sections of the river form long loops, the channel changes, loops are cut off, new channels form, and after a while we get something like this:

Figure 2. Meanders, oxbow bends, and oxbow lakes in a river system. Note the old channels where the river used to run.

The most amazing part is that the process never stops. No matter how long we run the river experiment, the channel continues to change. What’s going on here?

Well, the first thing that we can conclude is that, just as in our experiment with the steel block, simple physics simply doesn’t work in this situation. Simple physics says that things roll straight downhill, and clearly, that ain’t happening here … it is obvious we need better tools to analyze the flow of the river.

Are there mathematical tools that we can use to understand this system? Yes, but they are not simple. The breakthrough came in the 1990s, with Adrian Bejan's discovery of the Constructal Law. The Constructal Law applies to all flow systems which are far from equilibrium, like a river or the climate.

It turns out that these types of flow systems are not passive systems which can take up any configuration. Instead, they actively strive to maximize some aspect of the system. For the river, as for the climate, the system strives to maximize the sum of the energy moved and the energy lost through turbulence. See the discussion of these principles here, here, here, and here. There is also a website devoted to various applications of the Constructal Law here.

There are several conclusions that we can make from the application of the Constructal Law to flow systems:

1. Any flow system far from equilibrium is not free to take up any form as the climate models assume. Instead, it has a preferential state which it works actively to achieve.

2. This preferential state, however, is never achieved. Instead, the system constantly overshoots and undershoots that state, and does not settle down to one final form. The system never stops modifying its internal aspects to move towards the preferential state.

3. The results of changes in such a flow system are often counterintuitive. For example, suppose we want to shorten the river. Simple physics says it should be easy. So we cut through an oxbow bend, and it makes the river shorter … but only for a little while. Soon the river readjusts, and some other part of the river becomes longer. The length of the river is actively maintained by the system. Contrary to our simplistic assumptions, the length of the river is not changed by our actions.

So that’s the problem with “simple physics” and the climate. For example, simple physics predicts a simple linear relationship between the climate forcings and the temperature. People seriously believe that a change of X in the forcings will lead inevitably to a change of A * X in the temperature. The constant A is called the “climate sensitivity”, and it is a fundamental assumption in the climate models. The IPCC says that if CO2 doubles, we will get a rise of around 3C in the global temperature. However, there is absolutely no evidence to support that claim, only computer models. But the models assume this relationship, so they cannot be used to establish the relationship.
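
To see the bookkeeping concretely: combining the logarithmic forcing formula with the conventional round value of the sensitivity parameter (about 0.8 K per W/m2, assumed here purely for illustration) reproduces the roughly 3C-per-doubling figure cited above:

import math

lambda_ = 0.8                   # assumed sensitivity parameter, K per (W/m^2)
dF_2x = 5.35 * math.log(2.0)    # forcing from doubled CO2, ~3.7 W/m^2

# The linear assumption under discussion: dT = lambda * dF
print(f"assumed warming per CO2 doubling: {lambda_ * dF_2x:.1f} C")   # ~3.0 C

The essay's complaint is precisely that this one multiplication stands in for a flow system that, under the Constructal Law, sets its own operating point.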

However, as rivers clearly show, there is no such simple relationship in a flow system far from equilibrium. We can’t cut through an oxbow to shorten the river, it just lengthens elsewhere to maintain the same total length. Instead of being affected by a change in the forcings, the system sets its own preferential operating conditions (e.g. length, temperature, etc.) based on the natural constraints and flow possibilities and other parameters of the system.

Final conclusion? Because climate is a flow system far from equilibrium, it is ruled by the Constructal Law. As a result, there is no physics-based reason to assume that increasing CO2 will make any difference to the global temperature, and the Constructal Law gives us reason to think that it may make no difference at all. In any case, regardless of Arrhenius, the “simple physics” relationship between CO2 and global temperature is something that we cannot simply assume to be true.

http://wattsupwiththat.com/2009/12/27/the-unbearable-complexity-of-climate-2/#more-14585

Body-by-Guinness

  • Guest
Whistleblowers Enticement?
« Reply #237 on: January 03, 2010, 05:44:36 PM »
Attention Penn State: Top fraud attorney seeks climategate whistleblowers
JANUARY 3, 2010
by John O’Sullivan

We are turning up the heat in pursuit of prosecutions against scientists involved in the recent Climategate scandal. Our dedicated group of volunteers working with Climategate.com are behind a plan to entice co-workers of discredited Penn State University climatologist Michael Mann to turn whistleblowers in return for millions of dollars in federal reward money. Mann is famous for his emails obtained from the East Anglia University server hacking, and for creating the widely disputed ‘hockey stick’ graph that is depicted in Al Gore’s film, “An Inconvenient Truth.”

An “inconvenient truth” for Mann is that an ally of ours, former CIA agent Kent Clizbe, has this weekend emailed the proxy professor’s co-workers with details of the tempting offer that could turn 2010 into quite a prosperous New Year. We hope someone at this premier world research institution will come forward and substantiate the facts from evidence already uncovered from the government emails leaked on November 19, 2009. The emails, among other things, show correspondence between Michael Mann and British Professor Phil Jones of the University of East Anglia’s Climate Research Unit, which discusses methods to “hide the decline” in global temperatures.

If any of the dozens of co-workers in the US or the UK are prepared to give evidence, even if it doesn’t lead to any convictions, they could benefit from a share of tens of millions of dollars in recovered public funds. The whistleblower idea came up in Internet discussions with top US fraud lawyer Joel Hesch, of Hesch and Associates, and former CIA agent Kent Clizbe. Clizbe’s idea was to email the offer to all 27 of Mann’s co-workers at Penn State’s Earth System Science Center (ESSC) this weekend.

Whether convictions are obtained or not, Mr. Hesch assures prospective whistleblowers they will receive a substantial share of any monies recovered. Federal investigators reward whistleblowers with an average payment of $1.5 million based on the sums of money recovered. The US Federal government has paid out almost $3 billion so far in such rewards. The largest rewards to date exceed $150 million, and one out of every five applicants gets a monetary reward. Estimates of the total sums invested in government climate research already exceed $50 billion. The offer put on the table to Mann’s colleagues could be the most lucrative whistleblower deal ever made.

Kent Clizbe, who authored the email to Mann’s co-workers, has extensive experience in protecting the confidentiality and security of his clients. Both as an executive recruiter, and as a former government intelligence officer, Kent specialized in protecting the confidentiality of interactions with his clients.

ESSC employees will read Kent’s offer Monday morning when they switch on their computers to check their email. In his message, Kent tells them, “the whistleblowers with inside knowledge of misused federal grant dollars will enjoy the highest level of confidentiality possible.” We suggest that you contact Kent using an email account outside of your work. Details can be found at www.kentclizbe.com. Alternatively, you may also contact attorney Joel Hesch, of Hesch and Associates, through his website HowToReportFraud.com.

We at Climategate.com made the decision to give our scoop to the widely read climate change skeptic and journalist, James Delingpole of the Daily Telegraph, for maximum dissemination of this story. Earlier today he went public with it in Climategate: Michael Mann’s very unhappy New Year at the Telegraph.

John O’Sullivan is a legal advocate and writer who for several years has litigated in government corruption and conspiracy cases in both the US and Britain. Visit his website.

http://www.climategate.com/penn-state-climategate-whistleblowers#more-1415

DougMacG

  • Power User
  • ***
  • Posts: 19447
    • View Profile
Pathological Science and WTF
« Reply #238 on: January 04, 2010, 07:45:46 AM »
I love the whistleblower enticement post.  I hope all who can will come forward with any true stories about public and institutional monies used to mislead the public for the purpose of this left turn to halt our economy.
-----
As the climate and weather run through their natural cycles, zealots used to pick a warm stretch to tell us that we are deniers if we won't admit that we notice a tenth-of-a-degree-per-decade change.  But scare-mongering became a fully funded, year-round industry, resulting in the embarrassments of being snowed and frozen out of events in NYC, DC and Copenhagen.

If we only fly to Alaska or Kilimanjaro (in summer), we are told, we can witness global warming first hand.  But those effects would be regional and cyclical, just like whatever temps and changes we might experience right out our own doors across the heartland.

Those of you in sunny southern Cal might want to put on a sweater before you read the following morning temps (°F) that millions are facing here in the twin cities (and similar for most of the country): 4 days actual, 5 days forecast; note the warmup at the end.  We truly look forward to it:
Dec31: -2
Jan 1: -10
Jan 2: -18
Jan 3: -17
Jan 4: -10
Jan 5: -5
Jan 6: -9
Jan 7: -15
Jan 8: -8
Jan 9: 2
Wonder what the temps these mornings would be without man-caused global warming.  Was it really colder when we were kids?  But this is winter; what about summer?  The summer update here is that my home air conditioner has been off for over ten years and my car air conditioners have all died from non-use.

I also get a heat bill for an old house high in the mountains of Colo.  This winter so far has been six degrees colder than last.  Not tenths of a degree, 6 degrees colder on average - morning, noon, night and everything in between, for a region over an extended period. 

Maybe God knows more about these fluctuations; the scientists don't.

Meanwhile, CO2 is still an atmospheric trace gas measured in parts per million.  It is NOT trapping huge amounts of heat across the globe.

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
Re: Pathological Science
« Reply #239 on: January 04, 2010, 08:24:20 AM »
http://hotair.com/archives/2010/01/04/no-rise-in-atmospheric-carbon-over-the-last-150-years-university-of-bristol/

No rise in atmospheric carbon fraction over the last 150 years: University of Bristol

Body-by-Guinness

  • Guest
More on Cargo Cult Science, I
« Reply #240 on: January 06, 2010, 08:17:04 AM »
A prescient piece by Richard Feynman presented back in 1974:

CARGO CULT SCIENCE by Richard Feynman
 
Adapted from the Caltech commencement address given in 1974.
 
During the Middle Ages there were all kinds of crazy ideas, such
as that a piece of rhinoceros horn would increase potency. Then a
method was discovered for separating the ideas--which was to try
one to see if it worked, and if it didn't work, to eliminate it.
This method became organized, of course, into science. And it
developed very well, so that we are now in the scientific age. It
is such a scientific age, in fact, that we have difficulty in
understanding how witch doctors could ever have existed, when
nothing that they proposed ever really worked--or very little of
it did.
 
But even today I meet lots of people who sooner or later get me
into a conversation about UFOS, or astrology, or some form of
mysticism, expanded consciousness, new types of awareness, ESP, and
so forth. And I've concluded that it's not a scientific world.
 
Most people believe so many wonderful things that I decided to
investigate why they did. And what has been referred to as my
curiosity for investigation has landed me in a difficulty where I
found so much junk that I'm overwhelmed. First I started out by
investigating various ideas of mysticism, and mystic experiences.
I went into isolation tanks and got many hours of hallucinations,
so I know something about that. Then I went to Esalen, which is a
hotbed of this kind of thought (it's a wonderful place; you should
go visit there). Then I became overwhelmed. I didn't realize how
much there was.
 
At Esalen there are some large baths fed by hot springs situated
on a ledge about thirty feet above the ocean. One of my most
pleasurable experiences has been to sit in one of those baths and
watch the waves crashing onto the rocky shore below, to gaze into
the clear blue sky above, and to study a beautiful nude as she
quietly appears and settles into the bath with me.
 
One time I sat down in a bath where there was a beautiful girl
sitting with a guy who didn't seem to know her. Right away I began
thinking, "Gee! How am I gonna get started talking to this
beautiful nude babe?"
 
I'm trying to figure out what to say, when the guy says to her,
"I'm, uh, studying massage. Could I practice on you?"
 
"Sure," she says. They get out of the bath and she lies down on a
massage table nearby.
 
I think to myself, "What a nifty line! I can never think of
anything like that!" He starts to rub her big toe. "I think I feel
it, "he says. "I feel a kind of dent--is that the pituitary?"
 
I blurt out, "You're a helluva long way from the pituitary, man!"
 
They looked at me, horrified--I had blown my cover--and said, "It's
reflexology!"
 
I quickly closed my eyes and appeared to be meditating.
 
That's just an example of the kind of things that overwhelm me. I
also looked into extrasensory perception and PSI phenomena, and the
latest craze there was Uri Geller, a man who is supposed to be able
to bend keys by rubbing them with his finger. So I went to his
hotel room, on his invitation, to see a demonstration of both
mindreading and bending keys. He didn't do any mindreading that
succeeded; nobody can read my mind, I guess. And my boy held a key
and Geller rubbed it, and nothing happened. Then he told us it
works better under water, and so you can picture all of us standing
in the bathroom with the water turned on and the key under it, and
him rubbing the key with his finger. Nothing happened. So I was
unable to investigate that phenomenon.
 
But then I began to think, what else is there that we believe? (And
I thought then about the witch doctors, and how easy it would have
been to check on them by noticing that nothing really worked.) So
I found things that even more people believe, such as that we have
some knowledge of how to educate. There are big schools of reading
methods and mathematics methods, and so forth, but if you notice,
you'll see the reading scores keep going down--or hardly going up--
in spite of the fact that we continually use these same people to
improve the methods. There's a witch doctor remedy that doesn't
work. It ought to be looked into; how do they know that their
method should work? Another example is how to treat criminals. We
obviously have made no progress--lots of theory, but no progress--
in decreasing the amount of crime by the method that we use to
handle criminals.
 
Yet these things are said to be scientific. We study them. And I
think ordinary people with commonsense ideas are intimidated by
this pseudoscience. A teacher who has some good idea of how to
teach her children to read is forced by the school system to do it
some other way--or is even fooled by the school system into
thinking that her method is not necessarily a good one. Or a parent
of bad boys, after disciplining them in one way or another, feels
guilty for the rest of her life because she didn't do "the right
thing," according to the experts.
 
So we really ought to look into theories that don't work, and
science that isn't science.
 
I think the educational and psychological studies I mentioned are
examples of what I would like to call cargo cult science. In the
South Seas there is a cargo cult of people. During the war they saw
airplanes land with lots of good materials, and they want the same
thing to happen now. So they've arranged to imitate things like
runways, to put fires along the sides of the runways, to make a
wooden hut for a man to sit in, with two wooden pieces on his head
like headphones and bars of bamboo sticking out like antennas--he's
the controller--and they wait for the airplanes to land. They're
doing everything right. The form is perfect. It looks exactly the
way it looked before. But it doesn't work. No airplanes land. So
I call these things cargo cult science, because they follow all the
apparent precepts and forms of scientific investigation, but
they're missing something essential, because the planes don't land.
 
Now it behooves me, of course, to tell you what they're missing.
But it would be just about as difficult to explain to the South Sea
Islanders how they have to arrange things so that they get some
wealth in their system. It is not something simple like telling
them how to improve the shapes of the earphones. But there is one
feature I notice that is generally missing in cargo cult science.
That is the idea that we all hope you have learned in studying
science in school--we never explicitly say what this is, but just
hope that you catch on by all the examples of scientific
investigation. It is interesting, therefore, to bring it out now
and speak of it explicitly. It's a kind of scientific integrity,
a principle of scientific thought that corresponds to a kind of
utter honesty--a kind of leaning over backwards. For example, if
you're doing an experiment, you should report everything that you
think might make it invalid--not only what you think is right about
it: other causes that could possibly explain your results; and
things you thought of that you've eliminated by some other
experiment, and how they worked--to make sure the other fellow can
tell they have been eliminated.
 
Details that could throw doubt on your interpretation must be
given, if you know them. You must do the best you can--if you know
anything at all wrong, or possibly wrong--to explain it. If you
make a theory, for example, and advertise it, or put it out, then
you must also put down all the facts that disagree with it, as well
as those that agree with it. There is also a more subtle problem.
When you have put a lot of ideas together to make an elaborate
theory, you want to make sure, when explaining what it fits, that
those things it fits are not just the things that gave you the idea
for the theory; but that the finished theory makes something else
come out right, in addition.
 
In summary, the idea is to try to give all of the information to
help others to judge the value of your contribution; not just the
information that leads to judgment in one particular direction or
another.
 
The easiest way to explain this idea is to contrast it, for
example, with advertising. Last night I heard that Wesson oil
doesn't soak through food. Well, that's true. It's not dishonest;
but the thing I'm talking about is not just a matter of not being
dishonest, it's a matter of scientific integrity, which is another
level. The fact that should be added to that advertising statement
is that no oils soak through food, if operated at a certain
temperature. If operated at another temperature, they all will--
including Wesson oil. So it's the implication which has been
conveyed, not the fact, which is true, and the difference is what
we have to deal with.
 
We've learned from experience that the truth will come out. Other
experimenters will repeat your experiment and find out whether you
were wrong or right. Nature's phenomena will agree or they'll
disagree with your theory. And, although you may gain some
temporary fame and excitement, you will not gain a good reputation
as a scientist if you haven't tried to be very careful in this kind
of work. And it's this type of integrity, this kind of care not to
fool yourself, that is missing to a large extent in much of the
research in cargo cult science.
 
A great deal of their difficulty is, of course, the difficulty of
the subject and the inapplicability of the scientific method to the
subject.  Nevertheless it should be remarked that this is not the
only difficulty.  That's why the planes didn't land--but they don't
land.
 
We have learned a lot from experience about how to handle some of
the ways we fool ourselves. One example: Millikan measured the
charge on an electron by an experiment with falling oil drops, and
got an answer which we now know not to be quite right. It's a
little bit off, because he had the incorrect value for the
viscosity of air. It's interesting to look at the history of
measurements of the charge of the electron, after Millikan. If you
plot them as a function of time, you find that one is a little
bigger than Millikan's, and the next one's a little bit bigger than
that, and the next one's a little bit bigger than that, until
finally they settle down to a number which is higher.
 
Why didn't they discover that the new number was higher right away?
It's a thing that scientists are ashamed of--this history--because
it's apparent that people did things like this: When they got a
number that was too high above Millikan's, they thought something
must be wrong--and they would look for and find a reason why
something might be wrong. When they got a number closer to
Millikan's value they didn't look so hard. And so they eliminated
the numbers that were too far off, and did other things like that.
We've learned those tricks nowadays, and now we don't have that
kind of a disease.
 

Body-by-Guinness

  • Guest
More on Cargo Cult Science, II
« Reply #241 on: January 06, 2010, 08:17:22 AM »
But this long history of learning how not to fool ourselves--of
having utter scientific integrity--is, I'm sorry to say, something
that we haven't specifically included in any particular course that
I know of. We just hope you've caught on by osmosis.
 
The first principle is that you must not fool yourself--and you are
the easiest person to fool. So you have to be very careful about
that. After you've not fooled yourself, it's easy not to fool other
scientists. You just have to be honest in a conventional way after
that.
 
I would like to add something that's not essential to the science,
but something I kind of believe, which is that you should not fool
the layman when you're talking as a scientist. I am not trying to
tell you what to do about cheating on your wife, or fooling your
girlfriend, or something like that, when you're not trying to be
a scientist, but just trying to be an ordinary human being. We'll
leave those problems up to you and your rabbi. I'm talking about
a specific, extra type of integrity that is not lying, but bending
over backwards to show how you are maybe wrong, that you ought to
have when acting as a scientist. And this is our responsibility as
scientists, certainly to other scientists, and I think to laymen.
 
For example, I was a little surprised when I was talking to a
friend who was going to go on the radio. He does work on cosmology
and astronomy, and he wondered how he would explain what the
applications of this work were. "Well," I said, "there aren't any."
He said, "Yes, but then we won't get support for more research of
this kind." I think that's kind of dishonest. If you're
representing yourself as a scientist, then you should explain to
the layman what you're doing--and if they don't want to support you
under those circumstances, then that's their decision.
 
One example of the principle is this: If you've made up your mind
to test a theory, or you want to explain some idea, you should
always decide to publish it whichever way it comes out. If we only
publish results of a certain kind, we can make the argument look
good. We must publish both kinds of results.
 
I say that's also important in giving certain types of government
advice. Supposing a senator asked you for advice about whether
drilling a hole should be done in his state; and you decide it
would be better in some other state. If you don't publish such a
result, it seems to me you're not giving scientific advice. You're
being used. If your answer happens to come out in the direction the
government or the politicians like, they can use it as an argument
in their favor; if it comes out the other way, they don't publish
it at all. That's not giving scientific advice.
 
Other kinds of errors are more characteristic of poor science. When
I was at Cornell, I often talked to the people in the psychology
department. One of the students told me she wanted to do an
experiment that went something like this--it had been found by
others that under certain circumstances, X, rats did something, A.
She was curious as to whether, if she changed the circumstances to
Y, they would still do A. So her proposal was to do the experiment
under circumstances Y and see if they still did A.
 
I explained to her that it was necessary first to repeat in her
laboratory the experiment of the other person--to do it under
condition X to see if she could also get result A, and then change
to Y and see if A changed. Then she would know that the real
difference was the thing she thought she had under control.
 
She was very delighted with this new idea, and went to her
professor. And his reply was, no, you cannot do that, because the
experiment has already been done and you would be wasting time.
This was in about 1947 or so, and it seems to have been the general
policy then to not try to repeat psychological experiments, but
only to change the conditions and see what happens.
 
Nowadays there's a certain danger of the same thing happening, even
in the famous (?) field of physics. I was shocked to hear of an
experiment done at the big accelerator at the National Accelerator
Laboratory, where a person used deuterium. In order to compare his
heavy hydrogen results to what might happen with light hydrogen,
he had to use data from someone else's experiment on light
hydrogen, which was done on different apparatus. When asked why,
he said it was because he couldn't get time on the program (because
there's so little time and it's such expensive apparatus) to do the
experiment with light hydrogen on this apparatus because there
wouldn't be any new result. And so the men in charge of programs
at NAL are so anxious for new results, in order to get more money
to keep the thing going for public relations purposes, they are
destroying--possibly--the value of the experiments themselves,
which is the whole purpose of the thing. It is often hard for the
experimenters there to complete their work as their scientific
integrity demands.
 
All experiments in psychology are not of this type, however. For
example, there have been many experiments running rats through all
kinds of mazes, and so on--with little clear result. But in 1937
a man named Young did a very interesting one. He had a long
corridor with doors all along one side where the rats came in, and
doors along the other side where the food was. He wanted to see if
he could train the rats to go in at the third door down from
wherever he started them off. No. The rats went immediately to the
door where the food had been the time before.
 
The question was, how did the rats know, because the corridor was
so beautifully built and so uniform, that this was the same door
as before? Obviously there was something about the door that was
different from the other doors. So he painted the doors very
carefully, arranging the textures on the faces of the doors exactly
the same. Still the rats could tell. Then he thought maybe the rats
were smelling the food, so he used chemicals to change the smell
after each run. Still the rats could tell. Then he realized the
rats might be able to tell by seeing the lights and the arrangement
in the laboratory like any commonsense person. So he covered the
corridor, and still the rats could tell.
 
He finally found that they could tell by the way the floor sounded
when they ran over it. And he could only fix that by putting his
corridor in sand. So he covered one after another of all possible
clues and finally was able to fool the rats so that they had to
learn to go in the third door. If he relaxed any of his conditions,
the rats could tell.
 
Now, from a scientific standpoint, that is an A-number-one
experiment. That is the experiment that makes rat-running
experiments sensible, because it uncovers the clues that the rat
is really using--not what you think it's using. And that is the
experiment that tells exactly what conditions you have to use in
order to be careful and control everything in an experiment with
rat-running.
 
I looked into the subsequent history of this research. The next
experiment, and the one after that, never referred to Mr. Young.
They never used any of his criteria of putting the corridor on
sand, or being very careful. They just went right on running rats
in the same old way, and paid no attention to the great discoveries
of Mr. Young, and his papers are not referred to, because he didn't
discover anything about the rats. In fact, he discovered all the
things you have to do to discover something about rats. But not
paying attention to experiments like that is a characteristic of
cargo cult science.
 
Another example is the ESP experiments of Mr. Rhine, and other
people. As various people have made criticisms--and they themselves
have made criticisms of their own experiments--they improve the
techniques so that the effects are smaller, and smaller, and
smaller until they gradually disappear. All the parapsychologists
are looking for some experiment that can be repeated--that you can
do again and get the same effect--statistically, even. They run a
million rats--no, it's people this time--they do a lot of things and
get a certain statistical effect. Next time they try it they don't
get it any more. And now you find a man saying that it is an
irrelevant demand to expect a repeatable experiment. This is
science?
 
This man also speaks about a new institution, in a talk in which
he was resigning as Director of the Institute of Parapsychology.
And, in telling people what to do next, he says that one of the
things they have to do is be sure they only train students who have
shown their ability to get PSI results to an acceptable extent--
not to waste their time on those ambitious and interested students
who get only chance results. It is very dangerous to have such a
policy in teaching--to teach students only how to get certain
results, rather than how to do an experiment with scientific
integrity.
 
So I have just one wish for you--the good luck to be somewhere
where you are free to maintain the kind of integrity I have
described, and where you do not feel forced by a need to maintain
your position in the organization, or financial support, or so on,
to lose your integrity. May you have that freedom.
 
http://www.lhup.edu/~DSIMANEK/cargocul.htm

Rarick

  • Guest
Re: Pathological Science
« Reply #242 on: January 07, 2010, 04:45:11 AM »
Excellent point.  To get funding to gather data, to find out whether this global warming thing is really true or not, there is a party line that must be met.  That is why the e-mail revelation is so important.  I do not care what conclusion is reached on the issue, as long as it is accurate and as honestly arrived at as possible.  The observed data contradicts the computer models' predictions; observation is the reality check and should not be suppressed.  The model is obviously wrong, but it is what is being pushed.  That right there is what makes up my mind about global warming, or any other scientific theory.

That does not change the fact that there are other reasons to "go green," ranging from national stability to personal self-reliance.

Body-by-Guinness

  • Guest
Re: Pathological Science
« Reply #243 on: January 07, 2010, 06:14:40 AM »
So long as "going green" actually accomplishes anything. There's a lot of data out there, for instance, that recycling is ineffective at best and counterproductive at worst. Studies comparing the environmental impact of disposable v. cotton diapers show that reusing cotton diapers can be construed to harm the environment more than the disposables do. Bottom line is that a lot of "green" efforts end up being heavy on the moral posturing and light on empirical effect.

An example that has long stuck with me involves recycling aluminum cans. Back in the day me and my buds tossed back a lot of beer and dealt with occasional lean spells by saving up our cans and taking them to be recycled when funds were otherwise lacking. We got way into it, designing various sorts of can crushers, debating which method worked best, working out methods to store our aluminum hoard, and so on. Aluminum was going for $.75 a pound or more back then, so it didn't take too many trashbags full of crushed cans before we could afford a case or three of Old Style. Alas, the sweetness and light Nazis reared their heads, recycling was made mandatory in our neck of the woods, the market became glutted, and not only did the price of aluminum drop from $.75 to under $.20 per pound, but the facility we'd recycled stuff at folded up because the local government had muscled its way into the market. Unsurprisingly, with the hop-laden motive removed, me and my buds stopped recycling.

Bottom line: I wish the greenies spent more time living up to their buzzwords and creating sustainable and fiscally sound structures that supported their ends rather than ones dependent on cajoling, bandying guilt, and being so strident that their efforts prove counterproductive when benefits and costs are compared.

DougMacG

  • Power User
  • ***
  • Posts: 19447
    • View Profile
Re: Pathological Science
« Reply #244 on: January 07, 2010, 07:19:45 AM »
A little friendly parsing but the term 'going green' is backwards.  The green plants love the increased CO2 levels, like from a Chevy Suburban, and would gasp for a breath if a Prius or a bicycle were the only CO2 sources available. 

BBG, Old Style? From God's Country?  Fully Krausened? That beer should be sipped and savored, not pounded down by the case.   :-)

Rarick, I agree there is no reason to be wasteful or piggish with earth's resources.  I know we always turn off the waterski boat between skiers.  That is different than putting state control over recreational uses of gasoline for example. Recycle by choice is different than forcing me to pay for huge diesel trucks to come down our little dead end for recycling every week, then forcing me to pay for more diesel trucks to bring in more asphalt to repair the damage that the other trucks do.  The key is to make reasonable choices and not have them rammed down our throats or engineered socially from government elites.  Most of the tainted 'research' was aimed at justifying new legislation, restrictions, taxation and redistribution.

Body-by-Guinness

  • Guest
Re: Pathological Science
« Reply #245 on: January 07, 2010, 07:44:11 AM »
Quote
BBG, Old Style? From God's Country?  Fully Krausened? That beer should be sipped and savored, not pounded down by the case.

Hell yeah, pouring a tad of wort back in made for ambrosia, at least to Midwestern teenagers with unsophisticated palates.

Old Style taught me a lot of important lessons beyond the bounty we received for returning the sacred vessel. Budweiser was the big local brand when I started imbibing in my mid to late teens. Bud delivery drivers decided they needed a bigger piece of the action and so went on a multi-month strike, at which time old G. Heileman Brewing dropped the case price of its fine product from $7.99 a case to $5.99. Other brands like Strohs and Miller sought to capitalize on the strike by maintaining or increasing their price, but Old Style went for market share, and cleaned house. After several months the strike ended with a slight increase in driver pay, Bud tried to resume selling its product at the pre-strike price, but the entire upper Midwest had converted to Kräusened lager, and Bud hence had to lay off half its drivers. Lessons to learn all the way around.

Body-by-Guinness

  • Guest
Wobbly Science
« Reply #246 on: January 07, 2010, 07:59:55 AM »
Climategate and the Migrating Arctic Tree Line

By Dexter Wright
One of the more enlightening e-mails to spill out of the Climategate scandal is a report on the progress of Siberian fossilized tree ring work. The report, dated October 9, 1998, focuses on some two thousand samples of fossilized trees thirty-three nautical miles north of the present-day Arctic Circle. The report attempts to correlate the migration, north and south, of the tree line with annual tree ring dating so that an actual year can be assigned to a certain location of the Arctic Circle. The report correctly states that there has been migration of the polar tree line over the past several thousand years, but the investigators attribute this migration solely to the cold tolerances of tree species.

Although the botanists are correct that cold tolerance does affect the northern limit of trees, they incorrectly attribute the migration solely to variation in climatic temperatures. This is only part of the answer. The other part lies in the geographic fact that creates the Arctic Circle in the first place.

The location of the Arctic Circle is a function of the tilt of the Earth's axis, which is approximately twenty-three and a half degrees (23.5°) from the perpendicular to the orbital plane of our planet. This is why the Arctic Circle is at sixty-six and a half degrees (66.5°) north latitude, exactly 23.5° south of the North Pole, which is at ninety degrees (90°) north latitude. But this angle of axis tilt has not always been 23.5°.

The Earth's axis actually wobbles in two distinct ways. The direction in which it points drifts around the heavens on a roughly 26,000-year cycle known as "precession," a phenomenon well-documented by astronomical observations throughout history. There is even an entry in Christopher Columbus's log where he admonishes his officers that the star Polaris (the North Star) is not located due north at the center of the celestial sphere but is off by one degree. That is not the case today, but five hundred years ago, Polaris was off by one degree. Separately, the angle of the tilt itself oscillates on a roughly 41,000-year cycle (often loosely lumped in with precession), and calculations have revealed that it has been as much as twenty-four degrees (24°) and as little as twenty-two and a half degrees (22.5°). These variations in the tilt of the axis over time have been linked to the onset and end of ice ages, simply because the Arctic would expand and contract in step with the angle of tilt, resulting in a migration of the Arctic Circle and its tree line.

The now-discredited Dr. Jones of East Anglia University would like us to believe that the migration of the tree line along the Arctic Circle eliminates what is known as the Medieval Optimum, a warm period one thousand years ago when the Vikings were growing grapes in Greenland. Dr. Jones fails to take into account these motions of the Earth's axis, which just three thousand years ago was pointing toward the star Kochab in the constellation Ursa Minor (the Little Dipper), so that Kochab, not Polaris, sat at the center of the celestial sphere. Measurements over the last hundred years also show how slowly the tilt itself drifts: in 1900, the tilt was 23.45229 degrees; in 1977, it was 23.44229 degrees; and in the year 2000, it was 23.43928 degrees.
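
Those tilt figures are small enough to check with simple arithmetic. The Arctic Circle sits at 90° minus the tilt, so the 1900-2000 change moves it only about a kilometer and a half poleward; a quick sketch in Python, using just the numbers quoted above:

KM_PER_DEG_LAT = 111.32    # mean ground distance of one degree of latitude

tilt = {1900: 23.45229, 1977: 23.44229, 2000: 23.43928}   # degrees, as quoted
circle = {yr: 90.0 - t for yr, t in tilt.items()}         # Arctic Circle latitude

shift_deg = circle[2000] - circle[1900]
print(f"1900-2000 shift: {shift_deg:.5f} deg = "
      f"{shift_deg * KM_PER_DEG_LAT:.2f} km poleward")    # ~1.45 km

A tree line that has not moved measurably over the same century is, as the article argues, consistent with a tilt that has barely moved either.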

The e-mail report does conclude that "[t]here are no evidences of moving polar timberline to the north during last century." This finding of no northern migration of the Arctic Circle tree line might suggest that global temperatures have remained stable over the last one hundred years. However, keep in mind that the observation is also consistent with the fact that the tilt of the Earth's axis has not shifted appreciably over the last century, either. The bigger question is this: Why was this small piece of the puzzle omitted from the reports generated by the U.N.'s Intergovernmental Panel on Climate Change (IPCC) that Dr. Jones helped compile? Is it possible that it is because this fact completely contradicts the "prevailing scientific view" that Dr. Jones would have us believe?

These types of "errors" and "omissions" seem to be indicative of the entire "Global Warming" investigation conducted through, or in collaboration with, Dr. Jones. Perhaps the best conclusion to come to is that the entire body of work compiled by the IPCC is tainted and therefore unreliable for any policymaker.

http://www.americanthinker.com/2010/01/climategate_and_the_migrating.html

Rarick

  • Guest
Re: Pathological Science
« Reply #247 on: January 07, 2010, 11:25:30 AM »
Quote

So long as "going green" actually accomplishes anything. There's a lot of data out there, for instance, that recycling is ineffective at best and counterproductive at worst. Studies comparing the environmental impact of disposable v. cotton diapers show that reusing cotton diapers can be construed to harm the environment more than the disposables do. Bottom line is that a lot of "green" efforts end up being heavy on the moral posturing and light on empirical effect.

An example that has long stuck with me involves recycling aluminum cans. Back in the day me and my buds tossed back a lot of beer and dealt with occasional lean spells by saving up our cans and taking them to be recycled when funds were otherwise lacking. We got way into it, designing various sorts of can crushers, debating which method worked best, working out methods to store our aluminum hoard, and so on. Aluminum was going for $.75 a pound or more back then, so it didn't take too many trashbags full of crushed cans before we could afford a case or three of Old Style. Alas, the sweetness and light Nazis reared their heads, recycling was made mandatory in our neck of the woods, the market became glutted, and not only did the price of aluminum drop from $.75 to under $.20 per pound, but the facility we'd recycled stuff at folded up because the local government had muscled its way into the market. Unsurprisingly, with the hop-laden motive removed, me and my buds stopped recycling.

Bottom line: I wish the greenies spent more time living up to their buzzwords and creating sustainable and fiscally sound structures that supported their ends rather than ones dependent on cajoling, bandying guilt, and being so strident that their efforts prove counterproductive when benefits and costs are compared.

Agreed; that's what I used the quotes for.  My concept is more about self-reliance and recycling where it is feasible.  Most cars are pretty thoroughly recycled today, and economically too; that has been economically driven since the 1920s, if I recall correctly, before politics got involved.  There are some dairy farms that are selling electricity, and selling milk as a nice by-product, but I suspect that would be reversed if tax incentives were removed.  Self-reliance sees it as a cool thing to have control of your own power, whatever other people may choose.

Body-by-Guinness

  • Guest
Avoiding Reasoned Response
« Reply #248 on: January 07, 2010, 11:37:15 AM »
Interesting delineation of the obstruction by CRU et al. of an article that didn't heed their gospel. Lotta formatting and quotes, so here be the link:

http://climateaudit.org/2010/01/07/team-responses-to-mm2003/

Rarick

  • Guest
Re: Pathological Science
« Reply #249 on: January 07, 2010, 12:02:22 PM »
More smoking gun.