Author Topic: Pathological Science  (Read 576986 times)

Body-by-Guinness

  • Guest
Yes, Let's Follow the Money
« Reply #300 on: March 04, 2010, 10:16:00 AM »
The Climate Industry Wall of Money
Posted by JoNova on March 4, 2010, in Global Warming

This is the copy of the file I sent the ABC Drum Unleashed. I’m grateful they are allowing both sides of the story to get some airtime (though Bob Carter’s and Marc Hendrickx’s posts were both rejected. Hat-tip to Louis and Marc). Unfortunately, the updated version I sent late yesterday, which included some empirical references near the end, was not posted until 4:30 pm EST. (NB: The Australian spelling of skeptic is “sceptic”.)



Somehow the tables have turned. For all the smears of big money funding the “deniers”, the numbers reveal that the sceptics are actually the true grassroots campaigners, while Greenpeace defends Wall St. How times have changed. Sceptics are fighting a billion-dollar industry aligned with a trillion-dollar trading scheme. Big Oil’s supposed evil influence has been vastly outdone by Big Government, and even those taxpayer billions are trumped by Big Banking.

The big-money side of this debate has fostered a myth that sceptics write what they write because they are funded by oil profits. They say, follow the money? So I did, and it’s chilling. Greens and environmentalists need to be aware that each time they smear with an ad hominem attack, they are unwittingly helping giant finance houses.

Follow the money

Money for Sceptics: Greenpeace has searched for funding for sceptics and found $23 million paid by Exxon over ten years (funding which has since stopped). Perhaps Greenpeace missed funding from other fossil fuel companies, but you can be sure that they searched. I wrote the Climate Money paper in July last year, and since then no one has claimed a larger figure. Big Oil may well prefer it if emissions are not traded, but it’s not make-or-break for them. If all fossil fuels are in effect “taxed”, consumers will pay the tax anyhow, and past price rises in crude oil suggest consumers will not consume much less fuel, so profits won’t actually fall that much.

But in the end, everyone spends more on carbon-friendly initiatives than on sceptics, even Exxon: how about $100 million for Stanford’s Global Climate and Energy Project and $600 million for biofuels research? Some will complain that Exxon is massive and its green commitment was a tiny part of its profits, but the point is that what it spent on sceptics was even less.
Money for the Climate Industry: The US government has spent $79 billion on climate research and technology since 1989. To be sure, this funding paid for things like satellites and studies, but it’s roughly 3,500 times as much as anything offered to sceptics. It buys a bandwagon of support, a repetitive rain of press releases, and includes the PR departments of institutions like NOAA, NASA, the Climate Change Science Program and the Climate Change Technology Program. The $79 billion figure does not include money from other Western governments or private industry, and is not adjusted for inflation. In other words, it could be… a lot bigger.
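As a back-of-the-envelope check on that “3,500 times” figure (my arithmetic, not the article’s), dividing the cumulative government spending by the cumulative Exxon funding gives

\[
  \frac{\$79\ \text{billion}}{\$23\ \text{million}} \;=\; \frac{79 \times 10^{9}}{23 \times 10^{6}} \;\approx\; 3{,}400,
\]

which is in the same ballpark as the ratio claimed above.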



For direct PR comparisons though, just look at “Think Climate Think Change”: the Australian Government put $13.9 million into just one quick advertising campaign. There is no question that there are vastly more financial rewards for people who promote a carbon-made catastrophe than for those who point out the flaws in the theory.

Ultimately the big problem is that there are no grants for scientists to demonstrate that carbon has little effect. There are no Institutes of Natural Climate Change, but plenty that are devoted to UnNatural Forces.

It’s a monopsony, and the main point is not that the scientists are necessarily corrupted by money or status (though that appears to have happened to a few), but that there is no group or government seriously funding scientists to expose flaws. The lack of systematic auditing of the IPCC, NOAA, NASA or East Anglia CRU leaves a gaping vacuum. It’s possible that honest scientists have dutifully followed their grant applications, always looking for one thing in one direction, and that when they have made flawed assumptions, errors, or just exaggerations, no one has pointed it out simply because everyone who could have done so had a job doing something else. In the end the auditors who volunteered, like Steve McIntyre and Anthony Watts, are retired scientists, because they are the only ones who have the time and the expertise to do the hard work. (Anyone fancy analysing statistical techniques in dendroclimatology or thermometer siting instead of playing a round of golf?)

Money for the Finance Houses: What the US Government has paid to one side of the scientific process pales in comparison with carbon trading. According to the World Bank, turnover of carbon trading reached $126 billion in 2008. PointCarbon estimates trading in 2009 was about $130 billion. This is turnover, not specifically profits, but a single year’s turnover in this market eclipses the total science funding spent over 20 years. Money talks. Every major finance house stands to profit as a broker of a paper trade. It doesn’t matter whether you buy or sell; the bankers take a slice both ways. The bigger the market, the more money they make shifting paper.

Banks want us to trade carbon…

Not surprisingly, banks are doing what banks should do (for their shareholders): they’re following the promise of profits and urging governments to adopt carbon trading [7, 8]. Banks are keen to be seen as good corporate citizens (look, there’s an environmental banker!), but somehow they don’t find the idea of a non-tradable carbon tax as appealing as a trading scheme in which financial middlemen can take a cut. (For banks that believe in the carbon crisis, taxes may well “help the planet,” but they don’t pay dividends.)

The stealthy mass entry of the bankers and traders constitutes a major force. Surely if money has any effect on carbon emissions, it must also have an effect on careers, shareholders, advertising, and lobbying? There were over 2,000 climate lobbyists in Washington in 2008.



Unpaid sceptics are not just taking on scientists who conveniently secure grants and junkets for pursuing one theory; they are also up against the potential profits of Goldman Sachs, JP Morgan, BNP Paribas, Deutsche Bank, HSBC, Barclays, Morgan Stanley, and every other financial institution or corporation that stands to profit, like the Chicago Climate Exchange, European Climate Exchange, PointCarbon, and IdeaCarbon (and the list goes on…), as well as government bureaucracies like the IPCC and multiple departments of Climate Change. There’s no conspiracy between these groups, just similar profit plans or power grabs.

Tony Abbott’s new policy removes the benefits for bankers. Labor and the Greens don’t appear to notice that they are fighting tooth and nail for a market in a “commodity” which isn’t a commodity, and one that guarantees profits for big bankers. The public, though, are figuring it out.

The largest tradeable “commodity” in the world?

Commissioner Bart Chilton, head of the energy and environmental markets advisory committee of the Commodity Futures Trading Commission (CFTC), has predicted that within five years a carbon market would dwarf any of the markets his agency currently regulates: “I can see carbon trading being a $2 trillion market.” “The largest commodity market in the world.” He ought to know.

It promises to be larger than the markets for coal, oil, gold, wheat, copper or uranium.  Just soak in that thought for a moment. Larger than oil.

Richard L. Sandor, chairman and chief executive officer of Climate Exchange Plc, agrees, predicting that trades will eventually total $10 trillion a year. That’s 10 thousand billion dollars.

Only the empirical evidence matters

Ultimately the atmosphere is what it is regardless of fiat currency movements. Some people will accuse me of smearing climate scientists and making the same ad hominem attacks I detest and protest about. So note carefully: I haven’t said that the massive amount of funding received by promoters of the Carbon Catastrophe proves that they are wrong, just as the grassroots unpaid dedication of sceptics doesn’t prove them right either. But the starkly lop-sided nature of the funding means we’d be fools not to pay very close attention to the evidence. It also shows how vapid the claims are from those who try to smear sceptics and who mistakenly think ad hominem arguments are worth making.

And as far as evidence goes, surprisingly, I agree with the IPCC that carbon dioxide warms the planet. But few realize that the IPCC relies on feedback factors like humidity and clouds causing a major amplification of the minor CO2 effect, and that this amplification simply isn’t there. Hundreds of thousands of radiosonde measurements failed to find the pattern of upper tropospheric heating the models predicted (and neither Santer 2008 with his expanding “uncertainties” nor Sherwood 2008 with his wind gauges changes that). Other independent empirical observations indicate that the warming due to CO2 is halved by changes in the atmosphere, not amplified [Spencer 2007, Lindzen 2009; see also Spencer 2008]. Without this amplification from water vapor or clouds, the infamous “3.5 degrees of warming” collapses to just half a degree, most of which has already happened.

Those resorting to this vacuous, easily refutable point should be shamed into lifting their game. The ad hominem argument is Stone Age reasoning, and the “money” insult they throw bounces right back at them, a thousand-fold.

http://joannenova.com.au/2010/03/the-climate-industry-wall-of-money/

Body-by-Guinness

  • Guest
A Bit of a Rebuke
« Reply #301 on: March 04, 2010, 12:12:19 PM »
Second post. One of these days the CRU folks are gonna figure out there is no percentage in taking on experts in a field tangential to their "research," particularly when it's clear they have no particular standing to initiate the conversation in the first place:

CRUTEM3 “…code did not adhere to standards one might find in professional software engineering”

Those of us who have looked at GISS and CRU code have been saying this for months. Now John Graham-Cumming has submitted a statement to the UK Parliament about the quality and veracity of the CRU code that has been released, saying “they have not released everything”.



I found this line most interesting:

“I have never been a climate change skeptic and until the release of emails from UEA/CRU I had paid little attention to the science surrounding it.”

Here is his statement as can be seen at:

http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc5502.htm

=================================

Memorandum submitted by John Graham-Cumming (CRU 55)

I am writing at this late juncture regarding this matter because I have now seen that two separate pieces of written evidence to your committee mention me (without using my name) and I feel it is appropriate to provide you with some further information. I am a professional computer programmer who started programming almost 30 years ago. I have a BA in Mathematics and Computation from Oxford University and a DPhil in Computer Security also from Oxford. My entire career has been spent in computer software in the UK, US and France.

I am also a frequent blogger on science topics (my blog was recently named by The Times as one of its top 30 science blogs). Shortly after the release of emails from UEA/CRU I looked at them out of curiosity and found that there was a large amount of software along with the messages. Looking at the software itself I was surprised to see that it was of poor quality. This resulted in my appearance on BBC Newsnight criticizing the quality of the UEA/CRU code in early December 2009 (see http://news.bbc.co.uk/1/hi/programmes/newsnight/8395514.stm).

That appearance and subsequent errors I have found in both the data provided by the Met Office and the code used to process that data are referenced in two submissions. I had not previously planned to submit anything to your committee, as I felt that I had nothing relevant to say, but the two submissions which reference me warrant some clarification directly from me, the source.

I have never been a climate change skeptic and until the release of emails from UEA/CRU I had paid little attention to the science surrounding it.

In the written submission by Professor Hans von Storch and Dr. Myles R. Allen there are three paragraphs that concern me:

“3.1 An allegation aired on BBC’s “Newsnight” that software used in the production of this dataset was unreliable. It emerged on investigation that neither of the two pieces of software produced in support of this allegation was anything to do with the HadCRUT instrumental temperature record. Newsnight have declined to answer the question of whether they were aware of this at the time their allegations were made.

3.2 A problem identified by an amateur computer analyst with estimates of average climate (not climate trends) affecting less than 1% of the HadCRUT data, mostly in Australasia, and some station identifiers being incorrect. These, it appears, were genuine issues with some of the input data (not analysis software) of HadCRUT which have been acknowledged by the Met Office and corrected. They do not affect trends estimated from the data, and hence have no bearing on conclusions regarding the detection and attribution of external influence on climate.

4. It is possible, of course, that further scrutiny will reveal more serious problems, but given the intensity of the scrutiny to date, we do not think this is particularly likely. The close correspondence between the HadCRUT data and the other two internationally recognised surface temperature datasets suggests that key conclusions, such as the unequivocal warming over the past century, are not sensitive to the analysis procedure.”

I am the ‘computer analyst’ mentioned in 3.2 who found the errors mentioned. I am also the person mentioned in 3.1 who looked at the code on Newsnight.

In paragraph 4 the authors write “It is possible, of course, that further scrutiny will reveal more serious problems, but given the intensity of the scrutiny to date, we do not think this is particularly likely.” This has turned out to be incorrect. On February 7, 2010 I emailed the Met Office to tell them that I believed that I had found a wide-ranging problem in the data (and by extension the code used to generate the data) concerning error estimates surrounding the global warming trend. On February 24, 2010 the Met Office confirmed via their press office to Newsnight that I had found a genuine problem with the generation of “station errors” (part of the global warming error estimate).

In the written submission by Sir Edward Acton there are two paragraphs that concern the things I have looked at:

“3.4.7 CRU has been accused of the effective, if not deliberate, falsification of findings through deployment of “substandard” computer programs and documentation. But the criticized computer programs were not used to produce CRUTEM3 data, nor were they written for third-party users. They were written for/by researchers who understand their limitations and who inspect intermediate results to identify and solve errors.

3.4.8 The different computer program used to produce the CRUTEM3 dataset has now been released by the MOHC with the support of CRU.”

My points:

1. Although the code I criticized on Newsnight was not the CRUTEM3 code, the fact that the other code written at CRU was of low standard is relevant. My point on Newsnight was that it appeared that the organization writing the code did not adhere to standards one might find in professional software engineering. The code had easily identified bugs, no visible test mechanism, was not apparently under version control, and was poorly documented. It would not be surprising to find that other code written at the same organization was of similar quality. That I subsequently found a bug in the actual CRUTEM3 code only reinforces my opinion.

2. I would urge the committee to look into whether statement 3.4.8 is accurate. The Met Office has released code for calculating CRUTEM3, but they have not released everything (for example, they have not released the code for “station errors”, in which I identified a wide-ranging bug, or the code for generating the error range based on the station coverage), and when they released the code they did not indicate that it was the program normally used for CRUTEM3 (as implied by 3.4.8) but stated “[the code] takes the station data files and makes gridded fields in the same way as used in CRUTEM3.” Whether 3.4.8 is accurate or not probably rests on the interpretation of “in the same way as”. My reading is that this implies that the released code is not the actual code used for CRUTEM3. It would be worrying to discover that 3.4.8 is inaccurate, but I believe it should be clarified.

I rest at your disposition for further information, or to appear personally if necessary.

John Graham-Cumming

March 2010

http://wattsupwiththat.com/2010/03/04/crutem3-code-did-not-adhere-to-standards-one-might-find-in-professional-software-engineering/

Body-by-Guinness

  • Guest
How to Build Bad Stats
« Reply #302 on: March 05, 2010, 08:34:54 PM »
The Ease of Cheating With Statistics
Published under Bad Stats, Statistics

Thanks to readers Ari Schwartz and Tom Pollard for suggesting this article.
Take any two sets of numbers, where the only restriction is that a reasonable chunk of the numbers inside each set has to differ from one another. That is, we don’t want all the numbers inside a set to be equal to one another. We also want the sets to be, more or less, different, though it’s fine to have some matches. Make sure there are at least a dozen or two numbers in each set; for ease, each set should be the same size.
You could collect numbers like this in under two minutes. Just note the calories per “serving size” for a dozen different packages of food in your cupboard. That’s the first set. For the second, I don’t know, write down the total page counts from a dozen books (don’t count these! just look at the last page number and write that down).
All set? Type these into any spreadsheet in two columns. Label the first column “Outcome” and label the second column “Theory.” It doesn’t matter which is which. If you’re too lazy to go to the cupboard, just type a jumble of numbers by placing your fingers over the number keys and closing your eyes: however, this will make trouble for you later.
The next step is trickier, but painless for anybody who has had at least one course in “Applied Statistics.” You have to migrate your data from that spreadsheet so that it’s inside some statistical software. Any software will do.
OK so far? You now have to model “Outcome” as a function of “Theory.” Try linear regression first. What you’re after is a small p-value (less than the publishable 0.05) for the coefficient on “Theory.” Don’t worry if this doesn’t make sense to you, or if you don’t understand regression. All software is set up to flag small p-values.
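For anyone who wants to try this step mechanically, here is a minimal sketch in Python using scipy (my choice of tool; the article does not prescribe one). The toy numbers and the names outcome and theory are hypothetical stand-ins for the two spreadsheet columns:

# Minimal sketch of the regression step: regress "Outcome" on "Theory"
# and read off the p-value for the slope. The data are made-up placeholders.
from scipy import stats

outcome = [120, 90, 150, 200, 110, 95, 180, 140, 70, 160, 130, 105]    # e.g. calories per serving
theory = [320, 180, 410, 650, 240, 150, 580, 390, 120, 500, 350, 260]  # e.g. book page counts

fit = stats.linregress(theory, outcome)
print("slope:", round(fit.slope, 3))
print("p-value on the slope:", round(fit.pvalue, 4))  # "publishable" if it dips below 0.05

Any statistics package that reports a p-value on the slope will do the same job; the software neither knows nor cares that the columns are calories and page counts.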
If you find one—a small p-value, that is—then begin to write your scientific paper. It will be titled “Theory is associated with Outcome.” But you have to substitute “Theory” and “Outcome” with suitable scientific-sounding names based on the numbers you observed. The advantage of going to the cupboard instead of just typing numbers is now obvious.
For our example, “Outcome” is easy: “Calorie content”, but “Theory” is harder. How about “Literary attention span”? Longer books, after all, require a longer attention span.
Thus, if you find a publishable p-value, your title will read “Literary attention span is associated with diet”. If you know more about regression and can read the coefficient on “Theory”, then you might be cleverer and entitle your piece, “Lower literary attention spans associated with high caloric diets.” (It might be “Higher” attention spans if the regression coefficient is positive.)
That sounds plausible, does it not? It’s suitably scolding, too, just as we like our medical papers to be. We don’t want to hear about how gene X’s activity is modified in the presence of protein Y, we want admonishment! And we can deliver it with my method.
If you find a small p-value, all you have to do is to think up a Just-So story based on the numbers you have collected, and academic success is guaranteed. After your article is published, write a grant to explore the “issue” more deeply. For example, we haven’t even begun to look for racial disparities (the latest fad) in literary and body heft. You’re on your way!
But that only works if you find a small p-value. What if you don’t? Do not despair! Just because you didn’t find one with regression does not mean you can’t find one in another way. The beauty of classical statistics is that it was designed to produce success. You can find a small, publishable p-value in any set of data using ingenuity and elbow grease.
For a start, who said we had to use linear regression? Try a non-parametric test like the Mann-Whitney or Wilcoxon, or some other permutation test. Try non-linear regression like a neural net. Try MARS or some other kind of smoothing. There are dozens of tests available.
If none of those work, then try dichotomizing your data. Start with “Theory”: call all the page counts larger than some number “large”, and all those smaller “small.” Then go back and re-try all the tests you tried before. If that still doesn’t give satisfaction, un-dichotomize “Theory” and dichotomize “Outcome” in the same way. Now, a whole new world of classification methods awaits! There’s logistic regression, quadratic discrimination, and on and on and on… And I haven’t even told you about adding more numbers or adding more columns! Those tricks guarantee small p-values.
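To make the keep-trying-until-something-clears-0.05 step concrete, here is another hedged sketch in Python with scipy (the particular tests are my selection from the menu above, not the author's). It runs several procedures on the same two made-up columns and prints every p-value, so the analyst can report whichever one happens to be smallest:

# Run several different tests on the same two columns of made-up data.
# Each additional test is another lottery ticket for a p-value below 0.05.
from scipy import stats

outcome = [120, 90, 150, 200, 110, 95, 180, 140, 70, 160, 130, 105]
theory = [320, 180, 410, 650, 240, 150, 580, 390, 120, 500, 350, 260]

pvalues = {
    "linear regression": stats.linregress(theory, outcome).pvalue,
    "Spearman rank correlation": stats.spearmanr(theory, outcome).pvalue,
    "Kendall tau": stats.kendalltau(theory, outcome).pvalue,
}

# Dichotomize "Theory" at its median, then compare the two "Outcome" groups.
cut = sorted(theory)[len(theory) // 2]
low = [o for o, t in zip(outcome, theory) if t < cut]
high = [o for o, t in zip(outcome, theory) if t >= cut]
pvalues["Mann-Whitney on the median split"] = stats.mannwhitneyu(low, high).pvalue

for name, p in sorted(pvalues.items(), key=lambda kv: kv[1]):
    print(f"{name}: p = {p:.4f}")  # pick the smallest and write the paper

Running many procedures on one dataset multiplies the chance that at least one of them comes up "significant" by luck alone, which is exactly the trap the article is describing.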
In short, if you do not find a publishable p-value with your set of data, then you just aren’t trying hard enough.
Don’t believe just me. Here’s an article over at Ars Technica called “We’re so good at medical studies that most of them are wrong” that says the same thing. A statistician named Moolgavkar said “that two models can be fed an identical dataset, and still produce a different answer.”
The article says, “Moolgavkar also made a forceful argument that journal editors and reviewers needed to hold studies to a minimal standard of biological plausibility.” That’s a good start, but if we’re clever in our wording, we could convince an editor that a book length and calories correlation is biologically plausible.
The real solution? As always, prediction and replication. About which, we can talk another time.

http://wmbriggs.com/blog/?p=2043

Body-by-Guinness

  • Guest
Who's the Denier Now? I
« Reply #303 on: March 06, 2010, 01:08:03 PM »
In Denial

The meltdown of the climate campaign.

BY Steven F. Hayward

March 15, 2010, Vol. 15, No. 25
It is increasingly clear that the leak of the internal emails and documents of the Climate Research Unit at the University of East Anglia in November has done for the climate change debate what the Pentagon Papers did for the Vietnam war debate 40 years ago—changed the narrative decisively. Additional revelations of unethical behavior, errors, and serial exaggeration in climate science are rolling out on an almost daily basis, and there is good reason to expect more.

The U.N.’s Intergovernmental Panel on Climate Change (IPCC), hitherto the gold standard in climate science, is under fire for shoddy work and facing calls for a serious shakeup. The U.S. Climate Action Partnership, the self-serving coalition of environmentalists and big business hoping to create a carbon cartel, is falling apart in the wake of the collapse of any prospect of enacting cap and trade in Congress. Meanwhile, the climate campaign’s fallback plan to have the EPA regulate greenhouse gas emissions through the cumbersome Clean Air Act is generating bipartisan opposition. The British media—even the left-leaning, climate alarmists of the Guardian and BBC—are turning on the climate campaign with a vengeance. The somnolent American media, which have done as poor a job reporting about climate change as they did on John Edwards, have largely averted their gaze from the inconvenient meltdown of the climate campaign, but the rock solid edifice in the newsrooms is cracking. Al Gore was conspicuously missing in action before surfacing with a long article in the New York Times on February 28, reiterating his familiar parade of horribles: The sea level will rise! Monster storms! Climate refugees in the hundreds of millions! Political chaos the world over! It was the rhetorical equivalent of stamping his feet and saying “It is too so!” In a sign of how dramatic the reversal of fortune has been for the climate campaign, it is now James Inhofe, the leading climate skeptic in the Senate, who is eager to have Gore testify before Congress.

The body blows to the climate campaign did not end with the Climategate emails. The IPCC—which has produced four omnibus assessments of climate science since 1992—has issued several embarrassing retractions from its most recent 2007 report, starting with the claim that Himalayan glaciers were in danger of melting as soon as 2035. That such an outlandish claim would be so readily accepted is a sign of the credulity of the climate campaign and the media: Even if extreme global warming occurred over the next century, the one genuine scientific study available estimated that the huge ice fields of the Himalayas would take more than 300 years to melt—a prediction any beginning chemistry student could confirm with a calculator. (The actual evidence is mixed: Some Himalayan glaciers are currently expanding.) The source for the melt-by-2035 claim turned out to be not a peer-reviewed scientific assessment, but a report from an advocacy group, the World Wildlife Fund (WWF), which in turn lifted the figure from a popular magazine article in India whose author later disavowed his offhand speculation.

But what made this first retraction noteworthy was the way in which it underscored the thuggishness of the climate establishment. The IPCC’s chairman, Rajendra Pachauri (an economist and former railroad engineer who is routinely described as a “climate scientist”), initially said that critics of the Himalayan glacier melt prediction were engaging in “voodoo science,” though it later turned out that Pachauri had been informed of the error in early December—in advance of the U.N.’s climate change conference in Copenhagen—but failed to disclose it. He’s invoking the Charlie Rangel defense: It was my staff’s fault.

The Himalayan retraction has touched off a cascade of further retractions and corrections, though the IPCC and other organs of climate alarmism are issuing their corrections sotto voce, hoping the media won’t take notice. The IPCC’s assessment that 40 percent of the Amazonian rain forest was at risk of destruction from climate change was also revealed to be without scientific foundation; the WWF was again the source. The Daily Telegraph identified 20 more claims of ruin in the IPCC’s 2007 report that are based on reports from advocacy groups such as Greenpeace rather than peer-reviewed research, including claims that African agricultural production would be cut in half, estimates of coral reef degradation, and the scale of glacier melt in the Alps and the Andes. Numerous other claims were sourced to unpublished student papers and dissertations, or to misstated or distorted research.

Peer reviewers in the formal IPCC process had flagged many of these errors and distortions during the writing of the 2007 report but were ignored. For example, the IPCC claimed that the world is experiencing rapidly rising costs due to extreme weather related events brought on by climate change. But the underlying paper, when finally published in 2008, expressly contradicted this, saying, “We find insufficient evidence to claim a statistical relationship between global temperature increase and catastrophe losses.” Perhaps the most embarrassing walkback was the claim that 55 percent of the Netherlands was below sea level, and therefore gravely threatened by rising sea levels. The correct number is 26 percent, which Dutch scientists say they tried to tell the IPCC before the 2007 report was published, to no avail. And in any case, a paper published last year in Nature Geoscience predicting a 21st-century sea level rise of up to 32 inches has been withdrawn, with the authors acknowledging mistaken methodology and admitting “we can no longer draw firm conclusions regarding 21st century sea level rise from this study without further work.” The IPCC ignored several published studies casting doubt on its sea level rise estimates.

The IPCC isn’t the only important node of the climate campaign having its reputation run through the shredder. The 2006 Stern Review, a British report on the economics of climate change named for its lead author, Lord Nicholas Stern, was revealed to have quietly watered down some of its headline-grabbing claims in its final published report because, as the Telegraph put it, “the scientific evidence on which they were based could not be verified.” Like rats deserting a sinking ship, scientists and economists cited in the Stern Review have disavowed the misuse of their work. Two weeks ago the World Meteorological Organization pulled the rug out from under one of Gore’s favorite talking points—that climate change will mean more tropical storms. A new study by the top scientists in the field concluded that although warmer oceans might make for stronger tropical storms in the future, there has been no climate-related trend in tropical storm activity over recent decades and, further, there will likely be significantly fewer tropical storms in a warmer world. “We have come to substantially different conclusions from the IPCC,” said lead author Chris Landsea, a scientist at the National Hurricane Center in Florida. (Landsea, who does not consider himself a climate skeptic, resigned from the IPCC in 2005 on account of its increasingly blatant politicization.)

It was a thorough debunking, as Roger Pielke Jr.’s invaluable blog (rogerpielkejr.blogspot.com) noted in highlighting key findings in the study:

What about more intense rainfall? “[A] detectable change in tropical-cyclone-related rainfall has not been established by existing studies.” What about changes in location of storm formation, storm motion, lifetime and surge? “There is no conclusive evidence that any observed changes in tropical cyclone genesis, tracks, duration and surge flooding exceed the variability expected from natural causes.” Bottom line? “[W]e cannot at this time conclusively identify anthropogenic signals in past tropical cyclone data.”

When Pielke, an expert on hurricane damage at the University of Colorado at Boulder, pointed out defects in the purported global-warming/tropical storm link in a 2005 edition of the Bulletin of the American Meteorological Society, the lead author of the IPCC’s work on tropical storms, Kevin Trenberth, called the article “shameful,” said it should be “withdrawn,” but in typical fashion refused to debate Pielke about the substance of the article.

Finally, the original Climategate controversy over the leaked documents from the University of East Anglia’s Climate Research Unit (CRU) (see my “Scientists Behaving Badly,” The Weekly Standard, December 14, 2009) is far from over. The British government has determined that the CRU’s prolonged refusal to release documents sought in 95 Freedom of Information requests is a potential criminal violation.

The rout has opened up serious divisions within the formerly closed ranks of the climate campaign. Before Climategate, expressing skepticism about catastrophic global warming typically got the hefty IPCC report thrown in your face along with the mantra that “2,500 of the world’s top scientists all agree” about climate change. Now the IPCC is being disavowed like a Mission Impossible team with its cover blown. Senate Environment and Public Works chairman Barbara Boxer insisted on February 23 that she relied solely on U.S. scientific research and not the IPCC to support the EPA’s greenhouse gas “endangerment finding.” In her opening statement at a hearing, Boxer said, “I didn’t quote one international scientist or IPCC report. .  .  . We are quoting the American scientific community here.” The U.N. has announced that it will launch an “independent review” of the IPCC, though like the British investigation of the CRU, the U.N. review will probably be staffed by “settled science” camp followers who will obligingly produce a whitewash. But Pachauri’s days as IPCC chairman are likely numbered; there are mounting calls from within the IPCC for Pachauri to resign, amid charges of potential conflicts of interest (like Gore, Pachauri is closely involved with commercial energy schemes that benefit from greenhouse gas regulation) but also in part because Pachauri chose this delicate moment to publish a soft-core pornographic novel. (The main character is an aging environmentalist and engineer engaged in a “spiritual journey” that includes meeting Shirley MacLaine, detailed explorations of the Kama Sutra, and group sex.)

Robert Watson, Pachauri’s predecessor as chairman of the IPCC from 1997 to 2002, told the BBC: “In my opinion, Dr. Pachauri has to ask himself, is he still credible, and the governments of the world have to ask themselves, is he still credible.” Not the most ringing endorsement. Yvo de Boer, the head of the U.N.’s Framework Convention on Climate Change (the diplomatic contrivance that produced the Kyoto Protocol and the Copenhagen circus), announced his surprise resignation on February 18. De Boer will join the private sector after years of saying that warming is the greatest threat humanity has ever faced.

The climate campaign is a movement unable to hide its decline. Skeptics and critics of climate alarmism have long been called “deniers,” with the comparison to Holocaust denial made explicit, but the denier label now more accurately fits the climate campaigners. Their first line of defense was that the acknowledged errors amount to a few isolated and inconsequential points in the report of the IPCC’s Working Group II, which studies the effects of global warming, and not the more important report of the IPCC’s Working Group I, which is about the science of global warming. Working Group I, this argument goes, is where the real action is, as it deals with the computer models and temperature data on which the “consensus” conclusion is based that the Earth has warmed by about 0.8 degrees Celsius over the last century, that human-generated greenhouse gases are overwhelmingly responsible for this rise, and that we may expect up to 4 degrees Celsius of further warming if greenhouse gas emissions aren’t stopped by mid-century. As Gore put it in his February 28 Times article, “the overwhelming consensus on global warming remains unchanged.” I note in passing that the 2007 Working Group I report uses the terms “uncertain” or “uncertainty” more than 1,300 times in its 987 pages, including what it identified as 54 “key uncertainties” limiting our mastery of climate prediction.

This central pillar of the climate campaign is unlikely to survive much longer, and each repetition of the “science-is-settled” mantra inflicts more damage on the credibility of the climate science community. The scientist at the center of the Climategate scandal at the University of East Anglia, Phil (“hide the decline”) Jones, dealt the science-is-settled narrative a huge blow with his candid admission in a BBC interview that his surface temperature data are in such disarray they probably cannot be verified or replicated, that the medieval warm period may have been as warm as today, and that he agrees that there has been no statistically significant global warming for the last 15 years—all three points that climate campaigners have been bitterly contesting. And Jones specifically disavowed the “science-is-settled” slogan:

BBC: When scientists say “the debate on climate change is over,” what exactly do they mean, and what don’t they mean?

Jones: It would be supposition on my behalf to know whether all scientists who say the debate is over are saying that for the same reason. I don’t believe the vast majority of climate scientists think this. This is not my view. There is still much that needs to be undertaken to reduce uncertainties, not just for the future, but for the instrumental (and especially the palaeoclimatic) past as well [emphasis added].

Judith Curry, head of the School of Earth and Atmospheric Sciences at Georgia Tech and one of the few scientists convinced of the potential for catastrophic global warming who is willing to engage skeptics seriously, wrote February 24: “No one really believes that the ‘science is settled’ or that ‘the debate is over.’ Scientists and others that say this seem to want to advance a particular agenda. There is nothing more detrimental to public trust than such statements.”

The next wave of climate revisionism is likely to reopen most of the central questions of “settled science” in the IPCC’s Working Group I, starting with the data purporting to prove how much the Earth has warmed over the last century. A London Times headline last month summarizes the shocking revision currently underway: “World May Not Be Warming, Scientists Say.” The Climategate emails and documents revealed the disarray in the surface temperature records the IPCC relies upon to validate its claim of 0.8 degrees Celsius of human-caused warming, prompting a flood of renewed focus on the veracity and handling of surface temperature data. Skeptics such as Anthony Watts, Joseph D’Aleo, and Stephen McIntyre have been pointing out the defects in the surface temperature record for years, but the media and the IPCC ignored them. Watts and D’Aleo have painstakingly documented (and in many cases photographed) the huge number of temperature stations that have been relocated, corrupted by the “urban heat island effect,” or placed too close to heat sources such as air conditioning compressors, airports, buildings, or paved surfaces, as well as surface temperature series that are conveniently left out of the IPCC reconstructions and undercut the IPCC’s simplistic story of rising temperatures. The compilation and statistical treatment of global temperature records is hugely complex, but the skeptics such as Watts and D’Aleo offer compelling critiques showing that most of the reported warming disappears if different sets of temperature records are included, or if compromised station records are excluded.


Body-by-Guinness

  • Guest
Who's the Denier Now? II
« Reply #304 on: March 06, 2010, 01:08:24 PM »
The puzzle deepens when more accurate satellite temperature records, available starting in 1979, are considered. There is a glaring anomaly: The satellite records, which measure temperatures in the middle and upper atmosphere, show very little warming since 1979 and do not match up with the ground-based measurements. Furthermore, the satellite readings of the middle- and upper-air temperatures fail to record any of the increases the climate models say should be happening in response to rising greenhouse gas concentrations. John Christy of the University of Alabama, a contributing author to the IPCC’s Working Group I chapter on surface and atmospheric climate change, tried to get the IPCC to acknowledge this anomaly in its 2007 report but was ignored. (Christy is responsible for helping to develop the satellite monitoring system that has tracked global temperatures since 1979. He received NASA’s Medal for Exceptional Scientific Achievement for this work.) Bottom line: Expect some surprises to come out of the revisions of the surface temperature records that will take place over the next couple of years.

Eventually the climate modeling community is going to have to reconsider the central question: Have the models the IPCC uses for its predictions of catastrophic warming overestimated the climate’s sensitivity to greenhouse gases? Two recently published studies funded by the U.S. Department of Energy, one by Brookhaven Lab scientist Stephen Schwartz in the Journal of Geophysical Research, and one by MIT’s Richard Lindzen and Yong-Sang Choi in Geophysical Research Letters, both argue for vastly lower climate sensitivity to greenhouse gases. The models the IPCC uses for projecting a 3 to 4 degree Celsius increase in temperature all assume large positive (that is, temperature-magnifying) feedbacks from a doubling of carbon dioxide in the atmosphere; Schwartz, Lindzen, and Choi discern strong negative (or temperature-reducing) feedbacks in the climate system, suggesting an upper-bound of future temperature rise of no more than 2 degrees Celsius.
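For readers unfamiliar with the feedback bookkeeping being argued over here, a minimal sketch of the standard textbook relation (generic symbols of my choosing, not taken from the papers cited above): let \( \Delta T_{0} \) be the direct, no-feedback warming from a doubling of CO2 (commonly put at roughly 1.1 to 1.2 degrees Celsius) and let \( f \) be the net feedback factor. The equilibrium response is then

\[
  \Delta T \;=\; \frac{\Delta T_{0}}{1 - f}.
\]

With a strongly positive net feedback, say \( f \) around 0.6 to 0.7, this amplifies to the 3 to 4 degree Celsius range the IPCC models project; with a net negative feedback, \( f < 0 \), the response falls below \( \Delta T_{0} \), which is the sense in which Schwartz, Lindzen, and Choi argue for much lower sensitivity.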

If the climate system is less sensitive to greenhouse gases than the climate campaign believes, then what is causing plainly observable changes in the climate, such as earlier arriving springs, receding glaciers, and shrinking Arctic Ocean ice caps? There have been alternative explanations in the scientific literature for several years, ignored by the media and the IPCC alike. The IPCC downplays theories of variations in solar activity, such as sunspot activity and gamma ray bursts, and although there is robust scientific literature on the issue, even the skeptic community is divided about whether solar activity is a primary cause of recent climate variation. Several studies of Arctic warming conclude that changes in ocean currents, cloud formation, and wind patterns in the upper atmosphere may explain the retreat of glaciers and sea ice better than greenhouse gases. Another factor in the Arctic is “black carbon”—essentially fine soot particles from coal-fired power plants and forest fires, imperceptible to the naked eye but reducing the albedo (solar reflectivity) of Arctic ice masses enough to cause increased summertime ice melt. Above all, if the medieval warm period was indeed as warm or warmer than today, we cannot rule out the possibility that the changes of recent decades are part of a natural rebound from the “Little Ice Age” that followed the medieval warm period and ended in the 19th century. Skeptics have known and tried to publicize all of these contrarian or confounding scientific findings, but the compliant news media routinely ignored all of them, enabling the IPCC to get away with its serial exaggeration and blatant advocacy for more than a decade.

The question going forward is whether the IPCC will allow contrarian scientists and confounding scientific research into its process, and include the opportunity for dissenting scientists to publish a minority report. Last March, John Christy sent a proposal to the 140 authors of IPCC Working Group I asking “that the IPCC allow for well-credentialed climate scientists to craft a chapter on an alternative view presenting evidence for lower climate sensitivity to greenhouse gases than has been the IPCC’s recent message—all based on published information. .  .  . An alternative view is necessary, one that is not censured for the so-called purpose of consensus. This will present to our policymakers an honest picture of scientific discourse and process.” Christy received no response.

In the aftermath of Climategate, Christy proposed in Nature magazine that the IPCC move to a Wikipedia-style format, in which lead authors would mediate an ongoing discussion among scientists, with the caveat that all claims would need to be based on original studies and data. Such a process would produce more timely and digestible information than the huge twice-a-decade reports the IPCC now produces. Christy told me that he does not hold out much hope for serious IPCC reform. Although he was a lead author in the IPCC’s 2001 report and a contributing author for the 2007 report, the Obama administration has not nominated Christy to participate in the next report. IPCC participants are nominated by governments (a “gatekeeping exercise,” Christy rightly notes). The nomination period closes next week.

Even a reformed IPCC that offered a more balanced account of climate science would make little difference to the fanatical climate campaigners, whose second line of defense is to double-down on demonizing skeptics and “deniers.” Greenpeace, which should be regarded as the John Birch Society of the environmental movement, is filing its own Freedom of Information Act and state public record act requests to obtain private emails and documents from university-based climate skeptics such as Christy, Pat Michaels (University of Virginia), David Legates (University of Delaware), and Willie Soon (Harvard University/Smithsonian Center for Astrophysics), hoping to stir up a scandal commensurate with Climategate by hyping a supposed nefarious link between such researchers and energy companies. Greenpeace has sent letters to nongovernmental skeptics and organizations requesting that they submit to polygraph examinations about their role in or knowledge of the “illegally hacked” CRU emails. “We want to do our part,” Greenpeace’s letter reads, “to help international law enforcement get to the bottom of this potentially criminal act by putting some basic questions to people whose bank accounts, propaganda efforts or influence peddling interests benefitted from the theft.” One wonders whether Greenpeace has really thought this through, as a successful FOIA request for the emails of American scientists would open the floodgates to further probing of James Hansen at NASA, Michael Mann at Penn State, and other government climate scientists who probably wrote emails as embarrassing or crude as Phil Jones and the CRU circle.

Greenpeace is hardly alone in its paranoia. Britain’s former chief government science adviser, Sir David King, popped off to the press in early February that a foreign intelligence service working with American industry lobbyists—he intimated that he had the CIA and ExxonMobil in mind—was responsible for hacking the CRU emails last year. King backed away from this claim the next day, admitting he had no information to back it up.

The climate campaign camp followers are exhausting their invective against skeptics. Harvard’s Jeffrey Sachs wrote in the Guardian that climate skeptics are akin to tobacco scientists—some of the same people, in fact, though he gave no names and offered no facts to establish such a claim. In the Los Angeles Times Bill McKibben compared climate skeptics to O.J. Simpson’s “dream team” of defense attorneys able to twist incontrovertible scientific evidence. Not to be outdone, Senator Bernie Sanders (Socialist-VT) compared climate skeptics to appeasers of Hitler in the 1930s, a comparison, to be sure, that Al Gore has been making since the early 1990s, but Sanders delivered it with his patented popping-neck-veins style that makes you worry for his health.

In addition to being a sign of desperation, these ad hominem arguments from the climate campaigners also make clear which camp is truly guilty of anti-intellectualism. Gore and the rest of the chorus simply will not discuss any of the scientific anomalies and defects in the conventional climate narrative that scientists such as Christy have pointed out to the IPCC. Perhaps the climate campaign’s most ludicrous contortion is their response to the record snowfall of the eastern United States over the last two months. The ordinary citizen, applying Occam’s Razor while shoveling feet of snow, sees global warming as a farce. The climate campaigners now insist that “weather is not climate,” and that localized weather events, even increased winter snowfall, can be consistent with climate change. They may be right about this, though even the IPCC cautions that we still have little ability to predict regional climate-related weather changes. These are the same people, however, who jumped up and down that Hurricane Katrina was positive proof that catastrophic global warming had arrived, though the strong 2005 hurricane season was followed by four quiet years for tropical storms that made a hash of that talking point.

The ruckus about “weather is not climate” exposes the greatest problem of the climate campaign. Al Gore and his band of brothers have been happy to point to any weather anomaly—cold winters, warm winters, in-between winters—as proof of climate change. But the climate campaigners cannot name one weather pattern or event that would be inconsistent with their theory. Pretty convenient when your theory works in only one direction.

The unraveling of the climate campaign was entirely predictable, though not the dramatic swiftness with which it arrived. The long trajectory of the climate change controversy conforms exactly to the “issue-attention cycle” that political scientist Anthony Downs explained in the Public Interest almost 40 years ago. Downs laid out a five-stage cycle through which political issues of all kinds typically pass. A group of experts and interest groups begin promoting a problem or crisis, which is soon followed by the alarmed discovery of the problem by the news media and broader political class. This second stage typically includes a large amount of euphoric enthusiasm—you might call this the dopamine stage—as activists conceive the issue in terms of global salvation and redemption. One of the largest debilities of the climate campaign from the beginning was their having conceived the issue not as a practical problem, like traditional air pollution, but as an expression, in Gore’s view, of deeper spiritual and even metaphysical problems arising from our “dysfunctional civilization.” Gore is still thinking about the issue in these terms, grasping for another dopamine rush. In his February 28 New York Times article, he claimed that an international climate treaty would be “an instrument of human redemption.”

The third stage is the hinge. As Downs explains, there comes “a gradually spreading realization that the cost of ‘solving’ the problem is very high indeed.” This is where we have been since the Kyoto process proposed completely implausible near-term reductions in fossil fuel energy—a fanatical monomania the climate campaign has been unable to shake. In retrospect it is now possible to grasp the irony that President George W. Bush’s open refusal to embrace the Kyoto framework kept the climate campaign alive by providing an all-purpose excuse for the lack of “progress” toward a binding treaty. With Bush gone, the intrinsic weakness of the carbon-cutting charade is impossible to hide, though Gore and the climate campaigners are now trying to blame the U.S. Senate for the lack of international agreement.

“The previous stage,” Downs continued, “becomes almost imperceptibly transformed into the fourth stage: a gradual decline in the intensity of public interest in the problem.” Despite the relentless media drumbeat, Gore’s Academy Award and Nobel Prize twofer, and millions of dollars in paid advertising, public concern for climate change has been steadily waning for several years. In the latest Pew survey of public priorities released in January, climate change came in dead last, ranked 21st out of 21 issues of concern, with just 28 percent saying the issue should be a top priority for Congress and President Obama. That’s down 10 points over the last three years.

A separate Pew poll taken last October, before Climategate, reported a precipitous drop in the number of Americans who think there is “solid evidence” of global warming, from 71 percent in 2008 to 57 percent in 2009; the number who think humans are responsible for warming dropped in the Pew poll from 47 to 36 percent. Surveys from Rasmussen and other pollsters find similar declines in public belief in human-caused global warming; European surveys are reporting the same trend. In Gallup’s annual survey of environmental issues, taken last spring, respondents ranked global warming eighth out of eight environmental issues Gallup listed; the number of people who say they “worry a great deal” about climate change has fallen from 41 to 34 percent over the last three years. Gallup’s Lydia Saad commented: “Not only does global warming rank last on the basis of the total percentage concerned either a great deal or a fair amount, but it is the only issue for which public concern dropped significantly in the past year.”

“In the final [post-problem] stage,” Downs concluded, “an issue that has been replaced at the center of public concern moves into a prolonged limbo—a twilight realm of lesser attention or spasmodic recurrences of interest.” The death rattle of the climate campaign will be deafening. It has too much political momentum and fanatical devotion to go quietly. The climate campaigners have been fond of warning of catastrophic “tipping points” for years. Well, a tipping point has indeed arrived—just not the one the climate campaigners expected.

The lingering question is whether the collapse of the climate campaign is also a sign of a broader collapse in public enthusiasm for environmentalism in general. Ted Nordhaus and Michael Shellenberger, two of the more thoughtful and independent-minded figures in the environmental movement, have been warning their green friends that the public has reached the point of “apocalypse fatigue.” They’ve been met with denunciations from the climate campaign enforcers for their heresy. The climate campaign has no idea that it is on the cusp of becoming as ludicrous and forlorn as the World Esperanto Association.

Steven F. Hayward is the F.K. Weyerhaeuser fellow at the American Enterprise Institute, and author of the forthcoming Almanac of Environmental Trends (Pacific Research Institute).

http://www.weeklystandard.com/articles/denial

Body-by-Guinness

  • Guest
Datasets Bite the Dust
« Reply #305 on: March 11, 2010, 12:09:54 PM »
Climategate: Three of the Four Temperature Datasets Now Irrevocably Tainted
Posted by Christopher Horner on March 11, 2010, in Column 1, Money, Science, Science & Technology, US News, World News

The warmist response to Climategate — the discovery of the thoroughly corrupt practices of the Climate Research Unit (CRU) — was that the tainted CRU dataset was just one of four [1] independent data sets. You know. So really there’s no big deal.

Thanks to a FOIA request whose document production I am presently plowing through — and before that, thanks to the great work of Steve McIntyre and, particularly, to the recent, comprehensive work of Joseph D’Aleo and Anthony Watts [2] — we know that NASA’s Goddard Institute for Space Studies (GISS) passed no one’s test for credibility. Not even NASA’s.

In fact, CRU’s former head, Phil Jones, even told his buddies that while people may think his dataset — which required all of those “fudge factors” (their words) — is troubled, “GISS is inferior” to CRU.

Really [3].

NASA’s temperature data is so woeful that James Hansen’s colleague Reto Ruedy told the USA Today weather editor:

“My recommendation to you is to continue using … CRU data for the global mean [temperatures]. … What we do is accurate enough” — left unspoken: for government work — “[but] we have no intention to compete with either of the other two organizations in what they do best.”

To reiterate, NASA’s temperature data is worse than the Climategate temperature data. According to NASA.

And apparently, although these points were never stressed publicly before, NASA GISS is just “basically a modeling group forced into rudimentary analysis of global observed data.” Now, however, NASA GISS “happily [combines the National Climatic Data Center (NCDC) data] and Hadley Center’s data” for the purpose of evaluating NASA’s models.

So — Climategate’s CRU was just “one of four organizations worldwide that have independently compiled thermometer measurements of local temperatures from around the world to reconstruct the history of average global surface temperature.”

But one of the three remaining sets is not credible either, and definitely not independent.

Two down, two to go.

Reto Ruedy refers his inquiring (ok, credulous) reporter to NCDC — the third of the four data sets — as being the gold standard for U.S. temperatures.

But NCDC has been thoroughly debunked elsewhere — Joseph D’Aleo and Anthony Watts have found NCDC’s record not remotely credible, since NCDC has made a practice of dropping cooler temperature stations over time, exaggerating the appearance of warming.

Three out of the four temperature datasets stink, with corroboration from the alarmists. Second-sourced, no less.

Anyone know if Japan has a FOIA?

Article printed from Pajamas Media: http://pajamasmedia.com

URL to article: http://pajamasmedia.com/blog/climategate-three-of-the-four-temperature-datasets-now-irrevocably-tainted/

URLs in this post:

[1] just one of four: http://www.pewclimate.org/docUploads/east-anglia-cru-hacked-emails-12-09-09.pdf
[2] Joseph D’Aleo and Anthony Watts: http://scienceandpublicpolicy.org/images/stories/papers/originals/surface_temp.pdf
[3] Really: http://wattsupwiththat.files.wordpress.com/2009/11/giss_is_inferior.png

Body-by-Guinness

  • Guest
The Explications Begin
« Reply #306 on: March 11, 2010, 05:45:20 PM »
The case against the hockey stick
MATT RIDLEY   10th March 2010  —  Issue 168 
The "hockey stick" temperature graph is a mainstay of global warming science. A new book tells of one man's efforts to dismantle it—and deserves to win prizes

Messy, truncated data? Trees on the Gaspé Peninsula in Canada: one of several pieces of evidence for rising temperatures that have been called into question

Andrew Montford’s The Hockey Stick Illusion is one of the best science books in years. It exposes in delicious detail, datum by datum, how a great scientific mistake of immense political weight was perpetrated, defended and camouflaged by a scientific establishment that should now be red with shame. It is a book about principal components, data mining and confidence intervals—subjects that have never before been made thrilling. It is the biography of a graph.

I can remember when I first paid attention to the “hockey stick” graph at a conference in Cambridge. The temperature line trundled along with little change for centuries, then shot through the roof in the 20th century, like the blade of an ice-hockey stick. I had become somewhat of a sceptic about the science of climate change, but here was emphatic proof that the world was much warmer today; and warming much faster than at any time in a thousand years. I resolved to shed my doubts. I assumed that since it had been published in Nature—the Canterbury Cathedral of scientific literature—it was true.

I was not the only one who was impressed. The graph appeared six times in the Intergovernmental Panel on Climate Change (IPCC)’s third report in 2001. It was on display as a backdrop at the press conference to launch that report. James Lovelock pinned it to his wall. Al Gore used it in his film (though describing it as something else and with the Y axis upside down). Its author shot to scientific stardom. “It is hard to overestimate how influential this study has been,” said the BBC. The hockey stick is to global warming what St Paul was to Christianity.

Of course, there is other evidence for global warming, but none of it proves that the recent warming is unprecedented. Indeed, quite the reverse: surface temperatures, sea levels, tree lines, glacier retreats, summer sea ice extent in the Arctic, early spring flowers, bird migration, droughts, floods, storms—they all show change that is no different in speed or magnitude from other periods, like 1910-1940, at least as far as can be measured. There may be something unprecedented going on in temperature, but the only piece of empirical evidence that actually says so—yes, the only one—is the hockey stick.

And the hockey stick is wrong. The emails that were leaked from the University of East Anglia late last year are not proof of this; they are merely the icing on the lake, proof that some of the scientists closest to the hockey stick knew all along that it was problematic. Andrew Montford’s book, despite its subtitle, is not about the emails, which are tagged on as a last chapter. It is instead built around the long, lonely struggle of one man— Stephen McIntyre—to understand how the hockey stick was made, with what data and what programs.

A retired mining entrepreneur with a mathematical bent, McIntyre asked the senior author of the hockey stick graph, Michael Mann, for the data and the programs in 2003, so he could check it himself. This was five years after the graph had been published, but Mann had never been asked for them before. McIntyre quickly found errors: mislocated series, infilled gaps, truncated records, old data extrapolated forwards where new was available, and so on.

Not all the data showed a 20th century uptick either. In fact just 20 series out of 159 did, and these were nearly all based on tree rings. In some cases, the same tree ring sets had been used in different series. In the end the entire graph got its shape from a few bristlecone and foxtail pines in the western United States; a messy tree-ring data set from the Gaspé Peninsula in Canada; another Canadian set that had been truncated 17 years too early called, splendidly, Twisted Tree Heartrot Hill; and a superseded series from Siberian larch trees. There were problems with all these series: for example, the bristlecone pines were probably growing faster in the 20th century because of more carbon dioxide in the air, or recovery after “strip bark” damage, not because of temperature change.

This was bad enough; worse was to come. Mann soon stopped cooperating, yet, after a long struggle, McIntyre found out enough about Mann’s programs to work out what he had done. The result was shocking. He had standardised the data by “short-centering” them—essentially subtracting them from a 20th century average rather than an average of the whole period. This meant that the principal component analysis “mined” the data for anything with a 20th century uptick, and gave it vastly more weight than data indicating, say, a medieval warm spell.
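
For readers who want to see the mechanism rather than take it on faith, here is a minimal sketch in Python. It uses synthetic red-noise "proxies" and plain numpy; it illustrates the short-centering effect in general and makes no claim to reproduce Mann's actual code or data:

[code]
# Minimal sketch (synthetic AR(1) "proxy" noise, numpy only): why "short-centering",
# i.e. subtracting the modern-period mean instead of the full-period mean,
# pushes the first principal component toward series that deviate at the end.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_series, modern = 581, 70, 79            # e.g. 1400-1980, last 79 "years"

# Red-noise proxies: AR(1) with modest persistence and no climate signal at all.
noise = rng.standard_normal((n_years, n_series))
proxies = np.empty_like(noise)
proxies[0] = noise[0]
for t in range(1, n_years):
    proxies[t] = 0.3 * proxies[t - 1] + noise[t]

def first_pc(data, centering_rows):
    """First principal component after subtracting the mean of `centering_rows` only."""
    centered = data - data[centering_rows].mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pc1 = centered @ vt[0]
    return pc1 / pc1.std()

pc_full  = first_pc(proxies, slice(None))            # conventional centering
pc_short = first_pc(proxies, slice(-modern, None))   # "short-centered"

def blade(pc):
    """How far the modern segment departs from the rest of the record."""
    return abs(pc[-modern:].mean() - pc[:-modern].mean())

print("blade with full centering :", round(blade(pc_full), 2))
print("blade with short centering:", round(blade(pc_short), 2))
[/code]

Run it with different seeds and the short-centered first component tends to keep showing an offset between the modern segment and the rest of the record, even though the input is pure noise; that is the bias McIntyre identified.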

Well, it happens. People make mistakes in science. Corrections get made. That’s how it works, is it not? Few papers get such scrutiny as this had. But that is an even more worrying thought: how much dodgy science is being published without the benefit of an audit by McIntyre’s ilk? As a long-time champion of science, I find the reaction of the scientific establishment more shocking than anything. The reaction was not even a shrug: it was shut-eyed denial.

If this had been a drug trial done by a pharmaceutical company, the scientific journals, the learned academies and the press would soon have rushed to discredit it—and rightly so. Instead, they did not want to know. Nature magazine, which had published the original study, went out of its way to close its ears to McIntyre’s criticisms, even though they were upheld by the reviewers it appointed. So did the National Academy of Sciences in the US, even when two reports commissioned by Congress upheld McIntyre. So, of course, did the IPCC, which tied itself in knots changing its deadlines so it could include flawed references to refutations of McIntyre while ignoring complaints that it had misquoted him.

The IPCC has taken refuge in saying that other recent studies confirm the hockey stick but, if you take those studies apart, the same old bad data sets keep popping out: bristlecone pines and all. A new Siberian data series from a place called Yamal showed a lovely hockey stick but, after ten years of asking, McIntyre finally got hold of the data last autumn and found that it relied heavily on just one of twelve trees, when far larger samples from the same area were available showing no uptick. Another series from Finnish lake sediments also showed a gorgeous hockey stick, but only if used upside down. McIntyre just keeps on exposing scandal after scandal in the way these data were analysed and presented.

Montford’s book is written with grace and flair. Like all the best science writers, he knows that the secret is not to leave out the details (because this just results in platitudes and leaps of faith), but rather to make the details delicious, even to the most unmathematical reader. I never thought I would find myself unable to put a book down because—sad, but true—I wanted to know what happened next in an r-squared calculation. This book deserves to win prizes.

Oh, and by the way, I have a financial interest in coal mining, though not as big as Al Gore has in carbon trading. Maybe you think it makes me biased. Read the book and judge for yourself.

The Hockey Stick Illusion is published by Stacey International, 482 pages, £10.99

http://www.prospectmagazine.co.uk/2010/03/the-case-against-the-hockey-stick/

Rarick

  • Guest
Re: Pathological Science
« Reply #307 on: March 19, 2010, 02:58:08 AM »
http://discovermagazine.com/2010/apr/10-it.s-gettin-hot-in-here-big-battle-over-climate-science

I figured you might want to see something fairly mainstream.  Also you get to see what one of the scientists under fire looks like.  I also found it interesting how the article contrasts the two scientists.

Body-by-Guinness

  • Guest
Re: Pathological Science
« Reply #308 on: March 19, 2010, 05:53:16 AM »
Yeah, I read that when it came out and was mildly annoyed that the mainstream science mags still can't find a sceptic to interview, instead finding someone in the AGW camp who spouts radical concepts like scientists should be skeptical by nature. Yes, and they should tinkle rather than let their bladders burst, too. Paging Captain Obvious. At least it was nice to see McIntyre's role in things spoken about in kind terms that outline his idealistic and pro bono role in all this.

As for Mann, when doesn't he come off as a whiny, self-obsessed, pompous, unctuous, unprincipled weasel who thinks his proprietary interest in AGW panic mongering somehow allows him to dictate the terms of debate? Like to slap his smarmy smirk right off his face.

Rarick

  • Guest
Re: Pathological Science
« Reply #309 on: March 20, 2010, 04:08:03 AM »
Her role.  But that is all you are going to get out of the mainstream.  They like to be "reconciliatory" rather than have a real debate going.  Maybe they should have a Pro, a Con, and a Conciliator, a three-view paradigm from now on, so everyone can get their facts straight?

They are still trying to go green tho'.  All the articles they published on their own initiative in favor of green make it REAL hard to start moving the other way too quickly, don't they?  Maybe a bit of better peer review would have saved them from the embarrassment they are trying to avoid now.  THAT is why the MSM is slinking around trying to avoid the gorilla on the buffet table.  A lot of scientists with them are suffering the same ego-bubble pop (that stings, I suffered it too, it is part of learning).

Body-by-Guinness

  • Guest
Stats Often Don't Add Up, I
« Reply #310 on: March 20, 2010, 11:59:45 AM »
Something of a bellwether here, I think. Though it speaks not at all about AGW, this piece in a mainstream science pub quite directly attacks statistical methods upon which much of the AGW panic is founded.

Odds Are, It's Wrong
Science fails to face the shortcomings of statistics
By Tom Siegfried, March 27th, 2010; Vol. 177 #7 (p. 26)
[Figure: A P value is the probability of an observed (or more extreme) result arising only from chance. S. Goodman, adapted by A. Nandy]

For better or for worse, science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.

During the past century, though, a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.

It’s science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.

Replicating a result helps establish its validity more securely, but the common tactic of combining numerous studies into one analysis, while sound in principle, is seldom conducted properly in practice.

Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science’s love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn’t believe what you read in the scientific literature.

“There is increasing concern,” declared epidemiologist John Ioannidis in a highly cited 2005 paper in PLoS Medicine, “that in modern research, false findings may be the majority or even the vast majority of published research claims.”

Ioannidis claimed to prove that more than half of published findings are false, but his analysis came under fire for statistical shortcomings of its own. “It may be true, but he didn’t prove it,” says biostatistician Steven Goodman of the Johns Hopkins University School of Public Health. On the other hand, says Goodman, the basic message stands. “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.”

Nobody contends that all of science is wrong, or that it hasn’t compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. “A lot of scientists don’t understand statistics,” says Goodman. “And they don’t understand statistics because the statistics don’t make sense.”

Statistical insignificance

Nowhere are the problems with statistics more blatant than in studies of genetic influences on disease. In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients that had the syndrome with a group of 650 (matched for sex and age) that didn’t, only one of the suspect gene variants turned up substantially more often in those with the syndrome — a number to be expected by chance.

“Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor” for the syndrome, the researchers reported in the Journal of the American Medical Association.

How could so many studies be wrong? Because their conclusions relied on “statistical significance,” a concept at the heart of the mathematical analysis of modern scientific experiments.

Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control.

Fisher first assumed that fertilizer caused no difference — the “no effect” or “null” hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05 — meaning the chance of a fluke is less than 5 percent — the result should be declared “statistically significant,” Fisher arbitrarily declared, and the no effect hypothesis should be rejected, supposedly confirming that fertilizer works.
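
Purely as an illustration (made-up plot yields, scipy assumed available), Fisher's recipe amounts to a few lines of Python:

[code]
# Minimal sketch of Fisher's recipe, with made-up plot yields (scipy assumed available):
# assume "no effect", compute the probability of a difference at least this large
# arising by chance, and call it "significant" if that P value is below .05.
from scipy import stats

fertilized = [29.1, 31.4, 30.2, 32.0, 28.7, 30.9]   # hypothetical yields per plot
control    = [27.8, 28.9, 29.5, 27.2, 28.4, 29.0]

t_stat, p_value = stats.ttest_ind(fertilized, control)
print(f"P value = {p_value:.3f}")
print("verdict:", "statistically significant" if p_value < 0.05 else "not significant")
# As the article stresses, a small P value cannot say whether fertilizer works or
# this was an improbable fluke, and a large one proves nothing either.
[/code]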

Fisher’s P value eventually became the ultimate arbiter of credibility for science results of all sorts — whether testing the health effects of pollutants, the curative powers of new drugs or the effect of genes on behavior. In various forms, testing for statistical significance pervades most of scientific and medical research to this day.

But in fact, there’s no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher’s method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn’t prove anything, either. Perhaps the effect doesn’t exist, or maybe the statistical test wasn’t powerful enough to detect a small but real effect.

“That test itself is neither necessary nor sufficient for proving a scientific result,” asserts Stephen Ziliak, an economic historian at Roosevelt University in Chicago.

Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a “false positive” — concluding an effect is real when it actually isn’t. What  eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst. As a result, most scientists are confused about the meaning of a P value or how to interpret it. “It’s almost never, ever, ever stated correctly, what it means,” says Goodman.

Correctly phrased, experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: “This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance.”

That interpretation commits an egregious logical error (technical term: “transposed conditional”): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time. (See Box 2)

Another common error equates statistical significance to “significance” in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect “statistical significance” for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures — not clinically significant. Similarly, when studies claim that a chemical causes a “significantly increased risk of cancer,” they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk.

Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields — psychology, medicine and economics among others — and reported frequent disregard for the distinction.

“I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution” of equating statistical significance to importance, he said in an interview. Ziliak’s data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.

Multiplicity of mistakes

Even when “significance” is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the “multiplicity” issue — the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly. Experiments on altered gene activity in diseases may test 20,000 genes at once, for instance. Using a P value of .05, such studies could find 1,000 genes that appear to differ even if none are actually involved in the disease. Setting a higher threshold of statistical significance will eliminate some of those flukes, but only at the cost of eliminating truly changed genes from the list. In metabolic diseases such as diabetes, for example, many genes truly differ in activity, but the changes are so small that statistical tests will dismiss most as mere fluctuations. Of hundreds of genes that misbehave, standard stats might identify only one or two. Altering the threshold to nab 80 percent of the true culprits might produce a list of 13,000 genes — of which over 12,000 are actually innocent.

Recognizing these problems, some researchers now calculate a “false discovery rate” to warn of flukes disguised as real effects. And genetics researchers have begun using “genome-wide association studies” that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).
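
A minimal sketch of both points, using simulated null P values (numpy only; the correction shown is the standard Benjamini-Hochberg false-discovery-rate procedure, not the method of any particular study):

[code]
# Minimal sketch of the multiplicity problem (simulated data, numpy only):
# 20,000 "genes", none truly involved in the disease.
import numpy as np

rng = np.random.default_rng(1)
p_values = rng.uniform(0, 1, 20_000)        # null hypotheses give uniform P values
print("naive hits at P < .05:", int((p_values < 0.05).sum()))    # roughly 1,000 flukes

# Benjamini-Hochberg false-discovery-rate control at 5 percent: keep the largest k
# such that the k-th smallest P value is no bigger than (k / m) * 0.05.
m = len(p_values)
ranked = np.sort(p_values)
passes = ranked <= (np.arange(1, m + 1) / m) * 0.05
k = int(np.nonzero(passes)[0].max()) + 1 if passes.any() else 0
print("hits surviving FDR control:", k)     # almost always 0 for pure noise
[/code]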

Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.
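
A one-line illustration of that kinship, using a hypothetical effect estimate and the usual normal approximation:

[code]
# A hypothetical effect estimate and its standard error (normal approximation).
import math

estimate, std_error = 2.0, 0.9
z = estimate / std_error
p_two_sided = math.erfc(abs(z) / math.sqrt(2))
ci_low, ci_high = estimate - 1.96 * std_error, estimate + 1.96 * std_error

print(f"95% CI: ({ci_low:.2f}, {ci_high:.2f})   two-sided P = {p_two_sided:.3f}")
# The interval excludes zero exactly when P < .05: same math, same assumptions,
# although the interval does convey the size and precision of the effect.
[/code]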

Clinical trials and errors

Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.

Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative. (See Box 3) Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others.
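
A small simulation makes the point concrete (hypothetical trial sizes, numpy only):

[code]
# Minimal sketch (simulated trials, numpy only): how often does simple randomization
# leave a 50/50 patient characteristic unbalanced by more than 10 percentage points?
import numpy as np

rng = np.random.default_rng(2)

def imbalance_rate(patients_per_arm, trials=10_000, gap=0.10):
    trait_a = rng.random((trials, patients_per_arm)) < 0.5   # arm A, unmeasured trait
    trait_b = rng.random((trials, patients_per_arm)) < 0.5   # arm B, same trait
    diff = np.abs(trait_a.mean(axis=1) - trait_b.mean(axis=1))
    return (diff > gap).mean()

for n in (25, 100, 400):
    print(f"{n:>4} per arm: {imbalance_rate(n):.1%} of trials off by more than 10 points")
# Larger trials even out, but any single small trial can easily be lopsided
# on characteristics nobody measured.
[/code]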

“Determining the best treatment for a particular patient is fundamentally different from determining which treatment is best on average,” physicians David Kent and Rodney Hayward wrote in American Scientist in 2007. “Reporting a single number gives the misleading impression that the treatment-effect is a property of the drug rather than of the interaction between the drug and the complex risk-benefit profile of a particular group of patients.”

Another concern is the common strategy of combining results from many trials into a single “meta-analysis,” a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included — published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. “That’s not a formal part of most meta-analyses,” he says.

Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.

In principle, a proper statistical analysis can suggest an actual risk even though the raw numbers show a benefit. But in this case the criteria justifying such statistical manipulations were not met. In some of the trials, Avandia was given along with other drugs. Sometimes the non-Avandia group got placebo pills, while in other trials that group received another drug. And there were no common definitions.

“Across the trials, there was no standard method for identifying or validating outcomes; events ... may have been missed or misclassified,” Bruce Psaty and Curt Furberg wrote in an editorial accompanying the New England Journal report. “A few events either way might have changed the findings.”


Body-by-Guinness

  • Guest
Stats Often Don't Add Up, II
« Reply #311 on: March 20, 2010, 12:00:08 PM »
More recently, epidemiologist Charles Hennekens and biostatistician David DeMets have pointed out that combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding,” Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. “Such results should be considered more as hypothesis formulating than as hypothesis testing.”

These concerns do not make clinical trials worthless, nor do they render science impotent. Some studies show dramatic effects that don’t require sophisticated statistics to interpret. If the P value is 0.0001 — a hundredth of a percent chance of a fluke — that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated.

“Replication is vital,” says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. “But in the social sciences and behavioral sciences, replication is not common,” she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. “This is a sad situation.”

Bayes watch

Such sad statistical situations suggest that the marriage of science and math may be desperately in need of counseling. Perhaps it could be provided by the Rev. Thomas Bayes.

Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a “prior probability” — in essence, an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.

Bayesian math seems baffling at first, even to many scientists, but it basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats — in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis. “A scientific hypothesis cannot be properly assessed solely by reference to the observational data,” but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA’s School of Medicine in 2004 in the Journal of the American College of Cardiology. “Bayes’ theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis.” (See Box 4)

With the increasing availability of computer power to perform its complex calculations, the Bayesian approach has become more widely applied in medicine and other fields in recent years. In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account.

But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of “probability” in the real world. Standard or “frequentist” statistics treat probabilities as objective realities; Bayesians treat probabilities as “degrees of belief” based in part on a personal assessment or subjective decision about what to include in the calculation. That’s a tough placebo to swallow for scientists wedded to the “objective” ideal of standard statistics. “Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity,” Diamond and Kaul wrote.

Conflict between frequentists and Bayesians has been ongoing for two centuries. So science’s marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability.

“What does probability mean in real life?” the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. “This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies.”

_______________________________________________________________________

BOX 1: Statistics Can Confuse

Statistical significance is not always statistically significant.

It is common practice to test the effectiveness (or dangers) of a drug by comparing it to a placebo or sham treatment that should have no effect at all. Using statistical methods to compare the results, researchers try to judge whether the real treatment’s effect was greater than the fake treatment’s by an amount unlikely to occur by chance.

By convention, a result expected to occur less than 5 percent of the time is considered “statistically significant.” So if Drug X outperformed a placebo by an amount that would be expected by chance only 4 percent of the time, most researchers would conclude that Drug X really works (or at least, that there is evidence favoring the conclusion that it works).

Now suppose Drug Y also outperformed the placebo, but by an amount that would be expected by chance 6 percent of the time. In that case, conventional analysis would say that such an effect lacked statistical significance and that there was insufficient evidence to conclude that Drug Y worked.

If both drugs were tested on the same disease, though, a conundrum arises. For even though Drug X appeared to work at a statistically significant level and Drug Y did not, the difference between the performance of Drug X and Drug Y might very well NOT be statistically significant. Had they been tested against each other, rather than separately against placebos, there may have been no statistical evidence to suggest that one was better than the other (even if their cure rates had been precisely the same as in the separate tests).

“Comparisons of the sort, ‘X is statistically significant but Y is not,’ can be misleading,” statisticians Andrew Gelman of Columbia University and Hal Stern of the University of California, Irvine, noted in an article discussing this issue in 2006 in the American Statistician. “Students and practitioners [should] be made more aware that the difference between ‘significant’ and ‘not significant’ is not itself statistically significant.”
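
A minimal sketch with made-up cure counts shows how easily this arises (pooled two-proportion z-test, Python standard library only):

[code]
# Minimal sketch with made-up cure counts: Drug X clears the .05 bar against placebo,
# Drug Y just misses it, yet X versus Y head-to-head is nowhere near significant.
import math

def two_prop_p(cured_a, n_a, cured_b, n_b):
    """Two-sided P value from a pooled two-proportion z-test."""
    pooled = (cured_a + cured_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (cured_a / n_a - cured_b / n_b) / se
    return math.erfc(abs(z) / math.sqrt(2))

print("Drug X vs placebo:", round(two_prop_p(30, 100, 15, 100), 3))   # about 0.01
print("Drug Y vs placebo:", round(two_prop_p(25, 100, 15, 100), 3))   # about 0.08
print("Drug X vs Drug Y :", round(two_prop_p(30, 100, 25, 100), 3))   # about 0.43
[/code]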

A similar real-life example arises in studies suggesting that children and adolescents taking antidepressants face an increased risk of suicidal thoughts or behavior. Most such studies show no statistically significant increase in such risk, but some show a small (possibly due to chance) excess of suicidal behavior in groups receiving the drug rather than a placebo. One set of such studies, for instance, found that with the antidepressant Paxil, trials recorded more than twice the rate of suicidal incidents for participants given the drug compared with those given the placebo. For another antidepressant, Prozac, trials found fewer suicidal incidents with the drug than with the placebo. So it appeared that Paxil might be more dangerous than Prozac.

But actually, the rate of suicidal incidents was higher with Prozac than with Paxil. The apparent safety advantage of Prozac was due not to the behavior of kids on the drug, but to kids on placebo — in the Paxil trials, fewer kids on placebo reported incidents than those on placebo in the Prozac trials. So the original evidence for showing a possible danger signal from Paxil but not from Prozac was based on data from people in two placebo groups, none of whom received either drug. Consequently it can be misleading to use statistical significance results alone when comparing the benefits (or dangers) of two drugs.

_______________________________________________________________________

BOX 2: The Hunger Hypothesis

A common misinterpretation of the statistician’s P value is that it measures how likely it is that a null (or “no effect”) hypothesis is correct. Actually, the P value gives the probability of observing a result if the null hypothesis is true, and there is no real effect of a treatment or difference between groups being tested. A P value of .05, for instance, means that there is only a 5 percent chance of getting the observed results if the null hypothesis is correct.

It is incorrect, however, to transpose that finding into a 95 percent probability that the null hypothesis is false. “The P value is calculated under the assumption that the null hypothesis is true,” writes biostatistician Steven Goodman. “It therefore cannot simultaneously be a probability that the null hypothesis is false.”

Consider this simplified example. Suppose a certain dog is known to bark constantly when hungry. But when well-fed, the dog barks less than 5 percent of the time. So if you assume for the null hypothesis that the dog is not hungry, the probability of observing the dog barking (given that hypothesis) is less than 5 percent. If you then actually do observe the dog barking, what is the likelihood that the null hypothesis is incorrect and the dog is in fact hungry?

Answer: That probability cannot be computed with the information given. The dog barks 100 percent of the time when hungry, and less than 5 percent of the time when not hungry. To compute the likelihood of hunger, you need to know how often the dog is fed, information not provided by the mere observation of barking.

_______________________________________________________________________

BOX 3: Randomness and Clinical Trials

Assigning patients at random to treatment and control groups is an essential feature of controlled clinical trials, but statistically that approach cannot guarantee that individual differences among patients will always be distributed equally. Experts in clinical trial analyses are aware that such incomplete randomization will leave some important differences unbalanced between experimental groups, at least some of the time.

“This is an important concern,” says biostatistician Don Berry of M.D. Anderson Cancer Center in Houston.

In an e-mail message, Berry points out that two patients who appear to be alike may respond differently to identical treatments. So statisticians attempt to incorporate patient variability into their mathematical models.

“There may be a googol of patient characteristics and it’s guaranteed that not all of them will be balanced by randomization,” Berry notes. “But some characteristics will be biased in favor of treatment A and others in favor of treatment B. They tend to even out. What is not evened out is regarded by statisticians to be ‘random error,’ and this we model explicitly.”

Understanding the individual differences affecting response to treatment is a major goal of scientists pursuing “personalized medicine,” in which therapies are tailored to each person’s particular biology. But the limits of statistical methods in drawing conclusions about subgroups of patients pose a challenge to achieving that goal.

“False-positive observations abound,” Berry acknowledges. “There are patients whose tumors melt away when given some of our newer treatments.… But just which one of the googol of characteristics of this particular tumor enabled such a thing? It’s like looking for a needle in a haystack ... or rather, looking for one special needle in a stack of other needles.”

_______________________________________________________________________

BOX 4: Bayesian Reasoning

Bayesian methods of statistical analysis stem from a paper published posthumously in 1763 by the English clergyman Thomas Bayes. In a Bayesian analysis, probability calculations require a prior value for the likelihood of an association, which is then modified after data are collected. When the prior probability isn’t known, it must be estimated, leading to criticisms that subjective guesses must often be incorporated into what ought to be an objective scientific analysis. But without such an estimate, statistics can produce grossly inaccurate conclusions.

For a simplified example, consider the use of drug tests to detect cheaters in sports. Suppose the test for steroid use among baseball players is 95 percent accurate — that is, it correctly identifies actual steroid users 95 percent of the time, and misidentifies non-users as users 5 percent of the time.

Suppose an anonymous player tests positive. What is the probability that he really is using steroids? Since the test really is accurate 95 percent of the time, the naïve answer would be that probability of guilt is 95 percent. But a Bayesian knows that such a conclusion cannot be drawn from the test alone. You would need to know some additional facts not included in this evidence. In this case, you need to know how many baseball players use steroids to begin with — that would be what a Bayesian would call the prior probability.

Now suppose, based on previous testing, that experts have established that about 5 percent of professional baseball players use steroids. Now suppose you test 400 players. How many would test positive?

• Out of the 400 players, 20 are users (5 percent) and 380 are not users.

• Of the 20 users, 19 (95 percent) would be identified correctly as users.

• Of the 380 nonusers, 19 (5 percent) would incorrectly be indicated as users.

So if you tested 400 players, 38 would test positive. Of those, 19 would be guilty users and 19 would be innocent nonusers. So if any single player’s test is positive, the chances that he really is a user are 50 percent, since an equal number of users and nonusers test positive.
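
The same arithmetic written out as Bayes' theorem, using only the numbers given above:

[code]
# The Box 4 arithmetic written as Bayes' theorem (numbers taken from the text above).
prior       = 0.05   # fraction of players who actually use steroids
sensitivity = 0.95   # P(test positive | user)
false_pos   = 0.05   # P(test positive | non-user)

p_positive = sensitivity * prior + false_pos * (1 - prior)
p_user_given_positive = sensitivity * prior / p_positive
print(round(p_user_given_positive, 2))   # 0.5, not the "naive" 0.95
[/code]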

http://www.sciencenews.org/view/feature/id/57091/title/Odds_Are,_Its_Wrong

Body-by-Guinness

  • Guest
Carbon Cash Sink
« Reply #312 on: March 20, 2010, 06:00:50 PM »
Long, well illustrated piece looking at the World Wildlife Fund's interest in carbon credit schemes. Those who endlessly snivel about oil company money funding skeptics ought to enjoy the feel of the shoe being on the other foot.

http://eureferendum.blogspot.com/2010/03/amazongate-part-ii-seeing-redd.html

Body-by-Guinness

  • Guest
Station Survey Down Under
« Reply #313 on: March 21, 2010, 04:30:08 PM »
Some folks in Australia have started documenting siting issues with their weather stations. The results are amusing in a head shaking way. More here:

http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/climate/

Including this one amid a bunch of construction trash:

Body-by-Guinness

  • Guest
"Given Data Where None Exists"
« Reply #314 on: March 25, 2010, 03:16:57 PM »
Two posts from Watts Up With That where Arctic and Antarctic panic mongering is explored. A wee bit of a problem is discovered: a whole lot of "warming" is extrapolated from very few weather stations and hence very little data. Sundry statistical folly ensues:

http://wattsupwiththat.com/2010/03/23/why-joe-bastardi-see-red-a-look-at-sea-ice-and-gistemp-and-starting-choices/

http://wattsupwiththat.com/2010/03/25/gisscapades/#more-17728

Body-by-Guinness

  • Guest
CO2 Trajectories: Indeterminate
« Reply #315 on: March 26, 2010, 05:39:52 AM »
A look at how several different models, all starting from the same initial data, arrive at significantly different places when extrapolated forward:

http://wattsupwiththat.com/2010/03/23/loehle-on-hoffman-et-al-and-co2-trajectories/#more-17620

Body-by-Guinness

  • Guest
More Meaningless Data Upon which AGW is Built
« Reply #316 on: April 01, 2010, 06:17:22 AM »
Blake Snow   - FOXNews.com  - March 30, 2010
NASA Data Worse Than Climate-Gate Data, Space Agency Admits

NASA can put a man on the moon, but the space agency can't tell you what the temperature was back then.

NASA was able to put a man on the moon, but the space agency can't tell you what the temperature was when it did. By its own admission, NASA's temperature records are in even worse shape than the besmirched Climate-gate data.

E-mail messages obtained by a Freedom of Information Act request reveal that NASA concluded that its own climate findings were inferior to those maintained by both the University of East Anglia's Climatic Research Unit (CRU) -- the scandalized source of the leaked Climate-gate e-mails -- and the National Oceanic and Atmospheric Administration's National Climatic Data Center.

The e-mails from 2007 reveal that when a USA Today reporter asked if NASA's data "was more accurate" than other climate-change data sets, NASA's Dr. Reto A. Ruedy replied with an unequivocal no. He said "the National Climatic Data Center's procedure of only using the best stations is more accurate," admitting that some of his own procedures led to less accurate readings.

"My recommendation to you is to continue using NCDC's data for the U.S. means and [East Anglia] data for the global means," Ruedy told the reporter.

"NASA's temperature data is worse than the Climate-gate temperature data. According to NASA," wrote Christopher Horner, a senior fellow at the Competitive Enterprise Institute who uncovered the e-mails. Horner is skeptical of NCDC's data as well, stating plainly: "Three out of the four temperature data sets stink."

Global warming critics call this a crucial blow to advocates' arguments that minor flaws in the "Climate-gate" data are unimportant, since all the major data sets arrive at the same conclusion -- that the Earth is getting warmer. But there's a good reason for that, the skeptics say: They all use the same data.

"There is far too much overlap among the surface temperature data sets to assert with a straight face that they independently verify each other's results," says James M. Taylor, senior fellow of environment policy at The Heartland Institute.

"The different groups have cooperated in a very friendly way to try to understand different conclusions when they arise," said Dr. James Hansen, head of NASA's Goddard Institute for Space Studies, in the same 2007 e-mail thread. Earlier this month, in an updated analysis of the surface temperature data, GISS restated that the separate analyses by the different agencies "are not independent, as they must use much of the same input observations."

Neither NASA nor NOAA responded to requests for comment. But Dr. Jeff Masters, director of meteorology at Weather Underground, still believes the validity of data from NASA, NOAA and East Anglia would be in jeopardy only if the comparative analysis didn't match. "I see no reason to question the integrity of the raw data," he says. "Since the three organizations are all using mostly the same raw data, collected by the official weather agency of each individual country, the only issue here is whether the corrections done to the raw data were done correctly by CRU."

Corrections are needed, Masters says, "since there are only a few thousand surface temperature recording sites with records going back 100+ years." As such, climate agencies estimate temperatures in various ways for areas where there aren't any thermometers, to account for the overall incomplete global picture.

"It would be nice if we had more global stations to enable the groups to do independent estimates using completely different raw data, but we don't have that luxury," Masters adds. "All three groups came up with very similar global temperature trends using mostly the same raw data but independent corrections. This should give us confidence that the three groups are probably doing reasonable corrections, given that the three final data sets match pretty well."

But NASA is somewhat less confident, having quietly decided to tweak its corrections to the climate data earlier this month.

In an updated analysis of the surface temperature data released on March 19, NASA adjusted the raw temperature station data to account for inaccurate readings caused by heat-absorbing paved surfaces and buildings in a slightly different way. NASA determines which stations are urban with nighttime satellite photos, looking for stations near light sources as seen from space.

Of course, this doesn't solve problems with NASA's data, as the newest paper admits: "Much higher resolution would be needed to check for local problems with the placement of thermometers relative to possible building obstructions," a problem repeatedly underscored by meteorologist Anthony Watts on his SurfaceStations.org Web site. Last month, Watts told FoxNews.com that "90 percent of them don't meet [the government's] old, simple rule called the '100-foot rule' for keeping thermometers 100 feet or more from biasing influence. Ninety percent of them failed that, and we've got documentation."

Still, "confidence" is not the same as scientific law, something the public obviously recognizes. According to a December survey, only 25 percent of Americans believed there was agreement within the scientific community on climate change. And unless things fundamentally change, it could remain that way, said Taylor.

"Until surface temperature data sets are truly independent of one another and are entrusted to scientists whose objectivity is beyond question, the satellite temperature record alone will not have any credibility," he said.

http://www.foxnews.com/scitech/2010/03/30/nasa-data-worse-than-climategate-data/

Body-by-Guinness

  • Guest
AGW: The Perfect Storm
« Reply #317 on: April 02, 2010, 10:29:12 AM »
Long Der Spiegel piece that lays into the warmist side pretty hard. As I recall, Spiegel used to be quite alarmist; if so, this is further indication of a sea change:

http://www.spiegel.de/international/world/0,1518,686697,00.html

Rarick

  • Guest
Re: Pathological Science
« Reply #318 on: April 03, 2010, 03:14:35 AM »
At least they are responsible enough to realize that they are reporting what they are given by the scientists, and are not ego-involved with the cause.  I suspect a lot of the MSM types here in the USA are.  Yeah, "a whole branch of science is under fire" is laying into them pretty good.  They do a decent job of summing up the situation so far.

ccp

  • Power User
  • ***
  • Posts: 19776
    • View Profile
Re: Pathological Science
« Reply #319 on: April 07, 2010, 12:55:01 PM »
I don't know.  It is 93 degrees in NJ in early April.  This has never happened that I know of.

Body-by-Guinness

  • Guest
Re: Pathological Science
« Reply #320 on: April 07, 2010, 06:42:27 PM »
Don't get me started about how much snow I plowed this winter in Virginia. . . .

Rarick

  • Guest
Re: Pathological Science
« Reply #321 on: April 09, 2010, 06:13:45 AM »
I don't know.  It is 93 degrees in NJ in early April.  This has never happened that I know of.

 :-P  right back, NO TREND.

DougMacG

  • Power User
  • ***
  • Posts: 19462
    • View Profile
Re: Pathological Science
« Reply #322 on: April 13, 2010, 08:46:49 AM »
I missed the discussion of 93 degrees in NJ in April.  As that was posted I was driving through a brutal winter storm with glare ice coming out of the I-70 Eisenhower tunnel in Colo.  Skied 19 inches of fresh wintery powder at Vail the next day, not the slushy stuff you normally associate with spring skiing.  This week they are closed for the season because of Forest Service rules, not because the snow is gone.  Snow depth is still around 70 to 80 inches at many of the areas.

At home (MN) it was a brutally long and cold winter (even for us and that is long and cold), followed by a warm spring so far.

My take:  If you are truly noticing an increase of about .0023 degrees warming per decade, then maybe it is human caused global warming.  But if it is 40-50 degrees warmer than usual on a particular day, then it is weather.

The head of the CRU, Phil Jones, admitted after Climategate that there had been no statistically significant warming since 1995.

DougMacG

  • Power User
  • ***
  • Posts: 19462
    • View Profile
Warming or Cooling?
« Reply #323 on: April 21, 2010, 08:24:43 AM »
Linear regression trends in temperatures (deg C per century):

US, 1880-2009:  +0.64 deg/century

US, 1997-2009:  -2.50 deg/century

Globe, 1880-2009:  +0.57 deg/century

Globe, 2002-2009:  -0.40 deg/century

Data Source:  NASA/GISS.

http://www.americanthinker.com/blog/2010/04/graph_of_the_day_for_april_20.html
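
For what it's worth, here is a minimal sketch of how a "deg C per century" figure like those above is produced; the anomaly values below are made up and merely stand in for the actual NASA/GISS series:

[code]
# Minimal sketch of a "deg C per century" trend calculation (made-up annual anomalies;
# the figures quoted above come from the actual NASA/GISS series, not these numbers).
import numpy as np

years     = np.arange(1997, 2010)
anomalies = np.array([0.48, 0.63, 0.42, 0.42, 0.54, 0.63,
                      0.62, 0.54, 0.68, 0.64, 0.66, 0.54, 0.64])   # hypothetical, deg C

slope_per_year, _ = np.polyfit(years, anomalies, 1)   # ordinary least-squares line
print(f"trend: {slope_per_year * 100:+.2f} deg C per century")
# Over a window this short the slope is dominated by year-to-year noise and the choice
# of start year, which is why 13-year and 130-year trends can even disagree in sign.
[/code]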

Body-by-Guinness

  • Guest
Model Gets Ash Kicked
« Reply #324 on: April 26, 2010, 07:49:30 AM »
When the air-current models fail to correctly predict short-term phenomena involving relatively few variables, perhaps we should question their long-term veracity.

The Great Phony Volcano Ash Scare and Global Warming

Thomas Lifson
The Global Warming alarmists who want to wreck the world economy over an illusory danger have already wrecked the finances of airlines and stranded passengers over an illusory danger, based on computer models from the same UK bureaucrats.

The UK Met Office is responsible for predicting that the ash cloud from the Icelandic volcano would destroy aircraft engines. They were wrong, because their computer models were wrong. Sean Poulter of the UK Daily Mail reports:

Britain's airspace was closed under false pretences, with satellite images revealing there was no doomsday volcanic ash cloud over the entire country.

Skies fell quiet for six days, leaving as many as 500,000 Britons stranded overseas and costing airlines hundreds of millions of pounds.

Estimates put the number of Britons still stuck abroad at 35,000.

However, new evidence shows there was no all-encompassing cloud and, where dust was present, it was often so thin that it posed no risk.


The Met Office and its faulty computer models should not be allowed to destroy the economies of the advanced countries. This incident should increase public skepticism over doomsday predictions of the alarmists, which are all based on computer models riddled with dubious assumptions.

The havoc imposed on the airlines and flying public is but a pale indicator of the disruption and ruin that would be caused by Cap and Trade, carbon credit trading, and other schemes dreamed up to prevent a disaster even more dubious than the supposed ash cloud, now discredited.

Hat tip: Lucianne.com

Page Printed from: http://www.americanthinker.com/blog/2010/04/the_great_phony_volcano_ash_sc.html at April 26, 2010 - 09:46:16 AM CDT

Rarick

  • Guest
Re: Pathological Science
« Reply #325 on: April 27, 2010, 02:30:22 AM »
Strike 2: the one that impacts and INCONVENIENCES the public in a major way.  Maybe they will get some scrutiny that they have tried to avoid, with MSM cooperation, up to now?

The threat is from concentrated gasses and ash, usually from flying through the plume within a couple hundred miles of the volcano, not trace concentrations that are several hundred miles removed from the volcano.  Heck, the Discovery Channel can tell you that, and the FAA regs developed after Mt. St. Helens are also a good precedent; no planes were lost then either.  I have been laughing for a week at the modern idiocy of fault intolerance, rather than the calculated risk that we evolved with.

DougMacG

  • Power User
  • ***
  • Posts: 19462
    • View Profile
Setting fire to the gulf?
« Reply #326 on: April 30, 2010, 10:21:37 AM »
No fear of intensifying hurricanes by warming the gulf?

http://www.latimes.com/news/nationworld/nation/la-na-oil-spill-20100428,0,1038312.story

New Orleans —
Crews may set fires to burn off oil being spewed by a blown-out well that is dumping 42,000 gallons of crude oil a day into the Gulf of Mexico off the Louisiana coast, the Coast Guard said Tuesday.

Body-by-Guinness

  • Guest
Mannipulator Mannhandled?
« Reply #327 on: April 30, 2010, 11:49:58 PM »
Interesting. I shoot at the same club as Cuccinelli and have a generally favorable opinion of him, though he's got some religious right habits I could do without. Don't think making a habit of these kinds of investigations would be good for science, but Mann has so amply demonstrated a disregard for the basic principles of scientific investigation that I think he makes a fine target indeed. As a VA taxpayer, it thrills me to think commonwealth colleges might find themselves contending with accountability.

Oh, Mann: Cuccinelli targets UVA papers in Climategate salvo
by Courteney Stuart
published 4:32pm Thursday Apr 29, 2010
 
Show him the papers— or else.
CUCCINELLI CAMPAIGN
No one can accuse Virginia Attorney General Ken Cuccinelli of shying from controversy. In his first four months in office, Cuccinelli  directed public universities to remove sexual orientation from their anti-discrimination policies, attacked the Environmental Protection Agency, and filed a lawsuit challenging federal health care reform. Now, it appears, he may be preparing a legal assault on an embattled proponent of global warming theory who used to teach at the University of Virginia, Michael Mann.
In papers sent to UVA April 23, Cuccinelli’s office commands the university to produce a sweeping swath of documents relating to Mann’s receipt of nearly half a million dollars in state grant-funded climate research conducted while Mann— now director of the Earth System Science Center at Penn State— was at UVA between 1999 and 2005.
If Cuccinelli succeeds in finding a smoking gun like the purloined emails that led to the international scandal dubbed Climategate, Cuccinelli could seek the return of all the research money, legal fees, and trebled damages.
“Since it’s public money, there’s enough controversy to look into the possible manipulation of data,” says Dr. Charles Battig, president of the nonprofit Piedmont Chapter Virginia Scientists and Engineers for Energy and Environment, a group that doubts the underpinnings of climate change theory.
Mann is one of the lead authors of the controversial “hockey stick graph,” which contends that global temperatures have experienced a sudden and unprecedented upward spike (like the shape of a hockey stick).
UVA spokesperson Carol Wood says the school will fulfill its legal obligation, noting that the scope of the documents requested means it could take some time. Mann had not returned a reporter’s calls at posting time, but Mann— whose research remains under investigation at Penn State— recently defended his work in a front-page story in USA Today, saying that while there could be “minor” errors in his work, there’s nothing that would amount to fraud or change his ultimate conclusions that the earth is warming as a result of human activities, particularly the burning of fossil fuels.
“Mike is an outstanding and extremely reputable climate scientist,” says UVA climate faculty member Howie Epstein. “And I don’t really know what they’re looking for or expecting to find.”
Among the documents Cuccinelli demands are any and all emailed or written correspondence between or relating to Mann and more than 40 climate scientists, documents supporting any of five applications for the $484,875 in grants, and evidence of any documents that no longer exist along with proof of why, when, and how they were destroyed or disappeared.
Last fall, the release of some emails by researchers caused turmoil in the climate science world and bolstered critics of the human-blaming scientific models. (Among Climategate’s embarrassing revelations was that one researcher professed an interest in punching Charlottesville-based doubting climate scientist Patrick Michaels in the nose.)
One former UVA climate scientist now working with Michaels worries about politicizing— or, in his words, creating a “witch hunt”— what he believes should be an academic debate.
“I didn’t like it when the politicians came after Pat Michaels,” says Chip Knappenberger. “I don’t like it that the politicians are coming after Mike Mann.”
Making his comments via an online posting under an earlier version of this story, Knappenberger worries that scientists at Virginia’s public universities could become “political appointees, with whoever is in charge deciding which science is acceptable, and prosecuting the rest. Say good-bye to science in Virginia.”
The Attorney General has the right to make such demands for documents under the Fraud Against Taxpayers Act, a 2002 law designed to keep government workers honest.

http://www.readthehook.com/blog/index.php/2010/04/29/oh-mann-cuccinelli-targets-uva-papers-in-climategate-salvo/

Body-by-Guinness

  • Guest
Do the Math, Malthus
« Reply #328 on: May 06, 2010, 12:45:07 PM »
Oh dear, we're all gonna die by 1890.

[youtube]http://www.youtube.com/watch?v=vZVOU5bfHrM[/youtube]

Rarick

  • Guest
Re: Pathological Science
« Reply #329 on: May 07, 2010, 03:39:36 AM »
Peak oil?  If everyone on the planet had a "frugal industrial" lifestyle (what is required to start the birthrate leveling off), how long would the non-renewables last?  I do not think we will necessarily overpopulate in the plague-of-locusts sense, but maybe in a qualitative sense.  I suspect a certain amount of population is a root of a lot of crime, inefficiencies in distribution, and other issues.  I suspect the push-pull between haves and have-nots gets magnified where the population is massive vs. medium/small?

We can grow the food, but can the energy ratio and quality of life be maintained?

Body-by-Guinness

  • Guest
Raining Soup
« Reply #330 on: May 07, 2010, 10:15:30 AM »
Can't remember if it was Robert Heinlein, Jerry Pournelle, or Larry Niven who said of space "It's raining soup out there, but we haven't learned yet how to hold out our bowls." Think we are nowhere near running out of planetary energy sources, and if we get our act together essentially limitless sources are there for the having.

Indeed, I remember the doom-mongering in the '60s about how we'll soon run out of coal. Think it's currently still the largest reserve of US energy. Peak oil breast-beating comes and goes while the loudest whiners try to steer us away from oil sands, offshore drilling, coal liquefaction, and so on. The doom-struck spout doom and try to be its handmaiden about whatever's handy, and have done so for eons. I guess stopped clocks are right twice a day, but it'd be nice if their record was reviewed for accuracy before we start doing any peak oil hair-rending.


Rarick

  • Guest
Re: Pathological Science
« Reply #331 on: May 08, 2010, 07:49:03 AM »
Exactly, but the search for the soup got cancelled by Obama........  I remember some stuff (DOE summer study?) when I was just entering high school that would have had several million people in space by now, but some entitlement program or other has been gobbling up the seed money. We may not get overpopulated, but I guarantee you that we are feeling crowded- eh?  Things seem to be happening one way or another (like that sci-fi "The End of Eternity") that keep us from enlarging ourselves.  There are no Panama Canals or tangible great projects anymore, just programs that shuffle bureaucracy around.........

Body-by-Guinness

  • Guest
What's That? The Sun Heats Things Up?
« Reply #332 on: May 22, 2010, 08:07:03 PM »
It’s the Sun, stupid
By Lawrence Solomon  May 21, 2010 – 7:17 pm

Solar scientists are finally overcoming their fears and going public about the Sun-climate connection

Four years ago, when I first started profiling scientists who were global warming skeptics, I soon learned two things: Solar scientists were overwhelmingly skeptical that humans caused climate change and, overwhelmingly, they were reluctant to go public with their views. Often, they refused to be quoted at all, saying they feared for their funding, or they feared other recriminations from climate scientists in the doomsayer camp. When the skeptics agreed to be quoted at all, they often hedged their statements, to give themselves wiggle room if accused of being a global warming denier. Scant few were outspoken about their skepticism.

No longer.

Scientists, and especially solar scientists, are becoming assertive. Maybe their newfound confidence stems from the Climategate emails, which cast doomsayer-scientists as frauds and diminished their standing within academia. Maybe their confidence stems from the avalanche of errors recently found in the reports of the United Nations Intergovernmental Panel on Climate Change, destroying its reputation as a gold standard in climate science. Maybe the solar scientists are becoming assertive because the public no longer buys the doomsayer thesis, as seen in public opinion polls throughout the developed world. Whatever it was, solar scientists are increasingly conveying a clear message on the chief cause of climate change: It’s the Sun, Stupid.

Jeff Kuhn, a rising star at the University of Hawaii’s Institute for Astronomy, is one of the most recent scientists to go public, revealing in press releases this month that solar scientists worldwide are on a mission to show that the Sun drives Earth’s climate. “As a scientist who knows the data, I simply can’t accept [the claim that man plays a dominant role in Earth’s climate],” he states.

Kuhn’s team, which includes solar scientists from Stanford University and Brazil as well as from his own institute, last week announced a startling breakthrough — evidence that the Sun does not change much in size, as had previously been believed. This week, in announcing the award of a €60,000 Humboldt Prize for Kuhn’s solar excellence, his institute issued a release stating that its research into sunspots “may ultimately help us predict how and when a changing sun affects Earth’s climate.”

Earlier this month, the link between solar activity and climate made headlines throughout Europe after space scientists from the U.K., Germany and South Korea linked the recent paucity of sunspots to the cold weather that Europe has been experiencing. This period of spotlessness, the scientists predicted in a study published in Environmental Research Letters, could augur a repeat of winters comparable to those of the Little Ice Age in the 1600s, during which the Sun was often free of sunspots. By comparing temperatures in Europe since 1659 to highs and lows in solar activity in the same years, the scientists discovered that low solar activity generally corresponded to cold winters. Could this centuries-long link between the Sun and Earth’s climate have been a matter of chance? “There is less than a 1% probability that the result was obtained by chance,” asserts Mike Lockwood of the University of Reading in the U.K., the study’s lead author.

Solar scientists widely consider the link between the Sun and Earth’s climate incontrovertible. When bodies such as the IPCC dismiss solar science’s contribution to understanding Earth’s climate out of hand, solar scientists no longer sit on their hands. Danish scientist Henrik Svensmark of the Danish National Space Institute stated that the IPCC was “probably totally wrong” to dismiss the significance of the sun, which in 2009 would likely have the most spotless days in a century. As for claims from the IPCC and other global warming doomsayers who argue that periods of extreme heat or cold were regional in scope, not global, Svensmark cites the Medieval Warm Period, a prosperous period of very high solar activity around the year 1000: “It was a time when frosts in May were almost unknown — a matter of great importance for a good harvest. Vikings settled in Greenland and explored the coast of North America. On the whole it was a good time. For example, China’s population doubled in this period.”

The Medieval Warm Period, many solar scientists believe, was warmer than today, and the Roman Warm Period, around the time of Christ, was warmer still. Compelling new evidence to support this view came just in March from the Saskatchewan Isotope Laboratory at the University of Saskatchewan and the Institute of Arctic and Alpine Research at the University of Colorado. In a study published in the Proceedings of the National Academy of Sciences of the United States of America, the authors for the first time document seasonal temperature variations in the North Atlantic over a 2,000-year period, from 360 BC to about 1660 AD. Their technique — involving measurements of oxygen and carbon isotopes captured in mollusk shells — confirmed that the Roman Period was the warmest in the past two millennia.

Among solar scientists, there are a great many theories about how the Sun influences climate. Some will especially point to sunspots, others to the Sun’s magnetic field, others still to the Sun’s influence on cosmic rays which, in turn, affect cloud cover. There is as yet no answer to how the Sun affects Earth’s climate. All that now seems sure is that the Sun does play an outsized role and that the Big Chill on freedom of expression that scientists once faced when discussing global warming is becoming a Big Thaw.

http://fullcomment.nationalpost.com/2010/05/21/its-the-sun-stupid/

Body-by-Guinness

  • Guest
Avoiding a Sense of Scale
« Reply #333 on: May 24, 2010, 07:49:57 AM »
On Being the Wrong Size
Posted on May 23, 2010 by Willis Eschenbach
Guest Post by Willis Eschenbach

This topic is a particular peeve of mine, so I hope I will be forgiven if I wax wroth.

There is a most marvelous piece of technology called the GRACE satellites, which stands for the Gravity Recovery and Climate Experiment. It is composed of two satellites flying in formation. Measuring the distance between the two satellites to the nearest micron (a hundredth of the width of a hair) allows us to calculate the weight of things on the earth very accurately.

One of the things that the GRACE satellites have allowed us to calculate is the ice loss from the Greenland Ice Cap. There is a new article about the Greenland results called Weighing Greenland.

So, what’s not to like about the article?


Well, the article opens by saying:

Scott Luthcke weighs Greenland — every 10 days. And the island has been losing weight, an average of 183 gigatons (or 200 cubic kilometers) — in ice — annually during the past six years. That’s one third the volume of water in Lake Erie every year. Greenland’s shrinking ice sheet offers some of the most powerful evidence of global warming.

Now, that sounds pretty scary, it’s losing a third of the volume of Lake Erie every year. Can’t have that.

But what does that volume, a third of Lake Erie, really mean? We could also say that it’s 80 million Olympic swimming pools, or 400 times the volume of Sydney Harbor, or about the same volume as the known world oil reserves. Or we could say the ice loss is 550 times the weight of all humans on the Earth, or the weight of 31,000 Great Pyramids … but we’re getting no closer to understanding what that ice loss means.

To understand what it means, there is only one thing to which we should compare the ice loss, and that is the ice volume of the Greenland Ice Cap itself. So how many cubic kilometres of ice are sitting up there on Greenland?

My favorite reference for these kinds of questions is the Physics Factbook, because rather than give just one number, they give a variety of answers from different authors. In this case I went to the page on Polar Ice Caps. It gives the following answers:

Spaulding & Markowitz, Heath Earth Science. Heath, 1994: 195. says less than 5.1 million cubic kilometres (often written as “km^3”).

“Greenland.” World Book Encyclopedia. Chicago: World Book, 1999: 325 says 2.8 million km^3.

Satellite Image Atlas of Glaciers of the World. US Geological Survey (USGS) says 2.6 million km^3.

Schultz, Gwen. Ice Age Lost. 1974. 232, 75. also says 2.6 million km^3.

Denmark/Greenland. Greenland Tourism. Danish Tourist Board says less than 5.5 million km^3.

Which of these should we choose? Well, the two larger figures both say “less than”, so they are upper limits. The Physics Factbook says “From my research, I have found different values for the volume of the polar ice caps. … For Greenland, it is approximately 3,000,000 km^3.” Of course, we would have to say that there is an error in that figure, likely on the order of ± 0.4 million km^3 or so.

So now we have something to which we can compare our one-third of Lake Erie or 400 Sydney Harbors or 550 times the weight of the global population. And when we do so, we find that the annual loss is around 200 km^3 lost annually out of some 3,000,000 km^3 total. This means that Greenland is losing about 0.007% of its total mass every year … seven thousandths of one percent lost annually, be still, my beating heart …

And if that terrifying rate of loss continues unabated, of course, it will all be gone in a mere 15,000 years.

That’s my pet peeve, that numbers are being presented in the most frightening way possible. The loss of 200 km^3 of ice per year is not “some of the most powerful evidence of global warming”, that’s hyperbole. It is a trivial change in a huge block of ice.

And what about the errors in the measurements? We know that the error in the Greenland Ice Cap is on the order of 0.4 million km^3. How about the error in the GRACE measurements? This reference indicates that there is about a ± 10% error in the GRACE Greenland estimates. How does that affect our numbers?

Well, if we take the small estimate of ice cap volume, and the large estimate of loss, we get 220 km^3 lost annually / 2,600,000 km^3 total. This is an annual loss of 0.008%, and a time to total loss of 12,000 years.

Going the other way, we get 180 km^3 lost annually / 3,400,000 km^3 total. This is an annual loss of 0.005%, and a time to total loss of 19,000 years.

It is always important to include the errors in the calculation, to see if they make a significant difference in the result. In this case they happen to not make much difference, but each case is different.

That’s what angrifies my blood mightily, meaningless numbers with no errors presented for maximum shock value. Looking at the real measure, we find that Greenland is losing around 0.005% — 0.008% of its ice annually, and if that rate continues, since this is May 23rd, 2010, the Greenland Ice Cap will disappear entirely somewhere between the year 14010 and the year 21010 … on May 23rd …
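For anyone who wants to check that arithmetic, here is a minimal Python sketch of the calculation. Every figure in it is quoted in the post above (the ±10% GRACE error and the 2.6–3.4 million km^3 volume range); nothing here is new data.

```python
# Back-of-the-envelope check of the Greenland figures quoted in this post.
# Every number below comes from the text above; this is arithmetic, not a model.

def summarize(loss_km3_per_year, volume_km3):
    """Annual loss as a percentage of the ice cap, and years until total loss."""
    pct_per_year = 100.0 * loss_km3_per_year / volume_km3
    years_to_gone = volume_km3 / loss_km3_per_year
    return pct_per_year, years_to_gone

# Central case: 200 km^3/year out of ~3,000,000 km^3.
print(summarize(200, 3_000_000))   # ~0.007% per year, ~15,000 years

# High loss estimate (+10% GRACE error) against the small volume estimate.
print(summarize(220, 2_600_000))   # ~0.008% per year, ~12,000 years

# Low loss estimate (-10%) against the large volume estimate.
print(summarize(180, 3_400_000))   # ~0.005% per year, ~19,000 years

# The "additional 50 to 100 km^3" scenario quoted near the end of the post.
print(summarize(300, 3_000_000))   # ~0.010% per year, ~10,000 years
```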

So the next time you read something that breathlessly says …

“If this activity in northwest Greenland continues and really accelerates some of the major glaciers in the area — like the Humboldt Glacier and the Peterman Glacier — Greenland’s total ice loss could easily be increased by an additional 50 to 100 cubic kilometers (12 to 24 cubic miles) within a few years”

… you can say “Well, if it does increase by the larger estimate of 100 cubic km per year, and that’s a big if since the scientists are just guessing, that would increase the loss from 0.007% per year to around 0.010% per year, meaning that the Greenland Ice Cap would only last until May 23rd, 12010.”

Finally, the original article that got my blood boiling finishes as follows:

The good news for Luthcke is that a separate team using an entirely different method has come up with measurements of Greenland’s melting ice that, he says, are almost identical to his GRACE data. The bad news, of course, is that both sets of measurements make it all the more certain that Greenland’s ice is melting faster than anyone expected.

Oh, please, spare me. As the article points out, we’ve only been measuring Greenland ice using the GRACE satellites for six years now. How could anyone have “expected” anything? What, were they expecting a loss of 0.003% or something? And how is a Greenland ice loss of seven thousandths of one percent per year “bad news”? Grrrr …

I’ll stop here, as I can feel my blood pressure rising again. And as this is a family blog, I don’t want to revert to being the un-reformed cowboy I was in my youth, because if I did I’d start needlessly but imaginatively and loudly speculating on the ancestry, personal habits, and sexual malpractices of the author of said article … instead, I’m going to go drink a Corona beer and reflect on the strange vagaries of human beings, who always seem to want to read “bad news”.

http://wattsupwiththat.com/2010/05/23/on-being-the-wrong-size/

Body-by-Guinness

  • Guest
Challenging the Scientific/Congressional Complex
« Reply #334 on: May 26, 2010, 05:01:21 PM »
Climategate and the Scientific Elite
Climategate starkly revealed to the public how many global-warming scientists speak and act like politicians.
 
The news that Dr. Andrew Wakefield, who popularized the idea of a link between the MMR (measles, mumps, and rubella) vaccine and autism, has been struck off the register of general practitioners in the United Kingdom testifies to the fact that, in many scientific fields, objectivity still reigns. Britain’s General Medical Council found that Wakefield had used unethical and dishonest research methods and that when his conclusions became common knowledge, the result was that far more children were exposed to the risk of those diseases than would have been the case otherwise. Unfortunately, in other areas, some scientists have been getting away with blatant disregard for the scientific method.

The most prominent example, “Climategate,” highlights how dangerous the politicization of science can be. The public reaction to Climategate should motivate politicians to curb such abuses in the future. Yet it was politicians who facilitated this politicization of science in the first place.

The economic historians Terence Kealey (The Economic Laws of Scientific Research) and Joel Mokyr (The Gifts of Athena) help us understand just how science progresses. Their central insight involves the recursive nature of the scientific process. In Mokyr’s terms, propositional knowledge (what politicians term “basic” science) can inform prescriptive knowledge (“applied” science). However, the reverse happens just as often.

This understanding contradicts the linear model of scientific research, which became prevalent in America in the 1940s and ’50s, following the model of the great scientist Vannevar Bush. Under this model, we must invest in propositional knowledge as a public good, because that’s where our prescriptive knowledge comes from. Yet even as Bush’s model was taking hold, President Eisenhower warned against it. In his farewell address, just after the famous remarks about the military-industrial complex, he said:

Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers. The prospect of domination of the nation’s scholars by federal employment, project allocations, and the power of money is ever present — and is gravely to be regarded.

Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.

What Ike warned about has now come to pass. The scientific elite, with the help of its allies in Congress, increasingly dictates public policy and thereby secures the continued flow of research funding. Time and again, scientists have told me how they have to tie their work to global warming in order to obtain funding, and time and again — bar a few brave souls, who are immediately tagged as “deniers” — they tell me it would be career suicide to speak out openly about this.

Moreover, by consciously reinforcing the link between politics and science, the scientific elite is diminishing the role of private innovation, where prescriptive knowledge informed by market demand drives propositional knowledge. Thus, they are driving the market out of the marketplace of ideas.

For that reason, we must challenge the linear model of science. One way to do this is to break the link between political patronage and scientific funding. For example, we could fund basic science by awarding prizes for excellent research results instead of grants before the event. With their patronage powers curtailed, politicians might become less interested in scientific funding, allowing private money to fill the void.

That’s the good news about Climategate. It starkly revealed to the public how many global-warming scientists speak and act like politicians. To those scientists, the message trumped the science. Few members of the public have accepted the findings of the inquiries exonerating the scientists; most dismiss them as whitewashes. This is to the good, for it reinforces awareness of the scientific elite President Eisenhower warned about.

If politicians realize that the public regards them as corrupting science rather than encouraging it, they might become less inclined to continue funding the scientific-political complex. Then scientists would be free to deal with the Andrew Wakefields among them as needed, rather than worry about their funding.

— Iain Murray is vice president for strategy at the Competitive Enterprise Institute.

http://article.nationalreview.com/434861/climategate-and-the-scientific-elite/iain-murray

Rarick

  • Guest
Re: Pathological Science
« Reply #335 on: May 27, 2010, 02:35:51 AM »
There was an article in the June 2010 Wired (Pixar on the cover) about how climate science needs better PR and how plain old facts just do not speak for themselves.  The article is in this month's Wired, which is not on their website yet.  It is a scary story, because apparently the editor and staff bought the main point hook, line, and sinker.  Reality has nothing to do with the facts, but more to do with an appeal to the heart....... and that is the title of the article:  Appeal to the Heart.

Sorry I can't quote everything, but here are some key lines:

"Scientists feel the facts should speak for themselves. They are not wrong; they are just unrealistic" and "the messaging up to this point has been 'here are our findings. Read it and believe' The deniers are convincing people that science is propoganda" 

Wired's website is about 6 months behind the newsstand editions, so the article is not online yet.

Basically they seem to be saying "global warming is an issue if we can spin it hard enough and get enough people to agree; facts are secondary."  It sounds like an openly talked-about propaganda plan to me............

Body-by-Guinness

  • Guest
Royal Society Reevaluation
« Reply #336 on: June 01, 2010, 10:42:49 AM »
Rebel scientists force Royal Society to accept climate change scepticism
Ben Webster, Environment Editor


Britain’s premier scientific institution is being forced to review its statements on climate change after a rebellion by members who question mankind’s contribution to rising temperatures.

The Royal Society has appointed a panel to rewrite the 350-year-old institution’s official position on global warming. It will publish a new “guide to the science of climate change” this summer. The society has been accused by 43 of its Fellows of refusing to accept dissenting views on climate change and exaggerating the degree of certainty that man-made emissions are the main cause.

The society appears to have conceded that it needs to correct previous statements. It said: “Any public perception that science is somehow fully settled is wholly incorrect — there is always room for new observations, theories, measurements.” This contradicts a comment by the society’s previous president, Lord May, who was once quoted as saying: “The debate on climate change is over.”

The admission that the society needs to conduct the review is a blow to attempts by the UN to reach a global deal on cutting emissions. The Royal Society is viewed as one of the leading authorities on the topic and it nominated the panel that investigated and endorsed the climate science of the University of East Anglia.

Sir Alan Rudge, a society Fellow and former member of the Government’s Scientific Advisory Committee, is one of the leaders of the rebellion who gathered signatures on a petition sent to Lord Rees, the society president.

He told The Times that the society had adopted an “unnecessarily alarmist position” on climate change.

Sir Alan, 72, an electrical engineer, is a member of the advisory council of the climate sceptic think-tank, the Global Warming Policy Foundation.

He said: “I think the Royal Society should be more neutral and welcome credible contributions from both sceptics and alarmists alike. There is a lot of science to be done before we can be certain about climate change and before we impose upon ourselves the huge economic burden of cutting emissions.”

He refused to name the other signatories but admitted that few of them had worked directly in climate science and many were retired.

“One of the reasons people like myself are willing to put our heads above the parapet is that our careers are not at risk from being labelled a denier or flat-Earther because we say the science is not settled. The bullying of people into silence has unfortunately been effective.”

Only a fraction of the society’s 1,300 Fellows were approached and a third of those declined to sign the petition.

The rebels are concerned by a document entitled Climate Change Controversies, published by the society in 2007. The document attempts to refute what it describes as the misleading arguments employed by sceptics.

The document, which the society has used to influence media coverage of climate change, concludes: “The science clearly points to the need for nations to take urgent steps to cut greenhouse gas emissions into the atmosphere, as much and as fast as possible, to reduce the more severe aspects of climate change.”

Lord Rees admitted that there were differing views among Fellows but said that the new guide would be “based on expert views backed up by sound scientific evidence”.

Bob Ward, policy director of the Grantham Research Institute on Climate Change at LSE, urged the other signatories to come forward. “If these scientists have doubts about the science on climate change, they should come out and speak about it.”

He said that the petition would fuel public doubt about climate change and that it was important to know how many of the signatories had professional knowledge of the topic.

http://www.timesonline.co.uk/tol/news/environment/article7139407.ece

Body-by-Guinness

  • Guest
The Inconvenient Math
« Reply #337 on: June 04, 2010, 08:07:46 AM »
The Idiot’s Guide to Why Renewable Energy is Not the Answer



In the salons of the coastal elites, it is received wisdom that renewable energy sources are “the answer” - the answer to perceived climate change, the answer to our foreign oil dependency, the answer to how to feel good about yourself…the Answer.
In Washington, politicians can’t throw tax breaks at people and businesses fast enough, prodding them to install all manner of solar, wind, and geothermal devices.
Indeed, these energy sources are seductive. The wind blows, it’s free. Harnessing it, while not free, is certainly clean. Good stuff. The sun shines on us, why not use it? And so on.
The problem isn’t that these energy sources are bad, per se. They’re not.  You’re probably thinking that I’m now going to tell you that the problem is economics.  Yes, there’s truth to that argument; most renewables aren’t economic without subsidies, which is to say they aren’t economic. But some of them are close, and getting closer, so let’s put this aside. Let’s assume they are all inherently economic and can compete on an equal footing with traditional energy sources.
The problem is capacity. Renewables will not – cannot – ever be more than a fairly small fraction of our energy consumption because of fairly mundane reasons like land capacity. For a great perspective on this, I highly recommend William Tucker’s book, Terrestrial Energy.
Let’s go through these one by one.

Wind Power
O-klahoma, where the wind comes sweepin’ down the plain…
Yes, it does, and T. Boone Pickens wants to build lots of wind turbines there to take advantage of America’s “wind corridor.” He sure sounds compelling in those ads, and wind power is certainly growing. Right now, there is about 30,000 MW of wind capacity in the U.S., up considerably from a few years ago. This is the equivalent of about 40 power plants. That’s nice, but it’s only about 1% of our electrical needs (and, obviously, a much lower percentage of our overall power needs).
Actually, 1% overstates things quite a bit. The real contribution of wind is much lower, because wind turbines generate electricity only about 30% of the time. Wind just doesn’t sweep down the plain all the time. It also doesn’t always sweep when you want it to. For instance, peak electrical demand is in the summer, but this is when the wind doesn’t blow (much).
Why not store the energy then? You can’t, because the technology isn't there. This is one of the most important things to understand about our electrical grid; electricity must be produced when it is being consumed, and it is a balancing act. Produce too much, and you get electrical surges. Too little, and you get brownouts.
You can see why wind is highly inconvenient. You simply can’t control when you get it, and as a result, it would be impossible to run an entire grid on wind. In Denmark, they get about 20% of their power from wind, but they couldn’t do it without being heavily reliant on coal as the “stabilizer.” (The Danes have the highest electrical rates in Europe, but I know, I promised to assume economics away…)
With conventional energy sources such as coal, oil, and nuclear, the output can be controlled to match demand. I can hear what you’re thinking: so what’s wrong with simply using wind to augment the power supply? Nothing, except that it will never amount to much. This is where some very simple realities get in the way.


Wind turbines are becoming gigantic. The one above is the largest in the world and produces 6 MW of power (when the wind’s blowing). It would take 125 of these to approximate just one typical power plant. So, the question for all of us is, where are you going to put these babies? They completely alter the visual environment, and they make noise. Oh, and they kill birds.
Ted Kennedy and Robert Kennedy Jr., committed environmentalists both, bitterly fought a plan to put a wind farm off the shore of Cape Cod. Their problem? They’d have to look at them.
Do you know where wind blows the hardest? Across the tops of mountain ranges. Are you prepared to look at turbines like these along the tops of mountains?
You can see the problem, and yet even if people came to terms with the aesthetics, you could put turbines everywhere and it still wouldn’t add up to much power. It would take over six hundred thousand turbines like the one above to supplant our current power plants (assuming the wind blew constantly). Since it doesn't, it would take over two million. Never going to happen.
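A quick sanity check of those turbine counts, using only the figures quoted in this post (6 MW per turbine, a 30% capacity factor, 30,000 MW of installed wind, and the 750 MW "typical power plant" used later in the solar section) — a back-of-the-envelope sketch, not a grid model:

```python
# Rough arithmetic behind the turbine counts in this section.
# 750 MW for a "typical power plant" is the figure the post uses later on.

plant_mw = 750            # typical conventional power plant
turbine_mw = 6            # the giant turbine described above
capacity_factor = 0.30    # wind turbines produce only ~30% of the time
us_wind_mw = 30_000       # installed US wind capacity cited in the post

# Installed wind expressed in "power plant equivalents" (nameplate only).
print(us_wind_mw / plant_mw)                      # ~40 plants

# Turbines needed to match one plant, ignoring intermittency...
print(plant_mw / turbine_mw)                      # 125 turbines

# ...and after allowing for the 30% capacity factor.
print(plant_mw / (turbine_mw * capacity_factor))  # ~417 turbines

# Scaling the post's ~600,000-turbine figure by the capacity factor
# reproduces the "over two million" number.
print(600_000 / capacity_factor)                  # 2,000,000 turbines
```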

Solar
Solar may be closer to the heart of the environmental movement than all others, but it, too, will never amount to a hill of beans. The story is similar to wind: solar just can’t do any heavy lifting.
At the equator on a sunny day, 950 watts of power shines down on a square meter. That’s about 9 light bulbs’ worth. There is no way, short of violating the laws of physics, to enhance that number. In the U.S., the number is more like 400 watts over the course of a sunny day. We’re down to 4 light bulbs. However, we cannot convert 100% of this energy into electricity. Current technology captures about 15%.  Half a light bulb, more or less.
If we covered every rooftop of every home in America with solar panels, we could likely power the lighting needs of our homes, but only during the day when the sun is shining. During the night, when we actually need lights, panels are useless. As with wind, electrical power can't be stored at large scale.
The basic problem here is that solar power isn’t very, well, powerful. Sure, you can construct huge arrays like this…


…but they don’t accomplish very much. For instance, let’s say you went really big, and you created a solar array somewhere in the U.S. the size of five hundred football fields (roughly a square mile). How much power would you get? The answer is roughly 150 MW, and only during the day when the sun is out. A typical power plant produces about 750 MW. So supplanting one power plant would require five square miles of panels. This is not compelling.
Supplanting our entire electrical supply with solar would require turning the entire state of South Carolina into one large solar panel. Or...maybe we should stick them out in the desert. Seems logical. Senator Feinstein has proposed paneling over 500,000 acres of the Mojave Desert. But again, we run into mundane practical problems, even before considering things like the environmental impact of covering that much land. When solar panels collect dust and grime, they lose much of their effectiveness, so they must be cleaned frequently. Where, exactly, are we going to get the water needed for cleaning in the middle of the desert? And who's going to be out there wiping down 500,000 acres of panels?
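Here is the square-mile arithmetic spelled out, again using only the post's own figures (400 W per square meter averaged over a sunny day, 15% panel efficiency, 750 MW for a typical plant); the square-metres-per-square-mile conversion is the only outside number:

```python
# The square-mile solar arithmetic from the paragraphs above.
# 400 W/m^2 (sunny-day average) and 15% efficiency are the post's figures;
# the square-metres-per-square-mile conversion is the only outside number.

insolation_w_per_m2 = 400
panel_efficiency = 0.15
m2_per_square_mile = 2.59e6
plant_mw = 750                 # typical conventional plant, per the post

electric_w_per_m2 = insolation_w_per_m2 * panel_efficiency    # ~60 W/m^2
array_mw = electric_w_per_m2 * m2_per_square_mile / 1e6
print(array_mw)                # ~155 MW from one square mile, daytime only
print(plant_mw / array_mw)     # ~5 square miles to match one plant
```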

Furthermore, the more distant a source of electricity is from where it's used, the more of it you lose during transmission, as much as 50% over 115 miles. Not a lot of folks living near the Mojave. Feinstein is nuts.

Like wind, solar can be a marginally useful way to augment our power needs, but it will never be a significant contributor.
Hydro
Hydro power is really another form of capturing solar power. The sun evaporates water and redeposits it in the form of rain. Some of this rain is at higher altitudes and flows in rivers to lower altitudes. Dam a river, let the water flow through turbines, and you have hydro power.

The Hoover Dam
Hydro was once a major power source in the U.S., but it’s now down to less than 3%. It’s clean, yes, but the problem is that most of the good sites have already been used, and it is unlikely that politics will allow for any new ones. Groups like the Sierra Club want to remove ones we already have, because they say spawning patterns for fish are interfered with.
There’s also the matter of space. The Hoover Dam created Lake Mead, which is 247 square miles. Can anyone imagine that such a project could happen today? Correct answer: no. Tolerance for any significant new dam project is probably around zero. Hydro power will only decline as a percentage of our electrical needs.

Ethanol
Whose bright idea was it (Jimmy Carter) to take one third of our nation’s corn crop to create fuel? I’m not pointing fingers (Jimmy Carter), but this was a bad, bad idea. For one, it has driven food prices higher since the price of corn flows through to all kinds of other foods (think things like fructose and cow feed). Higher food prices have actually led to riots in the third world (see: Mexican Tortilla Riots).
But, that aside, food just doesn’t store much energy, so you need a lot of it to get any results.  Right now, the one third of our corn crop we are allocating to ethanol offsets less than 3% of our oil consumption. Tucker estimates that if you allocated every acre in the United States to ethanol production – assuming it was all arable – you’d offset about one-sixth of our oil needs.
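For what it's worth, a naive linear scaling of the post's own ethanol figures (one third of the corn crop offsetting about 3% of oil consumption) looks like this — a rough sketch that ignores land quality, energy inputs, and everything else:

```python
# Naive linear scaling of the ethanol figures above (one third of the corn
# crop offsets ~3% of oil use). Ignores land quality, energy inputs, etc.

corn_fraction_used = 1 / 3
oil_offset = 0.03

whole_crop_offset = oil_offset / corn_fraction_used
print(whole_crop_offset)            # ~0.09: the entire corn crop, ~9% of oil

print(1.0 / whole_crop_offset)      # ~11 whole corn crops to offset all oil
```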
Thank you for playing. Next.
Biomass
There's a big popular trend towards generating electricity (and home heating) by burning wood pellets. This is another innocuous idea, seemingly. Cut a tree, and another grows in its place. Environmentalists like burning trees because the released carbon is recaptured by the new trees that grow where the old ones were cut.
The problem, though, is the same as with corn: wood just doesn't store that much energy relative to its mass. To put it into perspective, one would have to cut and burn 56 million trees per year to match the output of one typical coal-fired plant. This is about 140,000 acres. If we replaced our entire power plant system with biomass, we would need to cut roughly 750 million acres of trees per year, or approximately one third of the land mass of the United States.
Once again, the simple math of it gets in the way of the best intentions.
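The biomass acreage numbers check out the same way; the only figure below that is not in the post is the approximate US land area of about 2.3 billion acres, which is an assumption on my part:

```python
# The biomass acreage arithmetic from the paragraph above. The ~2.3 billion
# acre US land area is an assumption; the rest are the post's own figures.

trees_per_plant_per_year = 56e6     # trees burned to match one coal plant
acres_per_plant = 140_000           # acreage those trees occupy
total_acres_needed = 750e6          # post's figure for the whole plant fleet
us_land_acres = 2.3e9               # approximate US land area (assumption)

print(trees_per_plant_per_year / acres_per_plant)   # ~400 trees per acre
print(total_acres_needed / acres_per_plant)         # ~5,360 plant-equivalents
print(total_acres_needed / us_land_acres)           # ~0.33 of US land area
```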


Geothermal
Geothermal is mostly limited to places where magma comes close to the earth’s surface. In the U.S., this means California and Hawaii. California gets about 5% of its electricity from geothermal, which is great, but unless someone figures out how to tap much deeper magma sources, this is also a highly limited source of power. Oh, and it smells really bad.
_____________________
Sorry, I know all this is a big buzz kill for many of you. Believing in renewables is so comforting. It's also fool's gold.
Right now, the U.S. is getting about 6% of its overall energy needs from renewables. Given all the practical  constraints I've outlined, getting this to even 10% would seem remarkable, which leaves us with...conventional energy sources, specifically oil, gas, coal, and nuclear. They will continue to do the heavy lifting, and no amount of fairy dust will change that.
Not coincidentally, all four of these energy sources are continuously under fire from the enormously powerful environmental lobby. But frankly, until the Sierra Club, Earth First, and others come up with some solutions of their own, rather than just things they oppose, they have no credibility, at least not with me. 
A large part of the answer, it would seem, would be to scale up our nuclear capacity. It's great stuff, nuclear. A few grams of matter hold as much energy as hundreds of miles of forests, sunshine, or wind. It's clean, and there's now a fifty-year operating history in the U.S. without a single fatality. France gets 80% of its power from nukes. We should be more like France. (Did I just say that?)

In the 1970s, the Three Mile Island incident and the film The China Syndrome happened almost simultaneously. The combination was enough to cement public opinion for a generation. For the left, the "no nukes" movement still has a warm and fuzzy resonance.  Some, though, are slowly coming around, because nukes don't produce greenhouse gases. President Obama has even made some positive noises about nuclear power, so maybe there's hope.
Perhaps we need a movie about renewables. We could call it "The Inconvenient Math."

http://thenakeddollar.blogspot.com/2010/06/idiots-guide-to-why-renewable-energy-is.html

Body-by-Guinness

  • Guest
A New Thing to be Really Scared About
« Reply #338 on: June 09, 2010, 05:41:14 AM »
Climate Alarmism Takes Off in a New Direction

By F. Swemson
NASA has just voiced its concern over the threat that our modern technological society is now facing from "solar storms."

Now it's true, of course, that our society has become quite dependent on new technology, such as satellite communications and GPS mapping, that is vulnerable to the effects of major solar storms, but NASA seems to be a bit too worried about how big the threat really is. Fortunately for us, legitimate climate scientists believe the next solar "maximum," which is due in four to five years, will not be anything unusual. In any event, while major solar storms could screw things up pretty well for a while, they're not potentially fatal to mankind, as the AGW alarmists claim that global warming is.

According to my friend Dr. Ed Berry of climatephysics.com, there's a good reason why NASA is making a big deal out of it. It's called "Grantsmanship." If they can convince Congress that this is a serious threat, which they're obviously best-positioned to research and plan for, then there's a chance that they can get Congress to increase their funding. There's a good deal of exaggeration here, of course, as the "super maximum solar flare" that they're talking about, while possible, is no more likely to occur within the next few years than a hundred-year flood.

But it is possible, of course. So it doesn't seem at all unreasonable for them to want more funding so that they can enhance their forecasting capabilities, which have already come a long way, thanks to our existing satellites.

There's another aspect to this story, however, that might be more interesting for us to consider right now, as it may help us to understand how the global warming hoax arose. While NASA is talking about the issue, they're doing so in a relatively calm and reasonable manner. We can see this from the title of their latest release on the subject:

NASA: "As the Sun Awakens, NASA Keeps a Wary Eye on Space Weather"

Within 48 hours, however, as the media began to report on the story, it quickly began to morph into another pending apocalypse:

Washington Post: "Do Solar Storms Threaten Life as We Know It?"

Gawker.com: "The Newest Threat to All Human Life on Earth: Solar Storms"

It seems that Fox News isn't quite as over-the-top.

Fox News: "Solar Storms Could Be Threat in 2013"

Why does the media do this?

The simple (historical) answer, of course, is that catastrophes sell newspapers. William Randolph Hearst may have given formal birth to "yellow journalism," but he wasn't its only practitioner. I wrote about this back in January in my article "159 Years of Climate Alarmism at The New York Times."

In the last part of the 19th century, newspapers like The Times were warning us of an imminent new ice age. By 1940, they were worried about excessive warming; however, after the next 35 years of cooling, they were quick to transition to hysteria over another approaching ice age. By the late 1970s, after the earth began warming again, people like Maurice Strong, along with some other blossoming environmental extremists, politicians, and U.N. officials, began to see the huge potential for power and wealth that was built into the issue. This, along with the added fantasy of AGW, brought us to the brink of Cap & Trade and the fraudulent EPA classification of CO2 as a pollutant.

Of course, the warming stopped in 2002, and we've been cooling steadily since then. Now, as the AGW house of cards begins to crumble, we're already hearing rumblings about another ice age. It could happen, and if it does, we should be worried this time, because while warming was never any kind of threat at all, extreme cooling is. People starve when crops don't grow very well.

The obvious question is: "If they've been wrong each and every time in the past, why on earth would anyone still be listening to them today?"

H.L. Mencken answered it best when he said, "The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary."

The public has a short memory, to be sure, but rather than blaming the politicians alone for the current AGW hoax, we should be aware of the media's complicity in this sordid affair. They're still not reporting the news honestly. Like Hearst, they're still making it up as they go. Just to sell "newspapers."

http://www.americanthinker.com/2010/06/climate_alarmism_takes_off_in.html

Rarick

  • Guest
Re: Pathological Science
« Reply #339 on: June 10, 2010, 02:48:27 AM »
I remember a couple of shows on the Discovery Channel that came out a couple of years before global warming became an issue.  The first show was about a correlation in Greenland core samples going back several hundred thousand years showing a cooling and warming cycle.  Another scientist was taking core samples from the sea bed and found corresponding changes in plankton/diatoms that lagged a bit but matched up with the Greenland samples.   These temperature swings matched up with ice ages, and the diatom record matched up as well.  The swings were taking place long before humanity was anywhere near abundant enough to have any input.   The last 15 minutes of the show explored "possibilities" as to the cause, ranging from a "solar low" to a shutdown of the "oceanic conveyor," the most apparent part of which is the Gulf Stream.

A couple of weeks later there was a show about the oceanic conveyor, the great current that slowly moves water up and down from the Arctic Ocean through the Atlantic, Pacific, and Indian Oceans and around Africa.  The Antarctic current was basically a circular current that isolated Antarctic waters except at the two mixing points of the Magellan and Good Hope straits........  During the ice ages this conveyor apparently shut down, and that was caused by a period of warming that melted enough ice to lower the salinity of the oceans.  Salinity is what causes the cold Arctic waters to "go deep" and make room for warm surface waters to flow in.  The Gulf Stream is what keeps England from freezing up and keeps Arctic ice melted back to manageable levels.  The show then went on to speculate that all the ice ages may have been preceded by an extended/excessive heating period sufficient to shut down the heat conveyor in the oceans.........

Both shows talked about ongoing research being undertaken to figure out these mechanisms.  They were both fascinating and a bit alarming, in that they showed mechanisms working on the climate that we had NO apparent control over and no real understanding of either.  Both shows mentioned greenhouse gases, but stated that while these had an input, it was unquantifiable without further study.

Two years later An Inconvenient Truth came out and the hysteria started.  Ever since then it has been politics, with Deniers and Advocates and all the other labels flying around.  I have never heard or seen any of the scientists mentioned in those shows in any of the debate, and they were developing hard, long-term, observed data, with an attitude of: what are the numbers, and what do they tell us?

The current warming/climate change hysteria is all about which computer model is legitimate and what a government needs to do about the threat of global warming.  It is not surprising that a computer model cannot be agreed upon: they all use short-term data, and they are all built on the approach of "put in the numbers and see if they match our theory, and if they do not, what is wrong with the observed numbers?"

Meantime there is not a peep from the scientists who were generating the numbers by going out and freezing, getting seasick, or otherwise doing real observational science.  Why?  Was their funding pulled because their data was inconvenient to the political agenda? (There is evidence in the "email outing" that opposing opinion and peer review were being actively suppressed.)  Or has their work become a sideline now that the political balls are all rolling the way the various power-broking groups want them to?

I remember the charts on global warming in Wikipedia from the Greenland ice core show, and some of the other information in Wikipedia is derived from the sea-core show.  The larger part of the Wikipedia entry is devoted to the "hockey stick," though, which we now know rests on doctored data.  I await further news from the field scientists, not the lobby/computer-room scientists, but the MSM seems disinclined to meet its duty and mandate in that regard..........

Sorry about the rant, but that sums up about 15 years of keeping a casual eye on this debate. A lot of recent talk seems to indicate it is over, but any honest thinker can see that it is still ongoing.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72330
    • View Profile
Re: Pathological Science
« Reply #340 on: June 10, 2010, 05:15:17 AM »
That point about ocean currents is very interesting.  May I suggest that this subject is worthy of further attention from our resident point men for this issue?

Body-by-Guinness

  • Guest
New Climate Model, 1
« Reply #341 on: June 10, 2010, 06:20:21 AM »
Not exactly ocean current specific, but a new way of looking at the planet's energy (warming/cooling) budget:

A New And Effective Climate Model
Posted on April 6, 2010 by Anthony Watts
The problem with existing climate models:

Guest post by Stephen Wilde


From ETH, Zurich - Climate model (Ruddiman, 2001)
Even those who aver that man’s activity affects climate on a global scale rather than just locally or regionally appear to accept that the existing climate models are incomplete. It is a given that the existing models do not fully incorporate data or mechanisms involving cloudiness or global albedo (reflectivity) variations or variations in the speed of the hydrological cycle and that the variability in the temperatures of the ocean surfaces and the overall ocean energy content are barely understood and wholly inadequately quantified in the infant attempts at coupled ocean/atmosphere models. Furthermore the effect of variability in solar activity on climate is barely understood and similarly unquantified.

As they stand at present the models assume a generally static global energy budget with relatively little internal system variability so that measurable changes in the various input and output components can only occur from external forcing agents such as changes in the CO2 content of the air caused by human emissions or perhaps temporary after effects from volcanic eruptions, meteorite strikes or significant changes in solar power output.

If such simple models are to have any practical utility it is necessary to show that some predictive skill emerges from them. Unfortunately it is apparent that there is no predictive skill whatever, despite huge advances in processing power and the application of millions or even billions of man hours from reputable and experienced scientists over many decades.

As I will show later on, virtually all climate variability is a result of internal system variability, and additionally the system not only sets up a large amount of variability internally but also provides mechanisms to limit and then reduce that internal variability. It must be so or we would not still have liquid oceans. The current models recognise neither the presence of that internal system variability nor the processes that ultimately stabilise it.

The general approach is currently to describe the climate system from ‘the bottom up’ by accumulating vast amounts of data, observing how the data has changed over time, attributing a weighting to each piece or class of data and extrapolating forward. When the real world outturn then differs from what was expected then adjustments are made to bring the models back into line with reality. This method is known as ‘hindcasting’.

Although that approach has been used for decades no predictive skill has ever emerged. Every time the models have been adjusted using guesswork (or informed judgment as some would say) to bring them back into line with ongoing real world observations a new divergence between model expectations and real world events has begun to develop.
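To make the "hindcast, then re-tune" procedure described above concrete, here is a deliberately toy Python sketch of that loop. The data, the polynomial "model", and the divergence threshold are all invented purely for illustration; this is not anyone's actual climate model.

```python
# A toy illustration of the "hindcast, then re-tune" loop described above:
# fit weights to past data, extrapolate, and re-fit when the forecast drifts
# away from new observations. Data and model are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2011)
# Invented "observations": a slow oscillation plus noise.
obs = 0.3 * np.sin((years - 1950) / 12.0) + rng.normal(0, 0.05, years.size)

def hindcast_fit(x, y, degree=1):
    """Fit a simple polynomial to past data and return it as a forecast function."""
    return np.poly1d(np.polyfit(x, y, degree))

# Step 1: fit on data up to 1990 and extrapolate the next 20 years.
train = years <= 1990
model = hindcast_fit(years[train], obs[train])
forecast = model(years[~train])

# Step 2: when the forecast diverges from the outturn, re-fit on the longer
# record -- the adjustment the post calls "bringing the model back into line".
divergence = float(np.abs(forecast - obs[~train]).mean())
if divergence > 0.05:
    model = hindcast_fit(years, obs)
print(round(divergence, 3))
```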

It is now some years since the weighting attached to the influence of CO2 was adjusted to remove a developing discrepancy between the real-world warming that was occurring at the time and the then climate models, which had not fully accounted for it. Since that time a new divergence began and is now becoming embarrassingly large for those who made that adjustment. At the very least the weighting given to the effect of more CO2 in the air was excessive.

The problem is directly analogous to a financial accounting system that balances but only because it contains multiple compensating errors. The fact that it balances is a mere mirage. The accounts are still incorrect and woe betide anyone who relies upon them for the purpose of making useful commercial decisions.

Correcting multiple compensating errors either in a climate model or in a financial accounting system cannot be done by guesswork because there is no way of knowing whether the guess is reducing or compounding the underlying errors that remain despite the apparent balancing of the financial (or in the case of the climate the global energy) budget.

The system being used by the entire climatological establishment is fundamentally flawed and must not be relied upon as a basis for policy decisions of any kind.

A better approach:

We know a lot about the basic laws of physics as they affect our day to day existence and we have increasingly detailed data about past and present climate behaviour.

We need a New Climate Model (from now on referred to as NCM) that is created from ‘the top down’ by looking at the climate phenomena that actually occur and using deductive reasoning to decide what mechanisms would be required for those phenomena to occur without offending the basic laws of physics.

We have to start with the broad concepts first and use the detailed data as a guide only. If a broad concept matches the reality then the detailed data will fall into place even if the broad concept needs to be refined in the process. If the broad concept does not match the reality then it must be abandoned but by adopting this process we always start with a broad concept that obviously does match the reality so by adopting a step by step process of observation, logic, elimination and refinement a serviceable NCM with some predictive skill should emerge and the more detailed the model that is built up the more predictive skill will be acquired.

That is exactly what I have been doing step by step in my articles here:

Articles by Stephen Wilde

for some two years now and I believe that I have met with a degree of success because many climate phenomena that I had not initially considered in detail seem to be falling into line with the NCM that I have been constructing.

In the process I have found it necessary to propound various novel propositions that have confused and irritated warming proponents and sceptics alike but that is inevitable if one just follows the logic without a preconceived agenda which I hope is what I have done.

I will now go on to describe the NCM as simply as I can in verbal terms, then I will elaborate on some of the novel propositions (my apologies if any of them have already been propounded elsewhere by others but I think I would still be the first to pull them all together into a plausible NCM) and I will include a discussion of some aspects of the NCM which I find encouraging.

Preliminary points:

1. Firstly we must abandon the idea that variations in total solar output have a significant effect over periods of time relevant to human existence. At this point I should mention the ‘faint sun paradox’:
http://en.wikipedia.org/wiki/Faint_young_Sun_paradox

Despite a substantial increase in the power of the sun over billions of years the temperature of the Earth has remained remarkably stable. My proposition is that the reason for that is the existence of water in liquid form in the oceans combined with a relatively stable total atmospheric density. If the power input from the sun changes then the effect is simply to speed up or slow down the hydrological cycle.

An appropriate analogy is a pan of boiling water. However much the power input increases, the boiling point remains at 100°C; only the speed of boiling changes in response to the level of power input. The boiling point itself changes only if the density of the air above, and thus the pressure on the water surface, changes. In the case of the Earth’s atmosphere, a change in solar input is met with a change in evaporation rates and thus in the speed of the whole hydrological cycle, keeping the overall temperature stable despite the change in solar power input.
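The boiling-pan point can be put in rough numbers. The sketch below assumes only the textbook latent heat of vaporisation of water (about 2.26 MJ/kg); it shows that extra heating power changes the rate of vaporisation, not the temperature, which is the behaviour the analogy relies on. It says nothing about whether the analogy transfers to the real ocean and atmosphere.

# Boiling-pan analogy: at constant pressure, extra power goes into faster
# phase change, not higher temperature. Standard physical constants only;
# the climate application is the author's analogy, not calculated here.

L_VAP = 2.26e6     # latent heat of vaporisation of water at 100 C, J/kg (approx.)
T_BOIL = 100.0     # boiling point at 1 atm, deg C (pressure-dependent)

def boil_rate(power_watts):
    """kg of water vaporised per second for a given heating power."""
    return power_watts / L_VAP

for power in (1000.0, 2000.0, 4000.0):   # hypothetical hob settings, W
    print(f"{power:6.0f} W -> {boil_rate(power)*1000:6.2f} g/s at {T_BOIL} C")

# The temperature column never changes; only the evaporation rate does.
# Raising the boiling point itself would require raising the pressure.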

A change in the speed of the entire hydrological cycle does have a climate effect but as we shall see on timescales relevant to human existence it is too small to measure in the face of internal system variability from other causes.

Unless more CO2 could increase total atmospheric density, it could not have a significant effect on global tropospheric temperature. Instead the speed of the hydrological cycle changes, to a minuscule and unmeasurable extent, in order to maintain sea surface and surface air temperature equilibrium. As I have explained previously, a change limited to the air alone, short of an increase in total atmospheric density and pressure, is incapable of altering that underlying equilibrium.

2. Secondly we must realise that the absolute temperature of the Earth as a whole is largely irrelevant to what we perceive as climate. In any event, changes in the temperature of the Earth as a whole are tiny, thanks to the rapid modulating effect of changes in the speed of the hydrological cycle and in the speed of the flow of radiated energy to space, which always seeks to match the energy value of the whole spectrum of energy coming in from the sun.

The climate in the troposphere is a reflection of the current distribution of energy within the Earth system as a whole and internally the system is far more complex than any current models acknowledge.

That distribution of energy can be uneven horizontally and vertically throughout the ocean depths, the troposphere and the upper atmosphere and furthermore the distribution changes over time.

We see ocean energy content increase or decrease as tropospheric energy content decreases or increases. We see the stratosphere warm as the troposphere cools and cool as the troposphere warms. We see the upper levels of the atmosphere warm as the stratosphere cools and vice versa. We see the polar surface regions warm as the mid latitudes cool or the tropics warm as the poles cool and so on and so forth in infinite permutations of timing and scale.

As I have said elsewhere:

“It is becoming increasingly obvious that the rate of energy transfer varies all the time between ocean and air, air and space and between different layers in the oceans and air. The troposphere can best be regarded as a sandwich filling between the oceans below and the stratosphere above. The temperature of the troposphere is constantly being affected by variations in the rate of energy flow from the oceans driven by internal ocean variability, possibly caused by temperature fluctuations along the horizontal route of the thermohaline circulation and by variations in energy flow from the sun that affect the size of the atmosphere and the rate of energy loss to space.

The observed climate is just the equilibrium response to such variations with the positions of the air circulation systems and the speed of the hydrological cycle always adjusting to bring energy differentials above and below the troposphere back towards equilibrium (Wilde’s Law ?).

Additionally my propositions provide the physical mechanisms accounting for the mathematics of Dr. F. Miskolczi.”

http://www.examiner.com/x-7715-Portland-Civil-Rights-Examiner~y2010m1d12-Hungarian-Physicist-Dr-Ferenc-Miskolczi-proves-CO2-emissions-irrelevant-in-Earths-Climate

He appears to have demonstrated mathematically that if greenhouse gases in the air other than water vapour increase, then the amount of water vapour declines so as to maintain an optimum optical depth for the atmosphere, which modulates the energy flow to maintain sea surface and surface air temperature equilibrium. In other words, the hydrological cycle speeds up or slows down, just as I have always proposed.

3. In my articles to date I have been unwilling to claim anything as grand as the creation of a new model of climate because until now I was unable to propose any solar mechanism that could result directly in global albedo changes without some other forcing agent or that could account for a direct solar cause of discontinuities in the temperature profile along the horizontal line of the oceanic thermohaline circulation.

I have now realised that the necessary global albedo changes, and the changes in solar energy input to the oceans, can be explained by the latitudinal shifts (beyond normal seasonal variation) of all the air circulation systems and, in particular, of the net latitudinal positions of the three main cloud bands: the two generated by the mid-latitude jet streams plus the Inter Tropical Convergence Zone (ITCZ).

The secret lies in the declining angle of incidence of solar energy input from equator to poles.

It is apparent that the same size and density of cloud mass moved, say, 1000 miles nearer to the equator will have the following effects:

It will receive more intense irradiation from the sun and so will reflect more energy to space.

It will reduce the amount of energy reaching the surface compared to what it would have let in if situated more poleward.

In the northern hemisphere, due to the current land/sea distribution, the more equatorward the cloud moves the more ocean surface it will cover, thus reducing total solar input to the oceans and reducing the rate of accretion to ocean energy content.

It will produce cooling rains over a larger area of ocean surface.

As a rule the ITCZ sits north of the equator, because most ocean is in the southern hemisphere and it is ocean temperatures that dictate its position by governing the rate of energy transfer from oceans to air. Thus if the two mid-latitude jets move equatorward at the same time as the ITCZ moves closer to the equator, the combined effect on global albedo and on the amount of solar energy able to penetrate the oceans will be substantial. It would dwarf the other proposed influences on albedo: changes in cosmic ray intensity altering cloud totals, as per Svensmark, and the changes in upper-level cloud suggested to arise from atmospheric chemistry involving ozone, which various other climate sceptics propose.
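The ‘angle of incidence’ geometry can be illustrated with a toy calculation. It assumes a crude annual-mean insolation that scales with the cosine of latitude (ignoring axial tilt, day length and atmospheric path) and an invented cloud albedo of 0.5; a 1000-mile equatorward shift is treated as roughly 14 to 15 degrees of latitude. The numbers are illustrative only.

import math

# Toy comparison of sunlight intercepted by the same cloud band at two
# latitudes. The cos(latitude) scaling is a crude annual-mean approximation
# (it ignores axial tilt, day length and atmospheric path); the albedo is
# an assumed illustrative value.

S0 = 1361.0          # solar constant, W/m^2 (approx.)
CLOUD_ALBEDO = 0.5   # assumed reflectivity of the cloud band

def mean_insolation(lat_deg):
    """Very rough annual-mean top-of-cloud insolation at a given latitude."""
    return (S0 / math.pi) * math.cos(math.radians(lat_deg))

for lat in (45.0, 30.5):   # ~14.5 degrees equatorward is roughly 1000 miles
    flux = mean_insolation(lat)
    print(f"lat {lat:4.1f}N: incident ~{flux:5.1f} W/m^2, "
          f"reflected ~{flux * CLOUD_ALBEDO:5.1f} W/m^2")

# The equatorward position both reflects more energy to space and shades a
# more strongly irradiated patch of surface, which is the effect described
# qualitatively in the paragraphs above.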

Thus the following NCM will incorporate my above described positional cause of changes in albedo and rates of energy input to the oceans rather than any of the other proposals. That then leads to a rather neat solution to the other theories’ problems with the timing of the various cycles as becomes clear below.

4. I have previously described why the solar effect on climate is not as generally thought, but for convenience I will summarise the issue here because it will help readers to follow the logic of the NCM. Variations in total solar power output on timescales relevant to human existence are tiny and are generally countered by a minuscule change in the speed of the hydrological cycle, as described above.

However according to our satellites variations in the turbulence of the solar energy output from sunspots and solar flares appear to have significant effects.

During periods of an active solar surface our atmosphere expands and during periods of inactive sun it contracts.

When the atmosphere expands it does so in three dimensions around the entire circumference of the planet, but the number of molecules in the atmosphere remains the same, with the result that average density per unit of volume falls and there is more space between the molecules. Consequently the atmosphere presents reduced resistance to outgoing longwave photons, which are obstructed by molecules less frequently.
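The density argument in the previous paragraph can be expressed as a simple scaling exercise: a fixed number of molecules spread through a slightly larger shell has a lower mean number density, and the mean free path between collisions, which goes as 1/(n·σ), lengthens in proportion. The shell heights, molecular inventory and cross-section below are all assumed illustrative values; the arithmetic shows only the geometric dilution, not the radiative consequence the author claims for it.

import math

# Toy scaling: a fixed number of molecules in a spherical shell between the
# surface (radius R_EARTH) and an atmospheric "top". Expanding the top by a
# small amount lowers mean number density; mean free path ~ 1/(n * sigma).
# All heights and inventories here are illustrative, not measured values.

R_EARTH = 6.371e6          # m
N_MOLECULES = 1.0e44       # arbitrary fixed inventory for the comparison
SIGMA = 3.0e-19            # assumed collision cross-section, m^2 (illustrative)

def shell_volume(top_height_m):
    outer = R_EARTH + top_height_m
    return (4.0 / 3.0) * math.pi * (outer**3 - R_EARTH**3)

def mean_density(top_height_m):
    return N_MOLECULES / shell_volume(top_height_m)

def mean_free_path(top_height_m):
    return 1.0 / (mean_density(top_height_m) * SIGMA)

for top in (100e3, 110e3):   # hypothetical contracted vs expanded "top", m
    n = mean_density(top)
    print(f"top {top/1e3:5.0f} km: n ~ {n:.3e} /m^3, mfp ~ {mean_free_path(top):.3e} m")

# Expanding the shell from 100 km to 110 km cuts the average density by
# roughly ten per cent, which is the sense in which "the same mass is spread
# over more space". Any radiative consequence claimed above is a separate,
# contested question.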

Additionally a turbulent solar energy flow disturbs the boundaries of the layers in the upper atmosphere thus increasing their surface areas allowing more energy to be transferred from layer to layer just as wind on water causes waves, an increased sea surface area and faster evaporation.

The changes in the rate of outgoing energy flow caused by changes in solar surface turbulence may be small but they appear to be enough to affect the air circulation systems and thereby influence the overall global energy budget disproportionately to the tiny variations in solar power intensity.

Thus when the sun is more active, far from warming the planet, it is facilitating an increased rate of cooling of the planet. That is why the stratosphere cooled during the late 20th century period of a highly active sun even though the higher levels of the atmosphere warmed: the higher levels were warmed by direct solar impacts, but the stratosphere cooled because energy was going up faster than it was being received from the troposphere below.

The opposite occurs for a period of inactive sun.


Body-by-Guinness

  • Guest
New Climate Model, 2
« Reply #342 on: June 10, 2010, 06:20:42 AM »
Some say that the expansion and contraction of the atmosphere makes no difference to the speed of the outward flow of longwave energy because the outgoing energy still has to negotiate the same mass, but that makes no sense to me if that mass is distributed through a larger three-dimensional volume. If a fine fabric container holds a body of liquid, the speed at which the liquid escapes will increase if the fabric is stretched to a larger size, because the spaces between the fibres will widen.

Furthermore all that the NCM requires is for the stratosphere alone to lose or gain energy faster or slower so as to influence the tropospheric polar air pressure cells. The energy does not need to actually escape to space to have the required effect. It could just as well simply take a little longer or a little less long to traverse the expanded or contracted upper atmospheric layers.

The New Climate Model (NCM)

1) Solar surface turbulence increases, causing an expansion of the Earth’s atmosphere.
2) Resistance to outgoing longwave radiation reduces; energy is lost to space faster.
3) The stratosphere cools. Possibly the number of chemical reactions in the upper atmosphere also increases due to the increased solar effects, with faster destruction of ozone.
4) The tropopause rises.
5) There is less resistance to energy flowing up from the troposphere, so the polar high pressure systems shrink and weaken, accompanied by increasingly positive Arctic and Antarctic Oscillations.
6) The air circulation systems in both hemispheres move poleward and the ITCZ moves further north of the equator as the speed of the hydrological cycle increases, the cooler stratosphere having increased the temperature differential between stratosphere and surface.
7) The main cloud bands move more poleward to regions where solar insolation is less intense, so total global albedo decreases.
8) More solar energy reaches the surface, and in particular the oceans, as more ocean surface north of the equator is exposed to the sun by the movement of the clouds to cover more continental regions.
9) Less rain falls on ocean surfaces, allowing them to warm more.
10) Ocean energy input increases but not all is returned to the air. A portion enters the thermohaline circulation to embark on a journey of 1000 to 1500 years. A pulse of slightly warmer water has entered the ocean circulation.
11) Solar surface turbulence passes its peak and the Earth’s atmosphere starts to contract.
12) Resistance to outgoing longwave radiation increases; energy is lost to space more slowly.
13) The stratosphere warms. Ozone levels start to recover.
14) The tropopause falls.
15) There is increased resistance to energy flowing up from the troposphere, so the polar high pressure systems expand and intensify, producing increasingly negative Arctic and Antarctic Oscillations.
16) The air circulation systems in both hemispheres move back equatorward and the ITCZ moves nearer the equator as the speed of the hydrological cycle decreases, the warming stratosphere having reduced the temperature differential between stratosphere and surface.
17) The main cloud bands move more equatorward to regions where solar insolation is more intense, so total global albedo increases once more.
18) Less solar energy reaches the surface, and in particular the oceans, as less ocean surface north of the equator is exposed to the sun by the movement of the clouds to cover more oceanic regions.
19) More rain falls on ocean surfaces, further cooling them.
20) Ocean energy input decreases and the amount of energy entering the thermohaline circulation declines, sending a pulse of slightly cooler water on that 1000 to 1500 year journey.
21) After 1000 to 1500 years those variations in energy flowing through the thermohaline circulation return to the surface, influencing the size and intensity of the ocean surface temperature oscillations that have now been noted in all the main ocean basins, in particular the Pacific and the Atlantic. It is likely that the current powerful run of positive Pacific Decadal Oscillations is the pulse of warmth from the Mediaeval Warm Period returning to the surface, with a consequent and inevitable increase in atmospheric CO2 as that warmer water takes up less CO2 by absorption: cooler water absorbs more CO2, warmer water absorbs less (a simple solubility sketch follows this list). We have the arrival of the cool pulse from the Little Ice Age to look forward to, and the scale of its effect will depend upon the level of solar surface activity at the time. A quiet sun would be helpful; otherwise the rate of tropospheric cooling, with an active sun throwing energy into space at the same time as the oceans deny energy to the air, will be fearful indeed. Fortunately the level of solar activity does seem to have begun a decline from recent peaks.
22) The length of the thermohaline circulation is not synchronous with the length of the variations in solar surface turbulence, so it is very much a lottery whether a returning warm or cool pulse will encounter an active or inactive sun.
23) A returning warm pulse will try to expand the tropical air masses as more energy is released and will try to push the air circulation systems poleward against whatever resistance is being supplied at the time by the then level of solar surface turbulence. A returning cool pulse will present less opposition to solar effects.
24) Climate is simply a product of the current balance in the troposphere between the solar and oceanic effects on the positions and intensities of all the global air circulation systems.
25) The timing of the solar cycles and ocean cycles will drift relative to one another because of their asynchronicity, so there will be periods when solar and ocean cycles supplement one another in transferring energy out to space and other periods when they offset one another.
26) During the current interglacial the solar and oceanic cycles are broadly offsetting one another to reduce overall climate variability, but during glacial epochs they broadly supplement one another to produce much larger climate swings. The active sun during the Mediaeval Warm Period and the Modern Warm Period and the quiet sun during the Little Ice Age reduced the size of the climate swings that would otherwise have occurred. During the former two periods the extra energy from a warm ocean pulse was ejected quickly to space by an active sun, reducing tropospheric heating. During the latter period the effect on tropospheric temperatures of reduced energy from a cool ocean pulse was mitigated by the slower ejection of energy to space from a less active sun.
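The solubility point in item 21 (cooler water absorbs more CO2) is standard Henry's-law behaviour and can be sketched with a van 't Hoff-style temperature correction. The constants below are approximate textbook values for CO2 in fresh water; the sketch supports only that one uncontroversial sub-claim, not the surrounding circulation narrative.

import math

# Van 't Hoff-style temperature dependence of Henry's law solubility for CO2
# in water: k_H(T) = k_H(T_ref) * exp(C * (1/T - 1/T_ref)).
# K_H_REF and C_VANT_HOFF are approximate textbook values (the temperature
# constant for CO2 is commonly tabulated around 2400 K); they illustrate the
# direction and rough size of the effect, nothing more.

K_H_REF = 3.4e-2      # mol/(L*atm) at 298.15 K, approximate
C_VANT_HOFF = 2400.0  # K, approximate temperature-dependence constant for CO2
T_REF = 298.15        # K

def henry_solubility(temp_c):
    """Approximate CO2 solubility (mol per litre per atm) at temp_c."""
    temp_k = temp_c + 273.15
    return K_H_REF * math.exp(C_VANT_HOFF * (1.0 / temp_k - 1.0 / T_REF))

for t in (5.0, 15.0, 25.0):
    print(f"{t:4.1f} C: ~{henry_solubility(t):.3f} mol/(L*atm)")

# Colder water dissolves noticeably more CO2 per unit of partial pressure,
# which is the (uncontroversial) solubility point the list item leans on.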

Discussion points:

Falsification:

Every serious hypothesis must be capable of being proved false. In the case of this NCM, my narrative is replete with opportunities for falsification if future real-world observations diverge from the pattern of cause and effect that I have set out.

However that narrative is based on what we have actually observed over a period of 1000 years with the gaps filled in by deduction informed by known laws of physics.

At the moment I am not aware of any observed climate phenomena that would falsify it. If there are any that suggest such a thing, I suspect they will call for refinement of the NCM rather than abandonment.

For true falsification we would need to observe events such as: the mid-latitude jets moving poleward during a cooling oceanic phase and a period of quiet sun; the ITCZ moving northward whilst the two jets moved equatorward; the stratosphere, troposphere and upper atmosphere all warming or cooling in tandem; or perhaps an unusually powerful Arctic Oscillation throughout a period of high solar turbulence and a warming ocean phase.

They say nothing is impossible so we will have to wait and see.

Predictive skill:

To be taken seriously the NCM must be seen to show more predictive skill than the current computer based models.

In theory that shouldn’t be difficult because their level of success is currently zero.

From a reading of my narrative it is readily apparent that, if the NCM matches reality, many predictions can be made. They may not be precise in scale or timing, but they are nevertheless useful in identifying where we are in the overall scheme of things and the most likely direction of the future trend.

For example if the mid latitude jets stay where they now are then a developing cooling trend can be expected.

If the jets move poleward for any length of time then a warming trend may be returning.

If the solar surface becomes more active then we should see a reduction in the intensity of the Arctic Oscillation.

If the current El Nino fades to a La Nina then the northern winter snows should not be as intense next winter but it will nevertheless be another cold though drier northern hemisphere winter as the La Nina denies energy to the air.

The past winter is a prime example of what the NCM suggests for a northern winter with an El Nino during a period of quiet sun. The warmth from the oceans pumps energy upwards, but the quiet sun prevents the poleward movement of the jets. The result is warming of the tropics and of the highest latitudes (though the latter stay below the freezing point of water), with a flow of cold into the mid latitudes and more precipitation falling as snow at lower latitudes than normal.
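One way to make ‘predictive skill’ testable is to write the qualitative rules above down explicitly, so they could later be scored against observations. The encoding below is hypothetical and paraphrases the preceding paragraphs; the condition and outcome strings are my own summaries, not taken from any published model.

# Hypothetical, testable restatement of the qualitative predictions above.
# Each rule maps an observable condition to the trend the NCM narrative
# expects; scoring them against later data is what "predictive skill"
# would actually require.

NCM_RULES = [
    ("mid-latitude jets stay equatorward",         "developing cooling trend"),
    ("mid-latitude jets move poleward, sustained",  "warming trend returning"),
    ("solar surface becomes more active",           "weaker (less negative) Arctic Oscillation"),
    ("El Nino fades to La Nina",                    "cold but drier northern-hemisphere winter"),
]

def expected_outcomes(observed_conditions):
    """Return the narrative's expected outcome for each observed condition."""
    return {cond: outcome for cond, outcome in NCM_RULES if cond in observed_conditions}

# Example input: conditions someone might log for a given season (illustrative).
observed = {"mid-latitude jets stay equatorward", "El Nino fades to La Nina"}
for condition, outcome in expected_outcomes(observed).items():
    print(f"{condition} -> expect: {outcome}")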

So I suggest that a degree of predictive skill is already apparent for my NCM.

Likely 21st Century climate trend:

There are 3 issues to be resolved for a judgement on this question.

i) We need to know whether the Modern Warm Period has peaked or not. It seems that the late 20th century peak has passed, but at a temperature lower than that seen during the Mediaeval Warm Period. Greenland is not yet as habitable as when the Vikings first colonised it. Furthermore, it is not yet 1000 years since the peak of the Mediaeval Warm Period, which lasted from about 950 to 1250 AD

http://www.theresilientearth.com/?q=content/medieval-warm-period-rediscovered

so I suspect that the Mediaeval warmth now emanating from the oceans may well warm the troposphere a little more during future years of warm oceanic oscillations. I would also expect CO2 levels to continue drifting up until some time after the Mediaeval Warm Period pulse of surface warming has begun its decline. That may still be some way off, perhaps a century or two.

ii) We need to know where we are in the solar cycles. The highest peak of solar activity in recorded history occurred during the late 20th century, but we don’t really know how active the sun became during the Mediaeval Warm Period. There are calculations from isotope proxies, but the accuracy of proxies has been in the doghouse since Climategate and the hockey stick farrago. However, the current solar quiescence suggests that the peak of recent solar activity is now over.

http://solarscience.msfc.nasa.gov/images/ssn_predict_l.gif

iii) Then we need to know where we stand in relation to the other shorter term cycles of sun and oceans.

Each varies on at least two other timescales. The level of solar activity varies during each cycle and over a run of cycles. The rate of energy release from the oceans varies from each El Nino to the following La Nina and back again over several years and the entire Pacific Decadal Oscillation alters the rate of energy release to the air every 25 to 30 years or so.

All those cycles vary in timing and intensity and interact with each other and are then superimposed on the longer term cycling that forms the basis of this article.

Then we have the chaotic variability of weather superimposed on the whole caboodle.

We simply do not have the data to resolve all those issues, so all I can do is hazard a guess based on my personal judgement. On that basis I think we will see cooling for a couple of decades due to the negative phase of the Pacific Decadal Oscillation which has just begun, then at least one more 20 to 30 year phase of natural warming, before we start the true decline as the cooler thermohaline waters from the Little Ice Age come back to the surface.

If a peak of active sun coincides with the worst of the cooling from the Little Ice Age coming through the oceanic system, that may be the start of a more rapid ending of the current interglacial, but that is 500 years hence, by which time we will either have solved our energy problems or have destroyed our civilisation.

Other climate theories:

Following the implosion of the CO2 based theory there are lots of other good ideas going around and much effort being expended by many individuals on different aspects of the climate system.

All I would suggest at the moment is that there is room in my NCM for any of those theories that demonstrate a specific climate response from sources other than sun and oceans.

All I contend is that sun and oceans, together with the variable speed of the hydrological cycle, assisted by the latitudinal movements of the air circulation systems and the vertical movement of the tropopause, overwhelmingly provide the background trend and combine to prevent changes in the air alone from changing the Earth’s equilibrium temperature.

For example:

Orbital changes feed into the insolation and albedo effects caused by moveable cloud masses.

Asteroid strikes and volcanoes feed into the atmospheric density issue.

Changing length of day and external gravitational forces feed into the speed of the thermohaline circulation.

Geothermal energy feeds into temperatures along the horizontal path of the thermohaline circulation.

Cosmic ray variations and ozone chemistry feed into the albedo changes.

The NCM can account for all past climate variability, can give general guidance as to future trends and can accommodate all manner of supplementary climate theories provided their real world influence can be demonstrated.

I humbly submit that all this is an improvement on existing modelling techniques and deserves fuller and more detailed consideration and investigation.

Novel propositions:

I think it helpful to set out here some of the novel propositions that I have had to formulate in order to obtain a climate description that complies both with observations and with the basic laws of physics. This list is not intended to be exhaustive; other new propositions may be apparent from the content and/or context of my various articles.

i) Earth’s temperature is determined primarily by the oceans and not by the air (the Hot Water Bottle Effect). The contribution of the greenhouse effect is minuscule.

ii) Changes in the air alone cannot affect the global equilibrium temperature because of oceanic dominance, which always seeks to maintain sea surface and surface air equilibrium whatever the air does. Warm air cannot significantly affect the oceans, because of the huge difference in thermal capacities and because evaporation removes unwanted energy into latent form as necessary to maintain that equilibrium.

iii) Counterintuitively an active sun means cooling not warming and vice versa.

iv) The net global oceanic rate of energy release to the air is what matters with regard to the oceanic effect on the latitudinal positions of the air circulation systems and the associated cloud bands. All the oceanic oscillations affecting the rates of energy release to the air operate on different timescales and different magnitudes as energy progresses through the system via surface currents (not the thermohaline circulation which is entirely separate).

v) More CO2 ought in theory to induce faster cooling of the oceans by increasing evaporation rates. Extra CO2 molecules simply send more infrared radiation back down to the surface, but infrared cannot penetrate deeper than the thin region of the ocean surface involved in evaporation, and since evaporation has a net cooling effect due to the removal of energy as latent heat, the net effect should be increased cooling, not warming, of the oceans.

vi) The latitudinal position of the air circulation systems at any given moment indicates the current tropospheric temperature trend, whether warming or cooling, and their movement reveals any change in trend.

vii) All the various climate phenomena in the troposphere serve to balance energy budget changes caused by atmospheric effects from solar turbulence changes on the air above which affect the rate of energy loss to space or from variable rates of energy release from the oceans below.

viii) The speed of the hydrological cycle globally is the main thermostat in the troposphere. Changes in its speed are achieved by latitudinal shifts in the air circulation systems and by changes in the height of the tropopause.

ix) The difference between ice ages and interglacials is a matter of the timing of solar and oceanic cycles. Interglacials only occur when the solar and oceanic cycles are offsetting one another to a sufficient degree to minimise the scale of climate variability thereby preventing winter snowfall on the northern continents from being sufficient to last through the following summer.

x) Landmass distribution dictates the relative lengths of glacials and interglacials. The predominance of landmasses in the northern hemisphere causes glaciations to predominate over interglacials by about 9 to 1, with a full cycle every 100,000 years, helped along by the orbital changes of the Milankovitch cycles that affect the pattern of insolation on those shifting cloud masses.

xi) Distribution of energy within the entire system is more significant for climate (which is limited to the troposphere) than the actual temperature of the entire Earth. The latter varies hardly at all.

xii) All regional climate changes are a result of movement in relation to the locally dominant air circulation systems which move cyclically poleward and equatorward.

xiii) Albedo changes are primarily a consequence of latitudinal movement of the clouds beyond normal seasonal variability.

xiv) The faint sun paradox is explained by the effectiveness of changes in the speed of the hydrological cycle. Only if the oceans freeze across their entire surfaces, thereby causing the hydrological cycle to cease, or if the sun puts in energy faster than it can be pumped upward by the hydrological cycle, will the basic temperature equilibrium derived from the properties of water and the density and pressure of the atmosphere fail to be maintained.

http://wattsupwiththat.com/2010/04/06/a-new-and-effective-climate-model/

Body-by-Guinness

  • Guest
Habitually Glossing Over Uncertainties
« Reply #343 on: June 10, 2010, 09:52:29 AM »
2nd post:

Wharton school blasts a hole in AGW

Clarice Feldman
The internet is buzzing with this report from Jason Scott Johnston of the University of Pennsylvania Law School for the ILE (Institute for Law and Economics), a joint research center of the Law School and the Wharton School.
The report is critical of the existing climate change "studies". It's a long report (110 pages) which you may freely download as a PDF. It's written clearly and its conclusions seem sound, most especially this one:

As things now stand, the advocates representing the establishment climate science story broadcast (usually with color diagrams) the predictions of climate models as if they were the results of experiments - actual evidence. Alongside these multi-colored multi-century model-simulated time series come stories, anecdotes, and photos - such as the iconic stranded polar bear - dramatically illustrating climate change today. On this rhetorical strategy, the models are to be taken on faith, and the stories and photos as evidence of the models' truth. Policy carrying potential costs in the trillions of dollars ought not to be based on stories and photos confirming faith in models, but rather on precise and replicable testing of the models' predictions against solid observational data.


http://www.americanthinker.com/blog/2010/06/wharton_school_blasts_a_hole_i.html

URL for the Wharton report: 

http://www.probeinternational.org/UPennCross.pdf

And the piece's abstract:

GLOBAL WARMING ADVOCACY SCIENCE: A CROSS EXAMINATION
By Jason Scott Johnston* Robert G. Fuller, Jr. Professor and Director, Program on Law, Environment and Economy University of Pennsylvania Law School
First Draft. September, 2008 This Revision. May, 2010
Abstract
Legal scholarship has come to accept as true the various pronouncements of the Intergovernmental Panel on Climate Change (IPCC) and other scientists who have been active in the movement for greenhouse gas (ghg) emission reductions to combat global warming. The only criticism that legal scholars have had of the story told by this group of activist scientists – what may be called the climate establishment – is that it is too conservative in not paying enough attention to possible catastrophic harm from potentially very high temperature increases.

This paper departs from such faith in the climate establishment by comparing the picture of climate science presented by the Intergovernmental Panel on Climate Change (IPCC) and other global warming scientist advocates with the peer-edited scientific literature on climate change.   A review of the peer-edited literature reveals a systematic tendency of the climate establishment to engage in a variety of stylized rhetorical techniques that seem to oversell what is actually known about climate change while concealing fundamental uncertainties and open questions regarding many of the key processes involved in climate change. Fundamental open questions include not only the size but the direction of feedback effects that are responsible for the bulk of the temperature increase predicted to result from atmospheric greenhouse gas increases: while climate models all presume that such feedback effects are on balance strongly positive, more and more peer-edited scientific papers seem to suggest that feedback effects may be small or even negative.   The cross-examination conducted in this paper reveals many additional areas where the peer-edited literature seems to conflict with the picture painted by establishment climate science, ranging from the magnitude of 20th century surface temperature increases and their relation to past temperatures; the possibility that inherent variability in the earth’s non-linear climate system, and not increases in CO2, may explain observed late 20th century warming; the ability of climate models to actually explain past temperatures; and, finally, substantial doubt about the methodological validity of models used to make highly publicized predictions of global warming impacts such as species loss.

Insofar as establishment climate science has glossed over and minimized such fundamental questions and uncertainties in climate science, it has created widespread misimpressions that have serious consequences for optimal policy design. Such misimpressions uniformly tend to support the case for rapid and costly decarbonization of the American economy, yet they characterize the work of even the most rigorous legal scholars. A more balanced and nuanced view of the existing state of climate science supports much more gradual and easily reversible policies regarding greenhouse gas emission reduction, and also urges a redirection in public funding of climate science away from the continued subsidization of refinements of computer models and toward increased spending on the development of standardized observational datasets against which existing climate models can be tested.

* I am grateful to Cary Coglianese for extensive conversations and comments on an early draft, and to the participants in the September, 2008 Penn Law Faculty Retreat for very helpful discussion about this project. Especially helpful comments from David Henderson, Julia Mahoney, Ross McKitrick, Richard Lindzen, and Roger Pielke, Sr. have allowed me to correct errors in earlier drafts, but it is important to stress that no one except myself has any responsibility for the views expressed herein.

Body-by-Guinness

  • Guest
Deep Ocean Currents & Atmosphere
« Reply #344 on: June 11, 2010, 03:47:10 PM »
Crafty,

Here's a link that speaks directly to the ocean convection questions you asked.

http://joannenova.com.au/2010/06/the-deep-oceans-drive-the-atmosphere/

Rarick

  • Guest
Re: Pathological Science
« Reply #345 on: June 12, 2010, 03:14:08 AM »
Ocean energy input increases but not all is returned to the air. A portion enters the thermohaline circulation to embark on a journey of 1000 to 1500 years. A pulse of slightly warmer water has entered the ocean circulation.


I remember that! Scientific for the conveyor I was talking about.

Body-by-Guinness

  • Guest
Browning the Green Revolution
« Reply #346 on: June 14, 2010, 07:25:13 AM »
Making Hay
The Supreme Court is set to weigh in on genetically modified crops.
 
This month, the Supreme Court will rule on its first-ever case involving genetically modified (GM) crops. It also prepares to welcome a new member who, as solicitor general, intervened on behalf of the controversial technology, angering many liberals.

The case revolves around alfalfa hay — a nutritious, easily digestible livestock feed that at $8 billion a year is the country’s fourth-most-valuable crop — and specifically, GM alfalfa seeds produced by the company Monsanto. These seeds, as part of the company’s Roundup Ready line, are genetically modified to tolerate glyphosate, an herbicide that is commercially known as Roundup. When farmers use Roundup instead of other chemicals to kill weeds, they actually cut down on overall chemical use.

After an exhaustive review, the USDA gave Roundup Ready Alfalfa the green light in 2005. But the Center for Food Safety, a group opposed to agricultural biotechnology, contended that the Department of Agriculture hadn’t adequately evaluated the potential environmental consequences. In 2007, in Monsanto Co. v. Geertson Seed Farms, a federal court agreed, prohibiting Monsanto from selling Roundup Ready Alfalfa pending another assessment.

A draft of that second evaluation, released last December, echoed the original findings. Solicitor General Elena Kagan filed a brief on the biotechnology company’s behalf, even though the government is not a defendant in the appeal.

The legal saga is unfolding on the heels of a controversial report by the National Research Council, the government’s official science advisers on agricultural genetics. In April, the scientists raised concerns about the possible emergence of so-called super weeds, but overall they strongly endorsed GM technology. The scientists detailed what they called its “long and impressive list of benefits,” including better weed control in conservation tillage and reduced erosion. With GM crops, farmers spend less on chemicals and avoid having to use carbon-belching tilling machines. The National Center for Food and Agricultural Policy estimates that GM corn seed reduces herbicide use by over 39 million pounds annually and saves farmers $250 million each year in weed-management costs.

The report encouraged governments to apply genetic engineering to a wider range of crops to help address a savage, persistent worldwide hunger crisis. It’s estimated that 12 million farmers are growing 282 million acres of GM crops — with increasing acreage in resource-poor developing countries. The authors also note that crops can be engineered to withstand harsh temperatures, providing food to areas that aren’t conducive to farming. Genetic modification also can increase nutrients in harvested crops — Vitamin A–enriched (“Golden”) rice, zinc-enhanced sorghum, and higher-protein potatoes already have been developed. 

Although GM crops face tough restrictions in Europe, which regulates under the precautionary principle, the U.S. has been less responsive to advocacy campaigns. Soybeans were the first Roundup Ready crop to hit the market, in 1996. Today, more than 80 percent of the corn, soybeans, and cotton grown in the U.S. is genetically modified.

In an attempt to slow the spread of GM technology, campaigners have zeroed in on GM alfalfa, stoking concerns that modified seeds could “contaminate” conventional and organic fields and damage the alfalfa market. In its two environmental assessments, the USDA downplayed the potential of gene drift because alfalfa hay is often cut before bloom, and is almost always cut before ripe seed is formed. There have been no recorded incidents of gene flow into organic alfalfa hay in five years, which has turned around some skeptics. “There are more safeguards in place,” says Drex Gauntt, president of the Washington State Hay Growers Association, which dropped its support of the suit.

“It’s not this ‘scourge of the earth’ from a scientific standpoint, as opponents of GM alfalfa would lead you to believe,” adds Washington farmer Bob Haberman, who has 205 acres of Roundup Ready Alfalfa. (Some 5,500 growers who began planting it across 200,000 acres are exempt from the court order.) “Applying technology to agriculture is what has made the United States the greatest agricultural country in the world.”

In the government’s supporting brief, Kagan argues that no serious problems have arisen, and that restrictions in place make it highly unlikely that any would occur. “She defended Monsanto’s fight to contaminate the environment with its GM alfalfa, not the American people’s right to safe feed and a protected environment,” huffed an article that anti-biotech activists widely disseminated on the Web.

The case will be decided before Kagan, if confirmed, dons her robes. But regardless of how the Supreme Court rules, the debate over GM technology may be back before the federal courts in short order. A coalition of liberal groups is attempting to block planting of GM sugar beets.

Although all sides anxiously await the Supreme Court’s ruling, the long-term fate of GM alfalfa, sugar beets, and other crops ultimately rests with the Department of Agriculture. Protesters claim to have flooded the agency with more than 200,000 angry letters since it released the impact draft report. Although its science panel publicly concluded the crop poses no danger to human health or the environment, the USDA is reviewing the comments and awaiting the judges’ decision before making a final determination.

— Jon Entine is a columnist for Ethical Corporation magazine and a visiting fellow at the American Enterprise Institute in Washington, D.C. His book Crop Chemophobia: Will Precaution Kill the Green Revolution? (AEI Press) will be published this fall.

http://article.nationalreview.com/436176/making-hay/jon-entine

Body-by-Guinness

  • Guest
Phony Consensus
« Reply #347 on: June 14, 2010, 04:56:30 PM »
Second post.

The IPCC consensus on climate change was phoney, says IPCC insider

Lawrence Solomon  June 13, 2010 – 8:50 am

The UN’s Intergovernmental Panel on Climate Change misled the press and public into believing that thousands of scientists backed its claims on manmade global warming, according to Mike Hulme, a prominent climate scientist and IPCC insider.  The actual number of scientists who backed that claim was “only a few dozen experts,” he states in a paper for Progress in Physical Geography, co-authored with student Martin Mahony.

“Claims such as ‘2,500 of the world’s leading scientists have reached a consensus that human activities are having a significant influence on the climate’ are disingenuous,” the paper states unambiguously, adding that they rendered “the IPCC vulnerable to outside criticism.”

Hulme, Professor of Climate Change in the School of Environmental Sciences at the University of East Anglia –  the university of Climategate fame — is the founding Director of the Tyndall Centre for Climate Change Research and one of the UK’s most prominent climate scientists. Among his many roles in the climate change establishment, Hulme was the IPCC’s co-ordinating Lead Author for its chapter on ‘Climate scenario development’ for its Third Assessment Report and a contributing author of several other chapters.

Hulme’s depiction of IPCC’s exaggeration of the number of scientists who backed its claim about man-made climate change can be found on pages 10 and 11 of his paper, found here.

Financial Post
LawrenceSolomon@nextcity.com
Lawrence Solomon is executive director of Energy Probe and the author of The Deniers.


Read more: http://fullcomment.nationalpost.com/2010/06/13/the-ipcc-consensus-on-climate-change-was-phoney-says-ipcc-insider/#ixzz0qsIAhBw5

Rarick

  • Guest
Re: Pathological Science
« Reply #348 on: June 15, 2010, 03:39:31 AM »
So the various working groups involved in the IPCC report may never have reached a conclusion, but "this may be a possibility worth investigating" was "exaggerated" to make political hay............. Love it :evil:

Freki

  • Power User
Re: Pathological Science
« Reply #349 on: June 25, 2010, 06:39:17 PM »
I was not sure where to put this.  The point it makes for me is how far the European mind has gone down the nanny state path.  This is just NUTS!!!!!

Italian scientists who failed to predict L'Aquila earthquake may face manslaughter charges
June 24, 2010 by Lisa Zyga

(PhysOrg.com) -- Six of Italy's top seismologists are being investigated for manslaughter for not warning the city of L'Aquila about an earthquake that struck on April 6, 2009. The magnitude-6.3 earthquake caused 308 deaths and 1600 injuries, and left more than 65,000 people homeless.

 
The L’Aquila public prosecutor’s office issued the indictments on June 3, a step that usually precedes a request for a court trial. The investigation originated when about 30 L’Aquila citizens registered an official complaint that the scientists had failed to recognize the danger of the earthquake during the days and weeks in advance.
In the six months leading up to the earthquake, a series of smaller seismic movements and tremors were recorded nearby, including a magnitude-4.0 earthquake on March 30. On March 31, six days before the large earthquake struck, Italy’s Civil Protection Agency held a meeting with the Major Risks Committee - composed of the six scientists - to assess the risk of a major earthquake. At that time, the committee concluded that there was "no reason to suppose a sequence of small earthquakes could be the prelude to a strong event" and that “a major earthquake in the area is unlikely but cannot be ruled out."
At a press conference after the meeting, government official Bernardo De Bernardinis, deputy technical head of the Civil Protection Agency, told reporters that "the scientific community tells us there is no danger, because there is an ongoing discharge of energy. The situation looks favorable.” In addition to the six scientists, De Bernardinis is also under investigation.
According to the group of local citizens, many of the earthquake’s victims had been planning to leave their homes, but had changed their minds after the committee’s statements.
"Those responsible are people who should have given different answers to the public,” said Alfredo Rossini, L'Aquila's public prosecutor. “We're not talking about the lack of an alarm, the alarm came with the movements of the ground. We're talking about the lack of advice telling people to leave their homes."
Minutes from the March 31 meeting show that the scientists recommended that buildings in the area should be monitored to assess their ability to handle a major shock.
Although the scientists are unable to comment due to the investigation, an article in Nature News reported that one of the scientists, Enzo Boschi, president of the National Institute for Geophysics and Vulcanology (INGV) in Rome, wrote in a letter last September that the meeting was too short and that he had not been informed about the following press conference. Only one of the seismologists from the committee, Franco Barberi, a volcanologist at the University of Roma Tre, was at the press conference.

 
Susan Hough, a geophysicist at the US Geological Survey in Pasadena, California, who is not involved in the investigation, also disagrees with some of the remarks from the press conference. "The idea that minor earthquakes release energy and thus make things better is a common misperception,” she said. “But seismologists know it's not true. I doubt any scientist could have said that."
The article in Nature News lists the six scientists and officials under investigation for manslaughter as Boschi; Barberi; Giulio Selvaggi, director of the National Earthquake Center based at INGV; Claudio Eva, a professor of earth physics at the University of Genoa; Mauro Dolce, head of the seismic risk office in the Civil Protection Agency; and Gian Michele Calvi, director of the European Centre for Training and Research in Earthquake Engineering in Pavia.
Coming to the defense of the seismologists, nearly 4,000 scientists from around the world have signed a letter to Italy's president, Giorgio Napolitano, urging him to focus on earthquake preparation rather than holding scientists responsible for something that they cannot do - predict earthquakes.
"The proven and effective way of protecting populations is by enforcing strict building codes," said Barry Parsons of the University of Oxford, who signed the letter. "Scientists are often asked the wrong question, which is 'when will the next earthquake hit?' The right question is 'how do we make sure it won't kill so many people when it hits?'"
More information via Nature News and The Independent
© 2010 PhysOrg.com