Author Topic: Evolutionary biology/psychology  (Read 132668 times)

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
Re: Evolutionary biology/psychology
« Reply #50 on: December 15, 2008, 12:35:35 AM »
In contrast to the notions of this article are the writings of the Austrian ethologist (a student of animal behavior) and Nobel Laureate Konrad Lorenz, who, like the psychologist Carl Jung and the Sandhurst military historian John Keegan (A History of Warfare), points out that over time war has become ever more efficient in its brutality.  Working from memory: the death toll of the American Civil War exceeded anything that went before, yet was itself exceeded by the trench warfare of WW1, then by the 20 million or so killed by Stalin and the tens of millions killed by Mao, then WW2 (including the use of nuclear weapons, etc.)

Against the long-term trend, it is risky to see the last few decades as a historical turning point.  It could be, but there's plenty to suggest otherwise as well.

rachelg

  • Guest
Re: Evolutionary biology/psychology
« Reply #51 on: December 17, 2008, 06:17:36 PM »
"Against the long-term trend, it is risky to see the last few decades as a historical turning point.  It could be, but there's plenty to suggest otherwise as well."


It may certainly be too soon to tell if the last few decades are a turning point, but the statistics are currently not in favor of feminism/working mothers causing rising violence and the breakdown of society.


I'm a little "overbooked" on economic stuff lately, but I will add Wright's book to my to-be-read list.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
Re: Evolutionary biology/psychology
« Reply #52 on: December 17, 2008, 11:46:29 PM »
"It may certainly be soon to tell if the last few decades  are a turning point but the statistics are currently not in favor  of feminism/working mothers causing  rising violence and the breakdown of society."

Delivered with panache and wit :lol: but I still insist upon the point that mothers matter, and when they disappear from their children's lives the consequences are profound.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
Lionel Tiger: Monkeys and Utopia
« Reply #53 on: December 26, 2008, 11:52:53 PM »
By LIONEL TIGER
Reveries about human perfection do not exist solely in the enthusiastic systems confected by Karl Marx, or in the REM sleep of Hugo Chávez, or through the utopian certainties of millenarians. There has been a persistent belief through countless societies that life is better, much better, somewhere else. In some yet-unfound reality there is an expression of our best natures -- our loving, peaceful, lyrically fair human core.

Anthropologists have been at the center of this quest, its practitioners sailing off to find that elusive core of perfection everywhere else corrupted by civilization. In the 1920s, Margaret Mead found it in Samoa, where the people, she said, enjoyed untroubled lives. Adolescents in particular were not bothered by the sexual hang-ups that plague our repressive society. Decades later an Australian researcher, Derek Freeman, retraced her work and successfully challenged its validity. Still, Mead's work and that of others reinforced the notion that our way of life was artificial, inauthentic, just plain wrong.

Enter primatology, which provided yet more questions about essential hominid nature -- and from which species we could, perhaps, derive guidance about our inner core. First studied in the wild were the baboons, which turned out to have harsh power politics and sexual inequity. Then Jane Goodall brought back heartwarming film of African chimps who were loving, loyal, fine mothers, with none of the militarism of the big bad baboons. But her subjects were well fed, and didn't need to scratch for a living in their traditional way. Later it became clear that chimps in fact formed hunting posses. They tore baby baboons they captured limb from limb, and seemed to enjoy it.

Where to look now for that perfect, pacifistic and egalitarian core? Frans de Waal, a talented and genial primatologist, observed the behavior of bonobos at Emory University's primate lab in the 1980s. These chimpanzees, he found, engaged in a dramatic amount of sexual activity both genital and oral, heterosexual and homosexual -- and when conflicts threatened to arise, a bout of sex settled the score and life went on. Bonobos made love, not war. No hunting, killing, male dominance, or threats to the sunny paradise of a species so closely related to us. His research attracted enormous attention outside anthropology. Why not? How can this lifestyle not be attractive to those of us struggling on a committee, in a marriage, and seeking lubricious resolution?

Alas, Mr. de Waal also hadn't studied his species in the wild. And now, to disappointed shock in some quarters, bonobos have for the past five years been studied in their natural habitat in a national park in the Congo.

There, along with colleagues, Gottfried Hohmann of the Max Planck Institute for Evolutionary Anthropology in Leipzig has seen groups of bonobos engage in clearly willful and challenging hunts. Indeed, female bonobos took full part in the ten or so organized hunts that have been observed thus far. Another paradise lost.

Reveries about hidden human perfection centered in primate life have been sharply curtailed by what we've learned about the Malibu ape -- when it seeks its own food, doesn't live in an easy-hook-up dormitory, and may confront severe challenges in life.

Bonobo, we hardly know you.

Mr. Tiger is the Charles Darwin professor of anthropology at Rutgers University.

Body-by-Guinness

  • Guest
Better Recipes Cause Progress, I
« Reply #54 on: January 07, 2009, 01:18:37 PM »
Interesting interview, one that causes me to reflect on the dark age that panic-mongers would impose upon the world.

'Chiefs, Thieves, and Priests'
Science writer Matt Ridley on the causes of poverty and prosperity

Ronald Bailey | February 2009 Print Edition

Matt Ridley, an Oxford-educated zoologist, turned to journalism in 1983, when he got a job as The Economist’s science reporter. He soon became the magazine’s Washington correspondent and eventually served as its American editor. This time in the United States had a profound intellectual effect on Ridley, ultimately leading him to become a self-described classical liberal, a “person who believes in economic freedom and social freedom, too.”

Ridley, 50, has written several superb books that combine clear explanations of complex biology with discussions of the science’s implications for human society. In The Origins of Virtue: Human Instincts and the Evolution of Cooperation (1997), Ridley showed how natural selection led to human morality, including the development of property rights and our propensity to exchange. At the end he warned that government can subvert our natural tendency to cooperate. “We are not so nasty that we need to be tamed by intrusive government, nor so nice that too much government does not bring out the worst in us,” he concluded. Reviewing the book for reason, the UCLA economist Jack Hirshleifer noted that “Ridley leans in the anarchist direction.”

Written just before researchers announced the completed sequencing of the human genome, Ridley’s Genome: The Autobiography of a Species in 23 Chapters (2000) toured our 23 chromosome pairs to illustrate how genes cause disease, direct the production of proteins, and influence intelligence. While pointing out the differential heritability of many human characteristics, Ridley condemned genetic determinism and eugenics as unscientific. “Many modern accounts of the history of eugenics present it as an example of the dangers of letting science, genetics especially, out of control,” he wrote. “It is much more an example of the danger of letting government out of control.” Ridley further deflated genetic determinism in Nature via Nurture: Genes, Experience, and What Makes Us Human (2003), which explained how genes change their expression in response to environmental influences.

Ridley is now working on a book about how and why progress happens. During a visit to Blagdon Hall, Ridley’s home outside Newcastle upon Tyne, I took advantage of the author’s weakened state (he had broken his collarbone falling from a horse) to talk about the new book.

reason: What’s the book about?

Matt Ridley: My last three or four books have all argued that there is such a thing as an evolved human nature which is true all over the world and has been true throughout history. But something changes. Clearly, my life is completely different from what it would’ve been if I was an Ice Age hunter-gatherer. Technology changes. Society changes. Prosperity changes.

What I want to do is turn the question on its head and come at it from the point of view of an evolutionary biologist who looks at this species—man—which has a constant nature but has somehow acquired an ever-changing lifestyle. I want to understand what’s driving that change. Let’s give it the obvious word, even though it’s a very unfashionable one: progress. The book is about where progress came from, how it works, and, most important, how long it can continue in the future.

My major themes are specialization, exchange, technology, energy, and then population. Human beings have progressed in material living standards, on the whole, since the Stone Age, but they’ve also progressed enormously in terms of the number of people on the planet. That’s because we got better at turning the energy available into people, and the denser the population has got, the more things we’ve been able to invent that we wouldn’t have been able to invent with a sparse population. For example, if you’re going to smelt metals, you need a fairly dense population of customers before it’s worth building kilns.

Population density can also lead to reductions in the standard of living. There must be cases in history where people have tried to live at too high a density for the resources that were available to them. They’ve either then suffered one of Malthus’ positive checks—war, famine, and disease—or, and this is a slightly more original point, they’ve reduced their division of labor, i.e., they’ve returned to self-sufficiency.

If you look at the Bronze Age empires in Mesopotamia or Egypt, or the Roman Empire, or some of the Chinese dynasties, at a certain point the population density gets too high for people to be able to generate a surplus of consumption income to support trade and specialization by others, and you have to go back to being self-sufficient. Essentially that’s what happened to every surge in productivity, wealth, and technology up to the one that came around 1800, the Industrial Revolution.

At some point there’s something you’re relying on that gets more and more expensive. If you look at Mesopotamia, it deforested itself. It has to go further and further for wood, for construction. Maybe it’s food.

The English Industrial Revolution had been bubbling along very nicely in the 18th century, with fantastic increases of productivity, particularly with respect to cotton textiles. We saw a quintupling of cotton cloth output in two consecutive decades, in the 1780s and 1790s, none of it based on fossil fuels yet but based on water power.

At some point, you run out of dams. You run out of rivers in Lancashire to dam. At some point England would suffer the fate of Holland, or Venice before that, or of China, Egypt, or Japan. What did England do that others didn’t? It started using fossil fuels.

By 1870 Britain is consuming the coal equivalent to 850 million human laborers. It could have done everything it did with coal with trees, with timber, but not from its own land. Timber was bound to get more expensive the more you used of it. Coal didn’t get more expensive the more you used of it. It didn’t get particularly cheaper either, but it didn’t get more expensive, so you don’t get diminishing returns the more you use of it.

reason: One of the things that Marco Polo reported to the amazement of Europe was that those Chinese people are burning rocks. So the Chinese had access to coal already, and that extra energy didn’t make them wealthy.

Ridley: That’s right. [University of California at Irvine historian] Kenneth Pomeranz’s answer to that is very straightforward: The coal was in Shanxi and Inner Mongolia, in the far northwest. Those areas got hit very soon after Marco Polo was there by a peculiar combination of barbarians and plague. They were hit much harder than the rest of China and were totally depopulated. When China revived as an economy, it was a long way away from the coal, so it had a wood-based iron industry, for example on the Yangtze, which was impossibly far from the coal mines in the far northwest.

The north of England happened to have a coal field that was near the surface and near navigable water. Remember, you cannot transport anything in bulk in the 18th century unless it’s within a very short distance of water. It happened to have a huge demand on its doorstep too.

The fossil fuel industry itself did not get much more efficient. A miner in the early 20th century is using a pony and a lamp and a pickax like he was in the 18th century, and the product he’s producing is not a lot cheaper. But it’s not more expensive, and it’s hugely larger in volume.

reason: What institutional environment favors progress?

Ridley: It’s very clear from history that markets bring forth innovation. If you’ve got free and fair exchange with decent property rights and a sufficiently dense population, then you get innovation. That’s what happens in west Asia around 50,000 years ago: the Upper Paleolithic Revolution.

The only institution that really counts is trust, if you like. And something’s got to allow that to build. Property rights are just another expression of trust, aren’t they? I trust you to deliver this property to me. I trust somebody else to allow me to keep this property if I acquire it from you.

But human beings are spectacularly good at destroying trust-generating institutions. They do this through three creatures: chiefs, thieves, and priests.

Chiefs think, “I’m in charge, I own everything, I’m taking over, I’m going to tell everyone how to do it, and I’m going to confiscate property whenever I feel like it.” That’s what happens again and again in the Bronze Age. You get a perfectly good trading diaspora and somebody turns it into an empire.

A classic example is the Chinese retreat in the 1400s, 1500s. China got rich and technologically sophisticated around the year 1000 A.D. That’s when it’s working at its best.

Interestingly, it’s just come out of a period when it’s not unified. Once you’re unified, people keep imposing monopolies and saying there’s only one way of doing things and you’ve got to do it this way. Whereas when you’re fragmented, as Europe remained throughout this period, people can move from one polity to another until they find one they like.

If you want a recipe for how to shut down an economy, just read what the early Ming emperors did. They nationalized foreign trade. They forbade population movements within the country, so villagers weren’t allowed to migrate to towns. They forbade merchants from trading on their own account without specific permission to do specific things. You had to actually register your inventories with the imperial bureaucrats every month, that kind of thing. And they did the usual idiotic thing of building walls, invading Vietnam.

Thieves—one of the reasons for the growth of the Arab civilization in the seventh and eighth centuries must be the fact that the Red Sea was increasingly infested with pirates. It became increasingly difficult to trade with India. Byzantium was having a real problem doing it, and the Arabs had come up with a great new technology for crossing the desert called the camel train. So the rule of law to prevent thievery is also important; but the rule of too much law, to allow chiefs to take everything, is equally a risk.

Priests—well, I must admit I don’t think one can necessarily blame religion for shutting down trust, trade, and exchange. But there’s little doubt that it didn’t help in the Middle Ages, surely. I won’t go further than that.

reason: They did try to adjust prices in the marketplace. Whether that actually had an effect I don’t know.

Ridley: Usury laws and that sort of thing. That’s exactly right.

reason: So periods of rising productivity are choked off by institutional barriers. You get an over-elaboration of rules and regulations and taxation. And that’s what killed them off, not lack of fuel or lack of ingenuity, but governance that just got so bad that it stopped it. Is that plausible?

Ridley: I think that’s a big part of it. How does that fit with my story that it shuts down because of a Malthusian thing or diminishing returns on sources of energy? Do they go together, or does one explain one collapse and another explain another? I don’t know. The problem with history is it tends to be overdetermined. You’ve got lots of different things happening at once.

If we were having this conversation in 1800, I think I would have very good reason for telling you that however wonderful the prosperity you can generate by elaborating division of labor and specialization and exchange, you’re never going to be able to escape this trap that living standards don’t seem to be able to go up faster than the population. But we’re not having this conversation in 1800, and we’ve had 200 years in which we’ve shown that you can actually have a dramatic transformation of living standards in a very large portion of the world purely by elaborating the division of labor, as long as you’ve got energy amplification in there too.


Body-by-Guinness

  • Guest
Better Recipes Cause Progress, II
« Reply #55 on: January 07, 2009, 01:19:08 PM »
reason: [Yale economist] William Nordhaus would say that at least 50 percent of economic growth in the 20th century is because we’re using better recipes, which is better technology.

Ridley: Absolutely. The compact fluorescent light bulb is a better recipe than the filament light bulb, which was better than the kerosene lamp, which was better than the tallow candle. If I overemphasized energy, maybe it’s just because I’ve been recently reading and writing on that subject. The proximate cause of our prosperity is technology. I quite agree.

The ultimate cause of technology is division of labor, though. The man who made a mango slicing machine in 1800 would have been lucky to sell 20, because he only had access to his village. Now he can have access through the Internet to the world, so it pays him to make something as specialized as a mango slicing device. And that makes living standards rise. My standard of living has risen because a man has made a mango slicing device that I really can use.

But I also need an awful lot of watts to run my lifestyle: to turn on the lights, to drive the machine that made my mango slicing device, to provide me with the transport that I deem necessary to make my life interesting, but in particular, to drive those container ships that are bringing my mango slicing devices from Korea.

The fact that I can now earn an hour of reading light with half a second of work, if I’m on the average American wage, whereas it took eight seconds in the 1950s, releases me to go and spend another seven and a half seconds consuming some other kind of energy, like driving my power boat across a lake where I have a recreation home which I’ve driven to in my 4x4, or even just deciding to leave the light on all night so that my daughter doesn’t have to worry about being left in the dark.

reason: Flipping this around a little bit, what’s the cause of poverty in the modern world?

Ridley: I think lack of access to networks of exchange and specialization is the principal cause of poverty. If you find yourself in a position where you make everything yourself rather than buy it from someone else, then you are by definition poor.

Now, I buy the argument that it is possible to be poorer in the modern world than it was a couple of hundred years ago because the diseases that would’ve killed you a couple of hundred years ago can be prevented. It is conceivable that some people in Africa are living at a lower standard of living than anyone was 200 years ago.

reason: Of course, living might be considered a higher standard of living than dying.

Ridley: Well, exactly. To get hopeful, is Africa really that different from South Asia in the ’60s and ’70s? The standard of living is rising in most of Africa. There are parts where it’s not—in Congo it’s not, but in Kenya and Ghana it is. They’re not great, these countries, but they’re not regressing. The health outcomes are improving pretty dramatically, child mortality in particular. Fertility is falling, as it does after child mortality has started falling.

And you also have got the beginnings of an explosion of entrepreneurship that will allow them to leapfrog onto new technologies that were not available. The lack of decent telephone networks means that they’re going straight into a mobile world. Mobile telephones are amazingly ubiquitous in Africa, even among people who are not particularly well off, often in a form of shared ownership. Just look at the effect that that’s had on Kenyan farmers finding markets for their produce. They call ahead and find where the best prices are and send their produce there.

reason: I was at a Cato Institute function where the British development economist Peter Bauer was giving a lecture, and I had a really smart-ass question: Isn’t the problem with a lot of poor countries, Africa in particular, that there’s corruption and we have to get rid of corruption? And he leaned back on the podium and smiled and shook his head, no. And he said when the United States and Britain were developing in the 19th century, their governments were as corrupt as anything you’d find in Africa, but the governments in Britain and the United States had control of 1 percent or 2 percent of the economy when those countries were growing. In many African countries, the government controls over 60 percent of the economy. That’s the difference.

Ridley: Very nice point. I find myself completely surrounded by pessimists, people who think that Africa is never going to get rich, that it’s deteriorating rather than improving, that living standards are about to get worse. And they’re not convinced they have been getting better in the last few years because things like congestion at airports have gotten worse. There’s a tremendous tendency to take improvements for granted and to notice deteriorations.

There are a lot of people who think, “Ah, we are in a uniquely dangerous situation in my generation. Back in my parents’ generation, they looked forward to the future with confidence and happiness.” That ain’t true either. If you go back and look at every generation, it was dominated by pessimists. There is a wonderful quote from Lord Macaulay in 1830, who asks why it is that, with nothing but improvement behind us, we anticipate nothing but disaster before us.

What the precautionary principle [the idea that when science has not yet determined whether a new product or process is safe, the government should prohibit or restrict its use] misses is the danger that in not progressing you might miss out on future improvements in living standards for poor people in Africa. I’m desperately hoping to persuade the world, not that everything’s going to be fine, but that there’s a chance everything’s going to be better for everybody and that we should be very careful not to cut ourselves off from that chance.

reason: How would you describe your politics?

Ridley: I’m a good old-fashioned 19th-century liberal. I love progress, and I love change. What makes what I’ve just said seem right-wing, particularly in Europe, is that it seems to be more concerned with wealth creation than social justice, i.e., with baking another cake rather than cutting up the existing cake. Actually, to some extent, I am an egalitarian. I think that there are ways in which you have to keep equal opportunities in life in order to generate the incentives for people to generate wealth. But I think I’m that classically underrepresented voter, the person who believes in economic freedom and social freedom, too.

I lived in America for three years, which is not a long time, but it was a very influential time for me. I arrived there a pretty standard statist in my views of the world and left a—not a completely convinced libertarian but a person who had suddenly started thinking about politics from the individual’s point of view much more than I had before. Meeting Julian Simon and Aaron Wildavsky and people from the Property and Environment Research Center and George Mason University had an influence on me. I encountered a view that’s hard to come across in Europe.

The fall of the Berlin Wall was also a very important moment in my life. It told me that all those people who said that the Soviet Union was actually a much better place than it was made out to be, and I’d come across tons of them in my life, were plain wrong, not just a little bit wrong.

I recall one conversation I had around 1985. A singer who is now a famous labor activist and a highly respected elder statesman, Billy Bragg—I happened to sit next to him on an airplane. He had just come back from playing East Berlin. He was perfectly friendly, but he spent most of that plane ride trying to persuade me that East Germans were much happier than West Germans and that it was complete bollocks, this propaganda from the West that they were unhappy. And he’s hugely respected still as a Labour Party grandee.

reason: What would you say to people who say that progress is simply unsustainable, that the Africans and the Indians and the Chinese will never be able to live at the same living standards as we do?

Ridley: I’d respond to that by saying that in a sense they’re absolutely right. If we go on as we are, it’ll be very difficult to sustain things. But we won’t go on as we are. That’s what we never do. We always change what we do and we always get much more efficient at using things—energy, resources, etc.

Just take land area for feeding the world. If we’d gone on as we were, as hunter-gatherers, we’d have needed about 85 Earths to feed 6 billion people. If we’d gone on as early slash-and-burn farmers, we’d have needed a whole Earth, including all the oceans. If we’d gone on as 1950 organic farmers without a lot of fertilizer, we’d have needed 82 percent of the world’s land area for cultivation, as opposed to the 38 percent that we farm at the moment.

Sure, if every office in China uses as much paper as every office does in America now and there’re just as many of them, then we’re going to run out of trees to chop down to make the paper. Well, I’m willing to bet that we’ll have found ways of recycling paper or making paper from less material or not using so much paper. It might take paper getting expensive before that happens.

Ronald Bailey is reason’s science correspondent.

http://www.reason.com/news/show/130848.html


Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
Re: Evolutionary biology/psychology
« Reply #56 on: January 07, 2009, 07:01:26 PM »
I've read Ridley's book "The Red Queen" and recommend it highly, and I have another of his books in my ever-growing "to read" pile.  Thanks for the nice find.

Body-by-Guinness

  • Guest
Irrational Intelligence
« Reply #57 on: January 30, 2009, 08:42:19 AM »
Hmm, think I've seen some of that manifested around here.

From the issue dated January 30, 2009
NOTA BENE
Irrational Intelligence; Get Smarter



By KACIE GLENN

Ever bought a 12-foot Christmas tree for a 10-foot-high apartment? Picked up a hitchhiker in a nasty part of town? Or, perhaps, taken out a mortgage you couldn't afford? The good news is that poor decision-making skills may have little effect on your IQ score, according to Keith E. Stanovich, author of What Intelligence Tests Miss: The Psychology of Rational Thought (Yale University Press). The bad news? He thinks you'd lose a few points on a more-accurate gauge of intelligence.

Stanovich, an adjunct professor of human development and applied psychology at the University of Toronto, believes that the concept of intelligence, as measured by IQ tests, fails to capture key aspects of mental ability. But that doesn't mean he discounts the tests' credibility: "Readers might well expect me to say that IQ tests do not measure anything important, or that there are many kinds of intelligence, or that all people are intelligent in their own way," he writes. After all, theories about emotional and social intelligence — which weigh interpersonal skills, the ability to empathize, and other "supracognitive" characteristics — have gained popularity in recent years, in part by de-emphasizing the importance of IQ.

Instead, Stanovich suggests that IQ tests focus on valuable qualities and capacities that are highly relevant to our daily lives. But he believes the tests would be far more effective if they took into account not only mental "brightness" but also rationality — including such abilities as "judicious decision making, efficient behavioral regulation, sensible goal prioritization ... [and] the proper calibration of evidence."

Our understanding of intelligence, he writes, has been muddled by the discrepancy between the vague, comprehensive vernacular term, which encompasses all the functions and manifestations of "smarts," and the narrower theories that "confine the concept of intelligence to the set of mental abilities actually tested on extant IQ tests." The latter conceptualization allows intelligence to coexist with foolishness because IQ tests do not measure the rationality required to abstain from dumb decisions, according to the author. Casual observers, however, usually define intelligence broadly and are confused by inconsistencies: "Blatantly irrational acts committed by people of obvious intelligence ... shock and surprise us and call out for explanation."

The author notes that because most people — even educators and psychologists — accept test-defined intelligence as a fair assessment of mental faculties, we tend to dismiss inconsistencies between a person's IQ scores and rationality as indicators of a disorder or learning disability. So persistent is that faulty logic that "we are almost obligated to create a new disability category when an important skill domain is found to be somewhat dissociated from intelligence." As long as we continue to worship IQ tests that do not assess rational thought processes, we will continue to misjudge our own and others' cognitive abilities, warns the scholar.

In an earlier work, Stanovich coined his own term — dysrationalia — for "the inability to think and behave rationally despite adequate intelligence." That "disorder," he suggests, might afflict some of the smartest people you know.

***

In an age of Baby Einstein DVDs and French lessons for 5-year-olds, it may seem passé to suggest that a child's IQ is determined primarily by genetics. But until recently, writes Richard E. Nisbett in Intelligence and How to Get It: Why Schools and Cultures Count (Norton), most scientists who studied intelligence believed "that the overwhelming importance of heritability meant that the environment could do little and that social programs intended to improve intelligence were doomed to failure." Nisbett argues that a variety of social, cultural, and economic factors can significantly affect a child's IQ, and suggests ways to improve intelligence scores, as well as grades, by manipulating those factors.

Often-cited studies have shown that the difference in IQ between identical twins raised apart is only slightly greater than the difference between twins raised together, whereas the correlation between the intelligence scores of a parent who adopts a child and that child is slim. Yet, Nisbett reminds us, even separated twins are likelier to grow up under similar economic and social conditions than two people chosen at random, and they might even be treated similarly because of shared looks and other characteristics in common. At the same time, most adoptive families are well-off and nurturing. The consistency of those environmental factors makes their impact on a child's intelligence seem smaller than it really is.

Opinions have changed over the last few years, and many scientists would now agree, "If you were to average the contribution of genetics to IQ over different social classes, you would probably find 50 percent to be the maximum contribution of genetics," says Nisbett, a professor of psychology at the University of Michigan at Ann Arbor. Class is a crucial determinant of intelligence; adoption studies, for example, have indicated that "raising someone in an upper-middle-class environment versus a lower-class environment is worth 12 to 18 points of IQ — a truly massive effect," he says. Children of middle-class parents are read to, spoken to, and encouraged more than children of working-class parents, all experiences that influence intellectual development.

Intelligence and How to Get It also examines how better schooling boosts IQ scores and how school systems can improve. Nisbett cautions that more money does not always equate to higher-quality education, and that parents who take advantage of vouchers to move their children to better schools are a self-selecting group of people who are motivated to help their children excel academically, which leads some researchers to overestimate the vouchers' effectiveness. On the other hand, he finds that class size and teachers' experience and skills can make a big difference, especially for poor and minority children. He notes, too, that children who are exposed to "instructional technologies" in the classroom benefit intellectually; working with word-processing programs, for example, can help students learn to read faster, which leads to further advantages.

The psychologist maintains that there are myriad ways to enhance a child's intelligence by changing his or her learning environment. Young kids who emulate their parents' self-control go on to achieve better grades and higher SAT scores than those who don't. They also learn better, and therefore are more successful in school and have a higher IQ, when they are praised for working hard but not offered incentives to do activities they already show interest in: The danger is turning play and learning into work. It couldn't hurt to angle for access to the best schools and most-experienced teachers, either, Nisbett suggests.

"Intellectual capital" — which more fully captures academic potential than IQ, he says — "is the result of stimulation and support for exploration and achievement in the home, the neighborhood, and the schools." Nurturing young people's minds might not override their DNA, the author admits, but it does help them achieve their intellectual potential.

http://chronicle.com
Section: The Chronicle Review
Volume 55, Issue 21, Page B18

http://chronicle.com/temp/reprint.php?id=6pfm8ytzbg1p8n5p2vl4rrcmwvckp31x

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
Re: Evolutionary biology/psychology
« Reply #58 on: February 10, 2009, 04:04:03 PM »
James Q. Wilson
The DNA of Politics
Genes shape our beliefs, our values, and even our votes.


[Photo caption: Studies of identical twins, like Polish president Lech Kaczyński, right, and former prime minister Jaroslaw, show that 40 percent of our political views have a genetic component.]

Children differ, as any parent of two or more knows. Some babies sleep through the night, others are always awake; some are calm, others are fussy; some walk at an early age, others after a long wait. Scientists have proved that genes are responsible for these early differences. But people assume that as children get older and spend more time under their parents’ influence, the effect of genes declines. They are wrong.

For a century or more, we have understood that intelligence is largely inherited, though even today some mistakenly rail against the idea and say that nurture, not nature, is all. Now we know that much of our personality, too, is inherited and that many social attitudes have some degree of genetic basis, including our involvement in crime and some psychiatric illnesses. Some things do result entirely from environmental influences, such as whether you follow the Red Sox or the Yankees (though I suspect that Yankee fans have a genetic defect). But beyond routine tastes, almost everything has some genetic basis. And that includes politics.

When scholars say that a trait is “inherited,” they don’t mean that they can tell what role nature and nurture have played in any given individual. Rather, they mean that in a population—say, a group of adults or children—genes explain a lot of the differences among individuals.

There are two common ways of reaching this conclusion. One is to compare adopted children’s traits with those of their biological parents, on the one hand, and with those of their adoptive parents, on the other. If a closer correlation exists with the biological parents’ traits, then we say that the trait is to that degree inherited.

The other method is to compare identical twins’ similarity, with respect to some trait, with the similarity of fraternal twins, or even of two ordinary siblings. Identical twins are genetic duplicates, while fraternal twins share only about half their genes and are no more genetically alike than ordinary siblings are. If identical twins are more alike than fraternal twins, therefore, we conclude that the trait under consideration is to some degree inherited.
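
(A quantitative aside: the twin comparison reduces to simple arithmetic. The sketch below uses the classical Falconer approximation; the factor of two and the correlation values are illustrative assumptions, not figures from the studies discussed here.)

```python
# Falconer's approximation: identical (MZ) twins share essentially all
# their genes, fraternal (DZ) twins about half, so doubling the gap
# between the two similarity correlations estimates the share of the
# trait's variance attributable to genes.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Broad heritability estimate: H^2 = 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

# Illustrative (hypothetical) twin correlations on some attitude scale:
r_mz = 0.62  # agreement between identical twins
r_dz = 0.42  # agreement between fraternal twins
print(falconer_heritability(r_mz, r_dz))  # 0.4, i.e. ~40% heritable
```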

Three political science professors—John Alford, Carolyn Funk, and John Hibbing—have studied political attitudes among a large number of twins in America and Australia. They measured the attitudes with something called the Wilson-Patterson Scale (I am not the Wilson after whom it was named), which asks whether a respondent agrees or disagrees with 28 words or phrases, such as “death penalty,” “school prayer,” “pacifism,” or “gay rights.” They then compared the similarity of the responses among identical twins with the similarity among fraternal twins. They found that, for all 28 taken together, the identical twins did indeed agree with each other more often than the fraternal ones did—and that genes accounted for about 40 percent of the difference between the two groups. On the other hand, the answers these people gave to the words “Democrat” or “Republican” had a very weak genetic basis. In politics, genes help us understand fundamental attitudes—that is, whether we are liberal or conservative—but do not explain what party we choose to join.

Genes also influence how frequently we vote. Voting has always puzzled scholars: How is it rational to wait in line on a cold November afternoon when there is almost no chance that your ballot will make any difference? Apparently, people who vote often feel a strong sense of civic duty or like to express themselves. But who are these people? James Fowler, Laura Baker, and Christopher Dawes studied political participation in Los Angeles by comparing voting among identical and fraternal twins. Their conclusion: among registered voters, genetic factors explain about 60 percent of the difference between those who vote and those who do not.

A few scholars, determined to hang on to the belief that environment explains everything, argue that such similarities occur because the parents of identical twins—as opposed to the parents of fraternal twins—encourage them to be as alike as possible as they grow up. This is doubtful. First, we know that many parents make bad guesses about their children’s genetic connection—thinking that fraternal twins are actually identical ones, or vice versa. When we take twins’ accurate genetic relationships into account, we find that identical twins whom parents wrongly thought to be fraternal are very similar, while fraternal twins wrongly thought to be identical are no more alike than ordinary siblings.

Moreover, studying identical twins reared apart by different families, even in different countries, effectively shows that their similar traits cannot be the result of similar upbringing. The University of Minnesota’s Thomas Bouchard has done research on many identical twins reared apart (some in different countries) and has found that though they never knew each other or their parents, they proved remarkably alike, especially in personality—whether they were extroverted, agreeable, neurotic, or conscientious, for example.

Some critics complain that the fact that identical twins live together with their birth parents, at least for a time, ruins Bouchard’s findings: during this early period, they say, parenting must influence the children’s attitudes. But the average age at which the identical twins in Bouchard’s study became separated from their parents was five months. It is hard to imagine parents teaching five-month-old babies much about politics or religion.

The gene-driven ideological split that Alford and his colleagues found may, in fact, be an underestimate, because men and women tend to marry people with whom they agree on big issues—assortative mating, as social scientists call it. Assortative mating means that the children of parents who agree on issues will be more likely to share whatever genes influence those beliefs. Thus, even children who are not identical twins will have a larger genetic basis for their views than if their parents married someone with whom they disagreed. Since we measure heritability by subtracting the similarity among fraternal twins from the similarity among identical ones, this difference may neglect genetic influences that already exist on fraternal twins. And if it does, it means that we are underestimating genetic influences on attitudes.
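
(Numerically, the deflation is easy to see: anything that raises the fraternal-twin correlation, as assortative mating does, shrinks the gap the estimate is built on. Continuing the hypothetical sketch above, with the inflated value chosen purely for illustration:)

```python
# Assortative mating lets fraternal twins share more than the usual
# half of the attitude-relevant genes, inflating r_DZ; the estimate
# 2 * (r_MZ - r_DZ) then understates the true genetic influence.
r_mz = 0.62
for label, r_dz in [("random mating", 0.42), ("assortative mating", 0.47)]:
    print(f"{label}: {2.0 * (r_mz - r_dz):.2f}")
# random mating: 0.40
# assortative mating: 0.30  <- the same trait now looks less heritable
```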

When we step back and look at American politics generally, genes may help us understand why, for countless decades, about 40 percent of all voters have supported conservative causes, about 40 percent have backed liberal ones, and the 20 percent in the middle have decided the elections. On a few occasions, the winning presidential candidate has won about 60 percent of the vote. But these days we call a 55 percent victory a “landslide.” It is hard to imagine a purely environmental force that would rule out a presidential election in which one candidate got 80 percent of the vote and his rival only 20 percent. Something deeper must be going on.

All of this leaves open the question: Which genes help create which political attitudes? Right now, we don’t know. To discover the links will require lengthy studies of the DNA of people with different political views. Scientists are having a hard time locating the specific genes that cause diseases; it will probably be much harder to find the complex array of genes that affects politics.

There are problems with the observed link between genes and politics. One is that it is fairly crude so far. Liberals and conservatives come in many varieties: one can be an economic liberal and a social conservative, say, favoring a large state but opposing abortion; or an economic conservative and a social liberal, favoring the free market but supporting abortion and gay rights. If we add attitudes about foreign policy to the mix, the combinations double. Most tests used in genetic studies of political views do not allow us to make these important distinctions. As a result, though we know that genes affect ideology, that knowledge is clumsy. In time, I suspect, we will learn more about these subtleties.

Further, it’s important to emphasize that biology is not destiny. Genetic influences rarely operate independently of environmental factors. Take the case of serotonin. People who have little of this neurotransmitter are at risk for some psychological problems, but for many of them, no such problems occur unless they experience some personal crisis. Then the combined effect of genetic influences and disruptive experiences will trigger a deep state of depression, something that does not happen to people who either do not lack serotonin or who do lack it but encounter no crisis. Recently, in the first study to find the exact genes that affect political participation, Fowler and Dawes found two genes that help explain voting behavior. One of the genes, influencing serotonin levels, boosts turnout by 10 percent—if the person also attends church frequently. Nature and nurture interact.

The same is probably true of political ideology. When campus protests and attacks on university administrators began in the late 1960s, it was not because a biological upheaval had increased the number of radicals; it was because such people encountered events (the war in Vietnam, the struggle over civil rights) and group pressures that induced them to take strong actions. By the same token, lynchings in the South did not become common because there were suddenly more ultra-racists around. Rather, mob scenes, media frenzies, and the shock of criminal events motivated people already skeptical of civil rights to do terrible things.

Another challenge is politicized assessment of the genetic evidence. Ever since 1950, when Theodor Adorno and his colleagues published The Authoritarian Personality, scholars have studied right-wing authoritarianism but neglected its counterpart on the left. In his study of identical twins reared apart, Bouchard concludes that right-wing authoritarianism is, to a large degree, inherited—but he says nothing about the Left. This omission is puzzling, since as Bouchard was studying twins at the University of Minnesota, he was regularly attacked by left-wing students outraged by the idea that any traits might be inherited. A few students even threatened to kill him. When I pointed this out to him, he suggested, in good humor, that I was a troublemaker.

Yet if you ask who in this country has prevented people from speaking on college campuses, it is overwhelmingly leftists. If you ask who storms the streets and shatters the windows of Starbucks coffee shops to protest the World Trade Organization, it is overwhelmingly leftists. If you ask who produces campus codes that infringe on free speech, it is overwhelmingly leftists. If you ask who invaded the classroom of my late colleague Richard Herrnstein and tried to prevent him from teaching, it was overwhelmingly leftists.

A better way to determine if authoritarianism is genetic would be to ask people what the country’s biggest problems are. Liberals might say the inequality of income or the danger of global warming; conservatives might indicate the tolerance of abortion or the abundance of pornography. You would then ask each group what they thought should be done to solve these problems. An authoritarian liberal might say that we should tax high incomes out of existence and close down factories that emit greenhouse gases. A conservative authoritarian might suggest that we put abortion doctors in jail and censor books and television programs. This approach would give us a true measure of authoritarianism, left and right, and we would know how many of each kind existed and something about their backgrounds. Then, if they had twins, we would be able to estimate the heritability of authoritarianism. Doing all this is a hard job, which may explain why no scholars have done it.

Genes shape, to varying degrees, almost every aspect of human behavior. The struggle by some activists to deny or downplay that fact is worrisome. The anti-gene claim is ultimately an ill-starred effort to preserve the myth that, since the environment can explain everything, political causes that attempt to alter the environment can bring about whatever their leaders desire.

The truth is that though biology is not destiny, neither is it an easily changed path to utopia.

James Q. Wilson, formerly a professor at Harvard and at UCLA, now lectures at Pepperdine University. In 2003, he was awarded the Presidential Medal of Freedom.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
WSJ: Last Minute Changes
« Reply #59 on: February 13, 2009, 11:36:32 AM »
By CHRISTOPHER F. CHABRIS
The debate over the validity of evolutionary theory may be real enough when it comes to religious belief and cultural outlook. But it has nothing to do with science. No evidence seriously contradicts the idea that the plant and animal species found on Earth today are descended from common ancestors that existed long ago. Indeed, the evidence for natural selection is infinitely stronger than it was when Charles Darwin proposed it 150 years ago, mainly because later discoveries in the field of genetics supplied the biological mechanisms to explain the patterns that Darwin and his contemporaries were observing.

But scientists do disagree over the pace and time-span of human evolution. Gregory Cochran and Henry Harpending begin "The 10,000 Year Explosion" with a remark from the paleontologist Stephen Jay Gould, who said that "there's been no biological change in humans for 40,000 or 50,000 years." They also cite the evolutionist Ernst Mayr, who agrees that "man's evolution towards manness suddenly came to a halt" in the same epoch. Such claims capture the consensus in anthropology, too, which dates the emergence of "behaviorally modern humans" -- beings who acted much more like us than like their predecessors -- to about 45,000 years ago.

But is the timeline right? Did human evolution really stop? If not, our sense of who we are -- and how we got this way -- may be radically altered. Messrs. Cochran and Harpending, both scientists themselves, dismiss the standard view. Far from ending, they say, evolution has accelerated since humans left Africa 40,000 years ago and headed for Europe and Asia.

Evolution proceeds by changing the frequency of genetic variants, known as "alleles." In the case of natural selection, alleles that enable their bearers to leave behind more offspring will become more common in the next generation. Messrs. Cochran and Harpending claim that the rate of change in the human genome has been increasing in recent millennia, to the point of turmoil. Literally hundreds or thousands of alleles, they say, are under selection, meaning that our social and physical environments are favoring them over other -- usually older -- alleles. These "new" variants are sweeping the globe and becoming more common.
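
(To see what such a sweep looks like in numbers, here is a minimal one-locus selection model; the 3 percent fitness edge and the starting frequency are arbitrary assumptions for illustration, not figures from the book.)

```python
# Discrete-generation selection: carriers of an allele leave s = 3%
# more offspring, so its frequency p follows the standard recurrence
#     p' = p * (1 + s) / (p * (1 + s) + (1 - p))

def sweep(p0: float, s: float, generations: int) -> float:
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (p * (1 + s) + (1 - p))
    return p

# A new variant starting at 0.1% frequency with a 3% advantage:
for gens in (100, 300, 500):
    print(gens, round(sweep(0.001, 0.03, gens), 3))
# 100 0.019  -> still rare after ~2,500 years (25-year generations)
# 300 0.877  -> now the majority variant
# 500 1.0    -> essentially fixed in the population
```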

 The 10,000 Year Explosion
By Gregory Cochran and Henry Harpending
(Basic, 288 pages, $27)
But genomes don't just speed up their evolution willy-nilly. So what happened, the authors ask, to keep human evolution going in the "recent" past? Two crucial events, they contend, had to do with food production. As humans learned the techniques of agriculture, they abandoned their diffuse hunter-gatherer ways and established cities and governments. The resulting population density made humans ripe for infectious diseases like smallpox and malaria. Alleles that helped protect against disease proved useful and won out.

The domestication of cattle for milk production also led to genetic change. Among people of northern European descent, lactose intolerance -- the inability to digest milk in adulthood -- is unusual today. But it was universal before a genetic mutation arose about 8,000 years ago that made lactose tolerance continue beyond childhood. Since you can get milk over and over from a cow, but can get meat from it only once, you can harvest a lot more calories over time for the same effort if you are lactose tolerant. Humans who had this attribute would have displaced those who didn't, all else being equal. (If your opponent has guns and you don't, drinking milk won't save you.)

To make their case for evolution having continued longer than is usually claimed, Messrs. Cochran and Harpending remind us that dramatic changes in human culture appeared about 40,000 years ago, resulting in painting, sculpture, and better tools and weapons. A sudden change in the human genome, they suggest, made for more creative, inventive brains. But how could such a change come about? The authors propose that the humans of 40,000 years ago occasionally mated with Neanderthals living in Europe, before the Neanderthals became extinct. The result was an "introgression" of Neanderthal alleles into the human lineage. Some of those alleles may have improved brain function enough to give their bearers an advantage in the struggle for survival, thus becoming common.

In their final chapter, Messrs. Cochran and Harpending venture into recorded history by observing two interesting facts about Ashkenazi Jews (those who lived in Europe after leaving the Middle East): They are disproportionately found among intellectual high-achievers -- Nobel Prize winners, world chess champions, people who score well on IQ tests -- and they are victims of rare genetic diseases, like Gaucher's and Tay-Sachs. The authors hypothesize that these two facts are connected by natural selection.

Just as sickle-cell anemia results from having two copies of an allele that protects you against malaria if you have just one, perhaps each Ashkenazi disease occurs when you have two copies of an allele that brings about something useful when you have just one. That useful thing, according to Messrs. Cochran and Harpending, is higher cognitive ability. They argue that the rare diseases are unfortunate side-effects of natural selection for intelligence, which Messrs. Cochran and Harpending think happened during the Middle Ages in Europe, when Jews rarely intermarried with other Europeans.
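
(The sickle-cell logic borrowed here has a standard population-genetics form, heterozygote advantage: when one copy of an allele helps but two copies hurt, selection holds the allele at an intermediate equilibrium rather than eliminating it. A minimal sketch with hypothetical fitness costs; the review itself gives no numbers.)

```python
# Overdominance: relative fitnesses of the three genotypes are
#   AA (no copies):  1 - s   (lacks the heterozygote's benefit)
#   Aa (one copy):   1       (the advantaged carrier)
#   aa (two copies): 1 - t   (the disease state)
# Selection balances at allele frequency q* = s / (s + t).

def equilibrium_freq(s: float, t: float) -> float:
    return s / (s + t)

# Hypothetical costs: a 5% fitness cost for non-carriers and an 80%
# cost for two-copy carriers:
print(round(equilibrium_freq(0.05, 0.80), 3))  # 0.059
# Even a severely harmful recessive allele persists at ~6% frequency.
```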

"The 10,000 Year Explosion" is important and fascinating but not without flaw. Messrs. Cochran and Harpending do not stop often enough to acknowledge and rebut the critics of their ideas. And though the authors cite historical sources and scientific articles in support of their thesis, they too often write in a speculative voice, qualifying claims with "possible," "likely," "might" and "probably." This voice is inevitable in any discussion of events tens of thousands of years ago. But it leads to another problem: The authors don't say enough about the developments in genetic science that allow them to make inferences about humanity's distant past. Readers will wonder, for instance, exactly how it is possible to recognize ancient Neanderthal DNA in our modern genomes. Despite all this, the provocative ideas in "The 10,000 Year Explosion" must be taken seriously by anyone who wants to understand human origins and humanity's future.

Mr. Chabris is a psychology professor at Union College in Schenectady, N.Y.


Body-by-Guinness

  • Guest
Brain Focus and Neural Inhibitions
« Reply #60 on: February 25, 2009, 10:31:06 AM »
My ex-wife, who could belabor a point to a mind-numbing degree, used to get quite annoyed when I'd tune out her long-winded meanderings. From my end the process was an unbidden one: some mechanism conducted a signal-to-noise evaluation and filtered out the noise, at which point my brain would cast about for something germane to focus on, at least until interrupted by an "are you listening to me!?" Be that as it may, those memories caused me to mull this piece:

Brain mechanism recruited to reduce noise during challenging tasks

New research reveals a sophisticated brain mechanism that is critical for filtering out irrelevant signals during demanding cognitive tasks. The study, published by Cell Press in the February 26 issue of the journal Neuron, also provides some insight into how disruption of key inhibitory pathways may contribute to schizophrenia.

"The ability to keep track of information and one's actions from moment to moment is necessary to accomplish even the simple tasks of everyday life," explains senior study author, Dr. Helen Barbas from Boston University and School of Medicine. "Equally important is the ability to focus on relevant information and ignore noise."

Dr. Barbas and her colleague, Dr. Maria Medalla, were interested in examining the synaptic mechanisms for the selection and suppression of signals in working memory. They focused on the fine synaptic interactions of pathways with excitatory and inhibitory neurons in brain areas involved in attention.

"The primate dorsolateral prefrontal cortex (DLPFC) and anterior cingulated cortex (ACC) are brain regions that focus attention on relevant signals and suppress noise in cognitive tasks. However, their synaptic communication and unique roles in cognitive control are largely unknown," explains Dr. Barbas.

The researchers found that a pathway linking two related prefrontal areas within DLPFC and a pathway from the functionally distinct ACC to DLPFC similarly innervated excitatory neurons associated with paying attention to relevant stimuli. Interestingly, large nerve fiber endings from ACC contacted selectively inhibitory neurons that help suppress "noisy" excitatory neurons nearby.

These observations suggest that ACC has a greater impact in reducing noise in dorsolateral areas during challenging cognitive tasks involving conflict, error, or reversing decisions. These mechanisms are often disrupted in schizophrenia, and previous functional imaging studies by others have shown that schizophrenia is associated with reduced activity in ACC.

The authors conclude that ACC pathways may help reduce noise by stimulating inhibitory neurons in DLPFC. "The present data provide a circuit mechanism to suggest that pathology in the output neurons of ACC in schizophrenia might reduce excitatory drive to inhibitory neurons of dorsolateral prefrontal cortices, perturbing the delicate balance of excitation and inhibition," offers Dr. Barbas.

http://www.eurekalert.org/pub_releases/2009-02/cp-bmr022309.php

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
Sexual Insanity
« Reply #61 on: February 28, 2009, 09:39:16 PM »
Bill Muehlenberg | Friday, 27 February 2009
Sexual insanity
A 13-year-old father? A woman with 14 IVF children? We can’t say we were not warned.
Week by week the stories become more sensational. Blogs were still buzzing over California’s “octomom”, Nadya Suleman, when the story of Alfie Patten, a baby-faced British 13-year-old and putative father, grabbed the international headlines. In Australia, where I live, an appeal court has awarded a lesbian duo hundreds of thousands of dollars in compensation for getting two babies from IVF treatment rather than one.

Strangely enough, such dramatic consequences of the erosion of marriage and the explosion of out-of-control sexuality were foreseen -- in some instances long ago. In 1968 Will and Ariel Durant’s important book, The Lessons of History, appeared. In it they said: "The sex drive in the young is a river of fire that must be banked and cooled by a hundred restraints if it is not to consume in chaos both the individual and the group."

Although the sexual revolution took off in the mid-60s, other social commentators had made similar warnings earlier on. In 1956 Harvard sociologist Pitirim Sorokin put it this way:

This sex revolution is as important as the most dramatic political or economic upheaval. It is changing the lives of men and women more radically than any other revolution of our time… Any considerable change in marriage behaviour, any increase in sexual promiscuity and sexual relations, is pregnant with momentous consequences. A sex revolution drastically affects the lives of millions, deeply disturbs the community, and decisively influences the future of society.

And back in 1927, J.D. Unwin of Cambridge University made similar remarks:

The whole of human history does not contain a single instance of a group becoming civilised unless it has been completely monogamous, nor is there any example of a group retaining its culture after it has adopted less rigorous customs. Marriage as a life-long association has been an attendant circumstance of all human achievement, and its adoption has preceded all manifestations of social energy… Indissoluble monogamy must be regarded as the mainspring of all social activity, a necessary condition of human development.

But these warnings have fallen on deaf ears, and our sexual decline is now gathering speed. Let’s look again at the stories I mentioned at the beginning of this article -- reported in the media within days of each other. Any one of them reveals a culture in crisis, but taken together they show a West on a slide to sexual suicide.

The case of Nadya Suleman, America’s single mom extraordinaire, is so well publicised we need only briefly recap here. Nadya had “a dream…to have a large family, huge family”, so she went right ahead and got herself six children with the aid of a sperm donor and IVF. But that was not enough; she went back again to the clinic and, wonder of wonders, produced octuplets. The 33-year-old California woman is unrepentant. “This is my choice to be a single parent,” she said.

It’s hard to know who has been more reckless and irresponsible, the woman or her IVF doctor. He recently implanted seven embryos in a 49-year-old woman, who is now pregnant with quadruplets. One can understand there are those in the IVF industry simply happy to make money, regardless of the consequences. Now that the consequence in this case is a single mother with 14 children, they will no doubt try to wash their hands of the whole affair and let society pick up the tab for supporting them.

Equally famous is the case of Alfie Patten, the 13-year-old English father who was just twelve when he conceived the child. He and his 15-year-old girlfriend are now parents, but seemingly clueless as to what all this entails. And now it turns out that there is a question as to who the real father is. Evidently, two other young boys (14 and 16) are now claiming to be the father. Speculation is rife about lucrative publicity deals to be made. Meanwhile, a child has been born into social and sexual chaos.

There is something sadly predictable about Alfie’s case, but my first Australian example of sexual insanity is truly startling. It concerns two lesbians who successfully sued a Canberra IVF doctor for creating two babies instead of one. The case actually has three stages, one sane and two outrageous. In 2007 the lesbian pair outrageously sued the doctor, claiming they only wanted one child, and that two would damage their livelihood (even though their combined income is more than $100,000).

In July 2008 the ACT Supreme Court, sanely, rejected their claim, but an appeals court recently, and again outrageously, reversed the decision, ordering the doctor to pay the lesbians $317,000 in compensation. The women said having two children damaged their relationship. (Mind you, in the light of the Nadya Suleman story it is difficult to feel sorry for IVF doctors.)


Our last story involves the growing trend of rental agreements in Australian cities involving sex instead of rent money. It seems that some men are taking advantage of the rental crisis by placing online ads which offer women free rooms in exchange for sex. One ad, for a Melbourne townhouse, offered "free rent for someone special: instead of rent, I am looking for someone to help me with certain needs/requirements on a regular basis".

The Sunday Telegraph explains: “The zero-rent ads, targeting desperate women looking for somewhere to live, are becoming increasingly common on popular ‘share house’ rental websites. Although there have been numerous complaints about the ads, which some website users have dubbed ‘offensive’, they do not breach policy guidelines for sites such as flatmates.com.au.”


“Desperate women”? Let’s not be too ready to excuse those who accept what amounts to an invitation to prostitution, thereby putting themselves in danger and contributing to the environment of sexual insanity. Like the previous examples, the blatant sexual pitch in these flatmate ads is a sign of a society which is fast losing all bearings concerning things sexual or things moral.

Our wiser, saner and more moral forebears provided plenty of warning about these things, but we have chosen to ignore such warnings and now each passing day seems to bring out another horror story of sexual insanity.

As G.K. Chesterton wrote a century ago: "A society that claims to be civilized and yet allows the sex instinct free-play is inoculating itself with a virus of corruption which sooner or later will destroy it. It is only a question of time." He is worth quoting at length:

What had happened to the human imagination, as a whole, was that the whole world was coloured by dangerous and rapidly deteriorating passions; by natural passions becoming unnatural passions. Thus the effect of treating sex as only one innocent natural thing was that every other innocent natural thing became soaked and sodden with sex. For sex cannot be admitted to a mere equality among elementary emotions or experiences like eating and sleeping. The moment sex ceases to be a servant it becomes a tyrant. There is something dangerous and disproportionate in its place in human nature, for whatever reason; and it does really need a special purification and dedication. The modern talk about sex being free like any other sense, about the body being beautiful like any tree or flower, is either a description of the Garden of Eden or a piece of thoroughly bad psychology, of which the world grew weary two thousand years ago.

We are today witnessing the bitter fruit of allowing sex to become a tyrant. Each day new headlines testify to the fact that when we abuse the wonderful gift of sex, we abuse ourselves and our neighbours. The question is, how much more abuse can we take as a culture before society can no longer function? One suspects that we should find this out quite soon.

Bill Muehlenberg is a lecturer in ethics and philosophy at several Melbourne theological colleges and a PhD candidate at Deakin University.

Body-by-Guinness

  • Guest
New Adult Stem Cell Technique
« Reply #62 on: March 02, 2009, 12:42:02 PM »
Wonderful Stem Cell News

Ronald Bailey | March 2, 2009, 10:15am

Canadian and British stem cell researchers are reporting an exciting new method for producing stem cells from adult cells without using viruses. In 2006, researchers in Japan and Wisconsin discovered how to use viruses to ferry four genes that turn adult cells into stem cells that act very much like embryonic stem cells. Like stem cells derived from embryos, the induced pluripotent stem (iPS) cells can differentiate into various cell types that could be used as transplants to replace diseased or damaged tissues. In addition, since the stem cells are produced using adult cells taken from individual patients, they would be genetic matches for each patient. This would mean that transplants of such cells would not risk being rejected by a patient's immune system.

However, researchers worried that using viruses to produce iPS cells might result in cancer. The new technique uses the piggyBac transposon, derived from the cabbage looper moth, to incorporate into skin cells the suite of four genes necessary to transform them into stem cells. (A transposon is a mobile DNA sequence that can move from one site in a chromosome to another, or between different chromosomes.) Once the genes are installed, the transposon can be completely eliminated from the cells. If iPS cells work out, another tremendous advantage to them is that they can be produced without using scarce human eggs.

In addition, opponents of human embryonic stem cell research argue that the new iPS cells are not morally problematic (from their point of view) because they are not derived from human embryos. On the other hand, it might be that iPS cells produced from skin cells could become embryos capable of developing into babies if implanted in a womb. The possibility that a soul can enter a specific cell evidently may depend on whether or not a single genetic switch is on or off.

In any case, the new research is a very promising avenue to the development of regenerative medicine.

http://www.reason.com/blog/printer/131989.html

Body-by-Guinness

  • Guest
Future Planning in the Animal Kingdom
« Reply #63 on: March 10, 2009, 06:31:08 AM »
Last line of this piece bothers me quite a bit, but otherwise it contains many interesting tidbits.

Arsenal Confirms Chimp's Ability to Plan, Study Says
Animal at Swedish Zoo Collects Stones to Hurl at Visitors
By David Brown
Washington Post Staff Writer
Tuesday, March 10, 2009; A06

Santino evidently knows he's going to get upset, so he plans ahead.

The 30-year-old chimpanzee, who has lived in a Swedish zoo most of his life, sometimes gets agitated when zoo visitors begin to gather on the other side of the moat that surrounds his enclosure, where he is the dominant -- and only -- male in a group that includes half a dozen females.

He shows his displeasure by flinging stones or bits of concrete at the human intruders, but finding a suitable weapon on the spur of the moment perhaps isn't so easy. To prepare, Santino often begins his day by roaming the enclosure, finding stones and stacking them in handy piles.

On some days, he's barraged visitors with up to 20 projectiles thrown in rapid succession, always underhand. Several times he has hit spectators standing 30 feet away across the water-filled moat.

The behavior, witnessed dozens of times, has made Santino something of a local celebrity.

It also made him the subject of a scientific paper, published yesterday, documenting one of the more elaborate examples of contingency planning in the animal world.

"Many animals plan. But this is planning for a future psychological state. That is what is so advanced," said Mathias Osvath, director of the primate research station at Lund University and author of the paper in the journal Current Biology.

The animal's preparations include not only stockpiling the stones he finds but also, more recently, fashioning projectiles from pieces of concrete he has broken off artificial rocks in his habitat.

Others have observed great apes planning, both in the wild and in captivity. Some birds in the corvid family, which includes jays and ravens, also plan for future contingencies. In general, though, planning by animals is thought to occur only when the payoff is immediate and more or less certain.

"People always assume that animals live in the present. This seems to indicate that they don't live entirely in the present," said Frans de Waal, a primatologist at Emory University in Atlanta, who was not involved in the research.

Santino was born in a zoo in Munich in 1978 but has lived all but five years of his life at the Furuvik Zoo, about 60 miles north of Stockholm.

He began throwing stones at age 16 when he became the sole -- and therefore dominant -- male in the group. None of the other chimpanzees, including a male that was in the group briefly, stored or threw stones.

The troop's habitat is an island surrounded by a moat. The stone-throwing is more frequent early in the season when the zoo reopens after the winter and Santino sees crowds of people across the water for the first time in months. Sometimes particular individuals seem to bother him, Osvath said.

On some days, zookeepers have found as many as five caches, containing three to eight stones each, along the shore facing the viewing area. Once, a hidden observer saw him gather stones five mornings in a row before the zoo opened.

Most of the stones are taken from the shallows at the edge of the moat. About a year after his storing and throwing began, however, Santino began tapping stones against the concrete artificial rocks, evidently listening for a hollow sound that indicates a fissure. He would then hit the concrete harder until a piece chipped off, occasionally then hitting it again to make it fist-size.

"I have seen him going around doing this. It is very impressive," Osvath said.

The throwing behavior is part of a normal display of dominance and territorial protection by male chimpanzees that occasionally involves throwing feces. Osvath doesn't think this animal is particularly smart or aggressive.

"I don't think he is unusual in any way. If anything, chimpanzees in the wild would plan more, I suspect," he said.

Osvath and others have tested chimpanzees' ability to plan. In one experiment, the animals were given a choice between eating grapes at the moment and getting and storing a rubber hose they could use sometime in the future to gain access to fruit soup, one of their favorite foods. Many chose the hose.

De Waal, who is also affiliated with the Yerkes National Primate Research Center in Atlanta, said he's observed a female chimp at a zoo in the Netherlands that in cold weather -- but not warm -- would bring an armful of straw from her enclosure when she went outside in order to have something to sit on.

Amy Fultz, a primatologist at Chimp Haven, a sanctuary in Louisiana for animals once used for entertainment or research, said she also has seen planning in some of the 132 chimpanzees living there.

As in the wild, some fashion tools from stalks of plants that they use to fish ants from anthills.

"One, named Karin, will gather up a particular species of verbena and save it in a place in her habitat. I have watched her go back and get them later in the day, or even later in the week," Fultz said.

One expert said planning by chimpanzees has been observed often enough in the wild that she questioned the novelty of Santino's behavior.

Sue Taylor Parker, a retired professor of biological anthropology at California's Sonoma State University who has compared the cognitive development of humans and primates, said wild chimpanzees sometimes carry rocks long distances to "anvil sites" for future use in cracking nuts. Cooperative hunting also implies a certain minimum of planning.

"Chimpanzee behavior that is at the edge of their highest abilities is always interesting to read about. I just question the uniqueness of this," she said. She added that the level of planning seen in Santino is roughly the same as that of 3-to-5-year-old children.

Unusual or not, Santino's rock-throwing may not be in evidence when spring comes to Sweden this year and he again sees visitors across the water.

In order to decrease his agitation, which was fueled in part by high testosterone levels characteristic of dominant males, the animal was castrated last fall.

http://www.washingtonpost.com/wp-dyn/content/article/2009/03/09/AR2009030901458.html?nav=hcmodule

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
NYT: Can we increase our intelligence?
« Reply #64 on: March 11, 2009, 05:53:19 AM »
Guest Column: Can We Increase Our Intelligence?

Many thanks to Steve Quake for four stimulating articles on some of the dilemmas facing scientists today. He now hands off to Sandra Aamodt and Sam Wang, two neuroscientists famous for their award-winning book, “Welcome to Your Brain: Why You Lose Your Car Keys But Never Forget How to Drive and Other Puzzles of Everyday Life.” Sandra and Sam will be writing their articles together; please welcome them.


By Sam Wang and Sandra Aamodt

It’s an honor to be invited to fill in for Olivia. We’ll be writing about slow and fast forces that shape the brain: natural selection, operating relatively slowly over many generations; and environmental influences, whose effects are visible across a few generations or even within one individual’s lifetime.

We’re often asked whether the human brain is still evolving. Taken at face value, it sounds like a silly question. People are animals, so selection pressure would presumably continue to apply across generations.

But the questioners are really concerned about a larger issue: how our brains are changing over time — and whether we have any control over these developments. This week we discuss intelligence and the “Flynn effect,” a phenomenon that is too rapid to be explained by natural selection.

It used to be believed that people were born with a fixed level of general intelligence, one that was unaffected by environment and stayed more or less the same throughout life. But now it’s known that environmental influences are large enough to have considerable effects on intelligence, perhaps even during your own lifetime.

A key contribution to this subject comes from James Flynn, a moral philosopher who has turned to social science and statistical analysis to explore his ideas about humane ideals. Flynn’s work usually pops up in the news in the context of race issues, especially public debates about the causes of racial differences in performance on intelligence tests. We won’t spend time on the topic of race, but the psychologist Dick Nisbett has written an excellent article on the subject.

Flynn first noted that standardized intelligence quotient (I.Q.) scores were rising by three points per decade in many countries, and even faster in some countries like the Netherlands and Israel. For instance, in verbal and performance I.Q., an average Dutch 14-year-old in 1982 scored 20 points higher than the average person of the same age in his parents’ generation in 1952. These I.Q. increases over a single generation suggest that the environmental conditions for developing brains have become more favorable in some way.

What might be changing? One strong candidate is working memory, defined as the ability to hold information in mind while manipulating it to achieve a cognitive goal. Examples include remembering a clause while figuring out how it relates to the rest of a sentence, or keeping track of the solutions you’ve already tried while solving a puzzle. Flynn has pointed out that modern times have increasingly rewarded complex and abstract reasoning. Differences in working memory capacity account for 50 to 70 percent of individual differences in fluid intelligence (abstract reasoning ability) in various meta-analyses, suggesting that it is one of the major building blocks of I.Q. (Ackerman et al.; Kane et al.; Süss et al.) This idea is intriguing because working memory can be improved by training.


A common way to measure working memory is called the “n-back” task. Presented with a sequential series of items, the person taking the test has to report when the current item is identical to the item that was presented a certain number (n) of items ago in the series. For example, the test taker might see a sequence of letters like

L K L R K H H N T T N X

presented one at a time. If the test is an easy 1-back task, she should press a button when she sees the second H and the second T. For a 3-back task, the right answers are K and N, since they are identical to items three places before them in the list. Most people find the 3-back condition to be challenging.
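The scoring rule is simple enough to state in a few lines of Python (a sketch; the function name is mine, not the researchers'):

  def n_back_targets(seq, n):
      # report every item that matches the item presented n steps earlier
      return [item for i, item in enumerate(seq)
              if i >= n and seq[i - n] == item]

  seq = list("LKLRKHHNTTNX")
  print(n_back_targets(seq, 1))  # ['H', 'T'] -- the second H and second T
  print(n_back_targets(seq, 3))  # ['K', 'N']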

A recent paper reported that training on a particularly fiendish version of the n-back task improves I.Q. scores. Instead of seeing a single series of items like the one above, test-takers saw two different sequences, one of single letters and one of spatial locations. They had to report n-back repetitions of both letters and locations, a task that required them to simultaneously keep track of both sequences. As the trainees got better, n was increased to make the task harder. If their performance dropped, the task was made easier until they recovered.
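The adaptive difficulty rule amounts to a simple staircase, sketched below; the exact accuracy thresholds are my assumptions, and the paper's criteria may differ:

  def adjust_n(n, accuracy, up=0.90, down=0.70):
      # raise difficulty after a strong block, lower it after a weak one
      if accuracy > up:
          return n + 1
      if accuracy < down and n > 1:
          return n - 1
      return n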

Each day, test-takers trained for 25 minutes. On the first day, the average participant could handle the 3-back condition. By the 19th day, average performance reached the 5-back level, and participants showed a four-point gain in their I.Q. scores.

The I.Q. improvement was larger in people who’d had more days of practice, suggesting that the effect was a direct result of training. People benefited across the board, regardless of their starting levels of working memory or I.Q. scores (though the results hint that those with lower I.Q.s may have shown larger gains). Simply practicing an I.Q. test can lead to some improvement on the test, but control subjects who took the same two I.Q. tests without training improved only slightly. Also, increasing I.Q. scores by practice doesn’t necessarily increase other measures of reasoning ability (Ackerman, 1987).

Since the gains accumulated over a period of weeks, training is likely to have drawn upon brain mechanisms for learning that can potentially outlast the training. But this is not certain. If continual practice is necessary to maintain I.Q. gains, then this finding looks like a laboratory curiosity. But if the gains last for months (or longer), working memory training may become as popular as — and more effective than — games like sudoku among people who worry about maintaining their cognitive abilities.

Now, some caveats. The results, though tantalizing, are not perfect. It would have been better to give the control group some other training not related to working memory, to show that the hard work of training did not simply motivate the experimental group to try harder on the second I.Q. test. The researchers did not test whether working memory training improved problem-solving tasks of the type that might occur in real life. Finally, they did not explore how much improvement would be seen with further training.

Research on working memory training, as well as Flynn’s original observations, raises the possibility that the fast-paced modern world, despite its annoyances (or even because of them), may be improving our reasoning ability. Maybe even multitasking — not the most efficient way to work — is good for your brain because of the mental challenge. Something to think about when you’re contemplating retirement on a deserted island.

**********

NOTES:

C. Jarrold and J.N. Towse (2006) Individual differences in working memory. Neuroscience 139:39–50.

P.L. Ackerman, M.E. Beier, and M.O. Boyle (2005) Working memory and intelligence: the same or different constructs? Psychological Bulletin 131:30–60.

M.J. Kane, D.Z. Hambrick, and A.R.A. Conway (2005) Working memory capacity and fluid intelligence are strongly related constructs: comment on Ackerman, Beier, and Boyle (2005). Psychological Bulletin 131:66–71.

H.-M. Süss, K. Oberauer, W.W. Wittmann, O. Wilhelm, and R. Schulze (2002) Working-memory capacity explains reasoning ability—and a little bit more. Intelligence 30:261–288.

S.M. Jaeggi, M. Buschkuehl, J. Jonides, and W.J. Perrig (2008) Improving fluid intelligence with training on working memory. Proceedings of the National Academy of Sciences USA 105:6829-6833.

D.A. Bors, F. Vigneau (2003) The effect of practice on Raven’s Advanced Progressive Matrices. Learning and Individual Differences 13:291–312.

P.L. Ackerman (1987) Individual differences in skill learning: An integration of psychometric and information processing perspectives. Psychological Bulletin 102:3–27.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
Remembering jokes
« Reply #65 on: March 17, 2009, 09:52:17 AM »
Basics
In One Ear and Out the Other
NYT
Published: March 16, 2009

By all accounts, my grandfather Nathan had the comic ambitions of a Jack Benny but the comic gifts of a John Kerry. Undeterred, he always kept a few blank index cards in his pocket, so that if he happened to hear a good joke, he’d have someplace to write it down.

How I wish I knew where Nathan stashed that deck.

Like many people, I can never remember a joke. I hear or read something hilarious, I laugh loudly enough to embarrass everybody else in the library, and then I instantly forget everything about it — everything except the fact, always popular around the dinner table, that “I heard a great joke today, but now I can’t remember what it was.”

For researchers who study memory, the ease with which people forget jokes is one of those quirks, those little skids on the neuronal banana peel, that end up revealing a surprising amount about the underlying architecture of memory.

And there are plenty of other similarly illuminating examples of memory’s whimsy and bad taste — like why you may forget your spouse’s birthday but will go to your deathbed remembering every word of the “Gilligan’s Island” theme song. And why you must chop a string of data like a phone number into manageable and predictable chunks to remember it and will fall to pieces if you are in Britain and hear a number read out as “double-four, double-three.” And why your efforts to fill in a sudden memory lapse by asking your companions, “Hey, what was the name of that actor who starred in the movie we saw on Friday?” may well fail, because (what useless friends!) now they’ve all forgotten, too.

Welcome to the human brain, your three-pound throne of wisdom with the whoopee cushion on the seat.

In understanding human memory and its tics, Scott A. Small, a neurologist and memory researcher at Columbia, suggests the familiar analogy with computer memory.

We have our version of a buffer, he said, a short-term working memory of limited scope and fast turnover rate. We have our equivalent of a save button: the hippocampus, deep in the forebrain, is essential for translating short-term memories into a more permanent form.

Our frontal lobes perform the find function, retrieving saved files to embellish as needed. And though scientists used to believe that short- and long-term memories were stored in different parts of the brain, they have discovered that what really distinguishes the lasting from the transient is how strongly the memory is engraved in the brain, and the thickness and complexity of the connections linking large populations of brain cells. The deeper the memory, the more readily and robustly an ensemble of like-minded neurons will fire.

This process, of memory formation by neuronal entrainment, helps explain why some of life’s offerings weasel in easily and then refuse to be spiked. Music, for example. “The brain has a strong propensity to organize information and perception in patterns, and music plays into that inclination,” said Michael Thaut, a professor of music and neuroscience at Colorado State University. “From an acoustical perspective, music is an overstructured language, which the brain invented and which the brain loves to hear.”

A simple melody with a simple rhythm and repetition can be a tremendous mnemonic device. “It would be a virtually impossible task for young children to memorize a sequence of 26 separate letters if you just gave it to them as a string of information,” Dr. Thaut said. But when the alphabet is set to the tune of the ABC song with its four melodic phrases, preschoolers can learn it with ease.

And what are the most insidious jingles or sitcom themes but cunning variations on twinkle twinkle ABC?

Really great jokes, on the other hand, punch the lights out of do re mi. They work not by conforming to pattern recognition routines but by subverting them. “Jokes work because they deal with the unexpected, starting in one direction and then veering off into another,” said Robert Provine, a professor of psychology at the University of Maryland, Baltimore County, and the author of “Laughter: A Scientific Investigation.” “What makes a joke successful are the same properties that can make it difficult to remember.”

This may also explain why the jokes we tend to remember are often the most clichéd ones. A mother-in-law joke? Yes, I have the slot ready and labeled.

Memory researchers suggest additional reasons that great jokes may elude common capture. Daniel L. Schacter, a professor of psychology at Harvard and the author of “The Seven Sins of Memory,” says there is a big difference between verbatim recall of all the details of an event and gist recall of its general meaning.

“We humans are pretty good at gist recall but have difficulty with being exact,” he said. Though anecdotes can be told in broad outline, jokes live or die by nuance, precision and timing. And while emotional arousal normally enhances memory, it ends up further eroding your attention to that one killer frill. “Emotionally arousing material calls your attention to a central object,” Dr. Schacter said, “but it can make it difficult to remember peripheral details.”

As frustrating as it can be to forget something new, it’s worse to forget what you already know. Scientists refer to this as the tip-of-the-tongue phenomenon, when you know something but can’t spit it out, and the harder you try the more noncompliant the archives.

It’s such a virulent disorder that when you ask friends for help, you can set off so-called infectious amnesia. Behind the tying up of tongues are the too-delicate nerves of our brain’s frontal lobes and their sensitivity to anxiety and the hormones of fight or flight. The frontal lobes that rifle through stored memories and perform other higher cognitive tasks tend to shut down when the lower brain senses danger and demands that energy be shunted its way.

For that reason anxiety can be a test taker’s worst foe, and the anxiety of a pop quiz from a friend can make your frontal lobes freeze and your mind go blank. That is also why you’ll recall the frustratingly forgotten fact later that night, in the tranquillity of bed.

Memories can be strengthened with time and practice, practice, practice, but if there’s one part of the system that resists improvement, it’s our buffers, the size of our working memory on which a few items can be temporarily cached. Much research suggests that we can hold in short-term memory only five to nine data chunks at a time.

The limits of working memory again encourage our pattern-mad brains, and so we strive to bunch phone numbers into digestible portions and could manage even 10-digit strings when they had area codes with predictable phrases like a middle zero or one. But with the rise of atonal phone numbers with random strings of 10 digits, memory researchers say the limits of working memory have been crossed. Got any index cards?
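For what it's worth, the chunking trick is easy to demonstrate in code (a toy sketch using the familiar North American 3-3-4 grouping):

  def chunk(digits, sizes=(3, 3, 4)):
      # split a 10-digit string into three memorable chunks
      out, i = [], 0
      for s in sizes:
          out.append(digits[i:i+s])
          i += s
      return "-".join(out)

  print(chunk("2125550147"))  # 212-555-0147

Three chunks sit comfortably inside the five-to-nine-item buffer; ten unrelated digits do not.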


Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
The End of Philosophy
« Reply #66 on: April 07, 2009, 08:46:16 AM »
The End of Philosophy
DAVID BROOKS
Published: April 6, 2009
Socrates talked. The assumption behind his approach to philosophy, and the approaches of millions of people since, is that moral thinking is mostly a matter of reason and deliberation: Think through moral problems. Find a just principle. Apply it.

One problem with this kind of approach to morality, as Michael Gazzaniga writes in his 2008 book, “Human,” is that “it has been hard to find any correlation between moral reasoning and proactive moral behavior, such as helping other people. In fact, in most studies, none has been found.”

Today, many psychologists, cognitive scientists and even philosophers embrace a different view of morality. In this view, moral thinking is more like aesthetics. As we look around the world, we are constantly evaluating what we see. Seeing and evaluating are not two separate processes. They are linked and basically simultaneous.

As Steven Quartz of the California Institute of Technology said during a recent discussion of ethics sponsored by the John Templeton Foundation, “Our brain is computing value at every fraction of a second. Everything that we look at, we form an implicit preference. Some of those make it into our awareness; some of them remain at the level of our unconscious, but ... what our brain is for, what our brain has evolved for, is to find what is of value in our environment.”

Think of what happens when you put a new food into your mouth. You don’t have to decide if it’s disgusting. You just know. You don’t have to decide if a landscape is beautiful. You just know.

Moral judgments are like that. They are rapid intuitive decisions and involve the emotion-processing parts of the brain. Most of us make snap moral judgments about what feels fair or not, or what feels good or not. We start doing this when we are babies, before we have language. And even as adults, we often can’t explain to ourselves why something feels wrong.

In other words, reasoning comes later and is often guided by the emotions that preceded it. Or as Jonathan Haidt of the University of Virginia memorably wrote, “The emotions are, in fact, in charge of the temple of morality, and ... moral reasoning is really just a servant masquerading as a high priest.”

The question then becomes: What shapes moral emotions in the first place? The answer has long been evolution, but in recent years there’s an increasing appreciation that evolution isn’t just about competition. It’s also about cooperation within groups. Like bees, humans have long lived or died based on their ability to divide labor, help each other and stand together in the face of common threats. Many of our moral emotions and intuitions reflect that history. We don’t just care about our individual rights, or even the rights of other individuals. We also care about loyalty, respect, traditions, religions. We are all the descendants of successful cooperators.

The first nice thing about this evolutionary approach to morality is that it emphasizes the social nature of moral intuition. People are not discrete units coolly formulating moral arguments. They link themselves together into communities and networks of mutual influence.

The second nice thing is that it entails a warmer view of human nature. Evolution is always about competition, but for humans, as Darwin speculated, competition among groups has turned us into pretty cooperative, empathetic and altruistic creatures — at least within our families, groups and sometimes nations.

The third nice thing is that it explains the haphazard way most of us lead our lives without destroying dignity and choice. Moral intuitions have primacy, Haidt argues, but they are not dictators. There are times, often the most important moments in our lives, when in fact we do use reason to override moral intuitions, and often those reasons — along with new intuitions — come from our friends.

The rise and now dominance of this emotional approach to morality is an epochal change. It challenges all sorts of traditions. It challenges the bookish way philosophy is conceived by most people. It challenges the Talmudic tradition, with its hyper-rational scrutiny of texts. It challenges the new atheists, who see themselves involved in a war of reason against faith and who have an unwarranted faith in the power of pure reason and in the purity of their own reasoning.

Finally, it should also challenge the very scientists who study morality. They’re good at explaining how people make judgments about harm and fairness, but they still struggle to explain the feelings of awe, transcendence, patriotism, joy and self-sacrifice, which are not ancillary to most people’s moral experiences, but central. The evolutionary approach also leads many scientists to neglect the concept of individual responsibility and makes it hard for them to appreciate that most people struggle toward goodness, not as a means, but as an end in itself.


Body-by-Guinness

  • Guest
Neuroworld
« Reply #67 on: April 10, 2009, 05:51:24 AM »
Interesting blog that tracks odd neurological tidbits:

http://trueslant.com/ryansager/

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
NYT: Animal "regret"?
« Reply #68 on: June 02, 2009, 07:02:10 AM »
In That Tucked Tail, Real Pangs of Regret?

JOHN TIERNEY
Published: June 1, 2009
If you own a dog, especially a dog that has anointed your favorite rug, you know that an animal is capable of apologizing. He can whimper and slouch and tuck his tail and look positively mortified — “I don’t know what possessed me.” But is he really feeling sorry?



Could any animal feel true pangs of regret? Scientists once scorned this notion as silly anthropomorphism, and I used to side with the skeptics who dismissed these displays of contrition as variations of crocodile tears. Animals seemed too in-the-moment, too busy chasing the next meal, to indulge in much self-recrimination. If old animals had a song, it would be “My Way.”

Yet as new reports keep appearing — moping coyotes, rueful monkeys, tigers that cover their eyes in remorse, chimpanzees that second-guess their choices — the more I wonder if animals do indulge in a little paw-wringing.

Your dog may not share Hamlet’s dithering melancholia, but he might have something in common with Woody Allen.

The latest data comes from brain scans of monkeys trying to win a large prize of juice by guessing where it was hidden. When the monkeys picked wrongly and were shown the location of the prize, the neurons in their brain clearly registered what might have been, according to the Duke University neurobiologists who recently reported the experiment in Science.

“This is the first evidence that monkeys, like people, have ‘would-have, could-have, should-have’ thoughts,” said Ben Hayden, one of the researchers. Another of the authors, Michael Platt, noted that the monkeys reacted to their losses by shifting their subsequent guesses, just like humans who respond to a missed opportunity by shifting strategy.

“I can well imagine that regret would be highly advantageous evolutionarily, so long as one doesn’t obsess over it, as in depression,” Dr. Platt said. “A monkey lacking in regret might act like a psychopath or a simian Don Quixote.”

In earlier experiments, both chimpanzees and monkeys that traded tokens for cucumbers responded negatively once they saw that other animals were getting a tastier treat — grapes — for the same price. They made angry sounds and sometimes flung away the cucumbers or their tokens, reported Sarah Brosnan, a psychologist at Georgia State University.

“I think animals do experience regret, as defined as the recognition of a missed opportunity,” Dr. Brosnan said. “In the wild, these abilities may help them to recognize when they should forage in different areas or find a different cooperative partner who will share the spoils more equitably.”

No one knows, of course, exactly how this sense of regret affects an animal emotionally. When we see a dog slouching and bowing, we like to assume he’s suffering the way we do after a faux pas, but maybe he’s just sending a useful signal: I messed up.

“It’s possible that this kind of social signal in animals could have evolved without the conscious experience of regret,” said Sam Gosling, a psychologist at the University of Texas, Austin. “But it seems more plausible that there is some kind of conscious experience even if it’s not the same kind of thing that you or I feel.”

Marc Bekoff, a behavioral ecologist at the University of Colorado, says he’s convinced that animals feel emotional pain for their mistakes and missed opportunities. In “Wild Justice,” a new book he wrote with the philosopher Jessica Pierce, Dr. Bekoff reports on thousands of hours of observation of coyotes in the wild as well as free-running domesticated dogs.

When a coyote recoiled after being bitten too hard while playing, the offending coyote would promptly bow to acknowledge the mistake, Dr. Bekoff said. If a coyote was shunned for playing unfairly, he would slouch around with his ears slightly back, head cocked and tail down, tentatively approaching and then withdrawing from the other animals. Dr. Bekoff said the apologetic coyotes reminded him of the unpopular animals skulking at the perimeter of a dog park.

“These animals are not as emotionally sophisticated as humans, but they have to know what’s right and wrong because it’s the only way their social groups can work,” he said. “Regret is essential, especially in the wild. Humans are very forgiving to their pets, but if a coyote in the wild gets a reputation as a cheater, he’s ignored or ostracized, and he ends up leaving the group.” Once the coyote is on his own, Dr. Bekoff discovered, the coyote’s risk of dying young rises fourfold.

If our pets realize what soft touches we are, perhaps their regret is mostly just performance art to sucker us. But I like to think that some of the ruefulness is real, and that researchers will one day compile a list of the Top 10 Pet Regrets. (You can make nominations at TierneyLab, at nytimes.com/tierneylab.) At the very least, I’d like to see researchers tackle a few of the great unanswered questions:

When you’re playing fetch with a dog, how much regret does he suffer when he gives you back the ball? As much as when he ends the game by hanging on to the ball?

Do animal vandals feel any moral qualms? After seeing rugs, suitcases and furniture destroyed by my pets, I’m not convinced that evolution has endowed animals with any reliable sense of property rights. But I’m heartened by Eugene Linden’s stories of contrite vandals in his book on animal behavior, “The Parrot’s Lament.”

He tells of a young tiger that, after tearing up all the newly planted trees at a California animal park, covered his eyes with his paws when the zookeeper arrived. And there were the female chimpanzees at the Tulsa Zoo that took advantage of a renovation project to steal the painters’ supplies, don gloves and paint their babies solid white. When confronted by their furious keeper, the mothers scurried away, then returned with peace offerings and paint-free babies.

How awkward is the King Kong Syndrome? Both male and female gorillas have become so fond of their human keepers that they’ve made sexual overtures — one even took to dragging his keeper by her hair. After the inevitable rebuff, do they regret ruining a beautiful friendship?

Do pet cats ever regret anything?

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
Younger women good for you
« Reply #69 on: June 03, 2009, 10:07:42 PM »
Men 'live longer' if they marry a younger woman
Men are likely to live longer if they marry a younger woman, new research suggests.


By Murray Wardrop
Published: 7:31AM BST 02 Jun 2009

A man's chances of dying early are cut by a fifth if his bride is between 15 and 17 years his junior.

The risk of premature death is reduced by 11 per cent if he marries a woman seven to nine years younger.

The study at Germany's Max Planck Institute also found that men marrying older women are more likely to die early.

The results suggest that women do not experience the same benefits of marrying a toy boy or a sugar daddy.

Wives with husbands older or younger by between seven and nine years increase their chances of dying early by 20 per cent.

This rises to 30 per cent if the age difference is close to 15 and 17 years.

Scientists say the figures for men may be the result of natural selection – that only the healthiest, most successful older men are able to attract younger mates.

"Another theory is that a younger woman will care for a man better and therefore he will live longer," said institute spokesman Sven Drefahl.

The study examined deaths between 1990 and 2005 for the entire population of Denmark.

On average in Europe, most men marry women around three years younger.

matinik

  • Newbie
  • *
  • Posts: 41
    • View Profile
Re: Evolutionary biology/psychology
« Reply #70 on: June 05, 2009, 03:03:17 PM »

Boys with 'Warrior Gene' More Likely to Join Gangs

LiveScience.com

Boys who have a so-called "warrior gene" are more likely to join gangs and also more likely to be among the most violent members and to use weapons, a new study finds.

"While gangs typically have been regarded as a sociological phenomenon, our investigation shows that variants of a specific MAOA gene, known as a 'low-activity 3-repeat allele,' play a significant role," said biosocial criminologist Kevin M. Beaver of Florida State University.

In 2006, the controversial warrior gene was implicated in the violence of the indigenous Maori people in New Zealand, a claim that Maori leaders dismissed.

But it's no surprise that genes would be involved in aggression. Aggression is a primal emotion like many others, experts say, and like cooperation, it is part of human nature, something that's passed down genetically. And almost all mammals are aggressive in some way or another, said Craig Kennedy, professor of special education and pediatrics at Vanderbilt University in Tennessee, whose research last year suggested that humans crave violence just like they do sex, food or drugs.

"Previous research has linked low-activity MAOA variants to a wide range of antisocial, even violent, behavior, but our study confirms that these variants can predict gang membership," says Beaver, the Florida State researcher. "Moreover, we found that variants of this gene could distinguish gang members who were markedly more likely to behave violently and use weapons from members who were less likely to do either."

The MAOA gene affects levels of neurotransmitters such as dopamine and serotonin that are related to mood and behavior, and those variants that are related to violence are hereditary, according to a statement from the university.

The new study examined DNA data and lifestyle information drawn from more than 2,500 respondents to the National Longitudinal Study of Adolescent Health. Beaver and colleagues from Florida State, Iowa State and Saint Louis universities will detail their findings in a forthcoming issue of the journal Comprehensive Psychiatry.

A separate study at Brown University from earlier this year found that individuals with the warrior gene display higher levels of aggression in response to provocation.

Over networked computers, 78 test subjects were asked to cause physical pain to an opponent they believed had taken money from them by administering varying amounts of hot sauce. While the results were not dramatic, low-activity MAOA subjects displayed slightly higher levels of aggression overall, the researchers said.

The Brown University results, published in the journal Proceedings of the National Academy of Sciences, support previous research suggesting that MAOA influences aggressive behavior, the scientists said.

I wonder if this "warrior gene" is now being studied and/or synthesized by some egghead to be applied as some sort of supersoldier serum
(shades of Capt. America!) :-D
« Last Edit: June 05, 2009, 03:10:28 PM by matinik »

rachelg

  • Guest
Women: The choosier sex?
« Reply #71 on: July 08, 2009, 07:42:48 PM »
http://www.salon.com/mwt/broadsheet/feature/2009/07/08/mate_selection/index.html
 
I recently listened to a great podcast at EconTalk, and Alan Wolfe made the point that some evolutionary biologists are really atheist Calvinists -- everything is predestined and there is no free will.
 http://www.econtalk.org/archives/2009/05/wolfe_on_libera.html

I'm not against all evolutionary biology, but I think it sometimes shares the pitfall of all social sciences -- taking three Post-it notes' worth of data and writing a textbook's worth of material.

Women: The choosier sex?
That isn't the case in speed-dating where ladies approach men, says a new study

Tracy Clark-Flory
http://www.salon.com/mwt/broadsheet/feature/2009/07/08/mate_selection/print.html

Jul. 08, 2009 |

You've likely encountered this question many a time before: When it comes to sex, why do men do the chasing while women do the choosing? Maybe the query was first answered by your mother: Men have to fight for women because it's the fairer sex that gets pregnant, gives birth and does all the work of raising the kids! Perhaps at some point you got the sober evo-psych explanation: Females are more selective because they bear the greater reproductive burden. Or, maybe you're more familiar with pickup artist parlance: Chicks are choosier 'cause they're the ones who get knocked up. Most of us have heard the same answer put a number of different ways -- but now a team of researchers are casting doubt on our assumption about the push-pull of human courtship.

In a new study from Northwestern University, 350 college-age men and women attended speed-dating events. In half of the games of romantical chairs, the guys went from girl to girl; in the other half, the girls went from guy to guy. Each pair got four minutes to chat, after which they evaluated their interest in each other. When it came to the events where men worked the room, everyone performed just as expected: The men were less selective than the women. But when the usual speed-dating routine was turned on its head and the women made the rounds, the guys were more selective and the ladies were less picky.

The study's press release puts the findings simply: "Regardless of gender, the participants who rotated experienced greater romantic desire for and chemistry with their partners, compared to participants who sat throughout the event." Researcher Eli J. Finkel says the results suggest that research revealing women as the choosier sex might be best explained by "the roles men and women play in the opening seconds of new romantic contacts."

Now, don't go discarding the theory of human sexual selection just yet! Note that the study doesn't show that the sexes are equally selective. It does, however, raise some interesting questions: Could the disparity in sexual selectivity be a result of nurture (as in, "go out and get some nookie, you stud!") rather than nature ("man need sex -- grunt, scratch")? Are evolutionary tendencies easily overthrown by simple social engineering? One thing is for sure: This study will set the so-called seduction community abuzz with debate on how to recreate this speed-dating reversal in everyday life.

-- Tracy Clark-Flory

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
Re: Evolutionary biology/psychology
« Reply #72 on: July 09, 2009, 07:15:23 AM »
Wouldn't birth control have a lot to do with it?

Also, are current behaviors shown to be evolutionarily successful? Intuitively it seems to me that there is a correlation between them and birth rates below population maintenance.

rachelg

  • Guest
Speed Dating and the end of the world
« Reply #73 on: July 09, 2009, 06:49:14 PM »
Marc,
The birth control issue would have affected the moving men and women and the non-moving speed-dating men and women the same.
 
Are you saying the moving men and women would not have been affected without birth control? It is possible but probably unknowable.
 
Birth control being convenient, legal, safe and effective (though half of all pregnancies are unplanned) is a recent development. Birth control (onanism) has existed at least since biblical times. There have also been herbs, pessaries, etc., used for birth control and/or abortion for a very long time.


I do not think the problem with our society is either birth control or women chasing men. I think the problem is how our society defines success and happiness and what is valuable. Money is often treated as the highest good and seen as more valuable than relationships. Children are seen as a drain and not worth the effort.
 
We don't exactly praise those who get married young and have large families.
 
I do know religious women who control the size and timing of their families with birth control and have large families.

It would seem better to me that children mostly be planned or wanted additions to families. I do realize that unplanned children often end up being the best thing that ever happened to some people.

Everything has consequences, but I am strongly in favor of both families and family planning.

Also, why do we even care about the fate of our genes? Why is what is best for our genes necessarily best for us?



I would be content to blame all of society's ills on speed dating. I have never participated, but I have heard it was awful.

Body-by-Guinness

  • Guest
Evolution & Level Playing Fields
« Reply #74 on: July 11, 2009, 09:02:13 AM »
I think the jury's still out on the evolutionary success of safe, effective, easily reversible contraception. There's a lot of research out there that associates lower birthrates caused by contraception with an increased standard of living, though which is cause and which is effect is argued. Until just the past few decades, most humans had what would be considered by today's standards limited choices for leisure, while subsistence demanded that time and energy be devoted primarily to issues linked directly to survival. Children past the age of 5 or so were a labor asset, while below that age their mortality was high. Entertainment options were few and far between, with horizontal recreation being one of the few diversions consistently and easily available.

All that has changed with the introduction of modern contraception, cheap consumer goods, and the resources to obtain 'em. Indeed, I think the margin worth keeping an eye on here is the first world/third world divide: will the downward trend in the price of relative luxury items continue, the leisure time to pursue them increase, or will environmental and religious zealots maintain or roll back the first world status quo and, in doing so, preserve the nasty, brutish and short third world status quo?

As mentioned, this will be interesting to watch. Think long experience has demonstrated that handing third world kleptocracies money does little to increase the standard of living of their citizens, though the falling cost of consumer goods makes products accessible in all sorts of unlikely places. Will environmental zealots via regulation or religious zealots via proscription derail the trickle-down of consumer goods and resultant rise in the standard of living? Don't think we'll begin to have an answer to the evolutionary impact question until the playing field is more level, and think first world/third world margins will continue to be conflict points until a ubiquitous distribution of consumer goods is achieved.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
Security, Group Size, and the Human Brain
« Reply #75 on: July 15, 2009, 04:27:37 AM »
      Security, Group Size, and the Human Brain



If the size of your company grows past 150 people, it's time to get name
badges. It's not that larger groups are somehow less secure, it's just
that 150 is the cognitive limit to the number of people a human brain
can maintain a coherent social relationship with.

Primatologist Robin Dunbar derived this number by comparing neocortex --
the "thinking" part of the mammalian brain -- volume with the size of
primate social groups. By analyzing data from 38 primate genera and
extrapolating to the human neocortex size, he predicted a human "mean
group size" of roughly 150.

This number appears regularly in human society; it's the estimated size
of a Neolithic farming village, the size at which Hittite settlements
split, and the basic unit in professional armies from Roman times to the
present day. Larger group sizes aren't as stable because their members
don't know each other well enough. Instead of thinking of the members as
people, we think of them as groups of people. For such groups to
function well, they need externally imposed structure, such as name badges.

Of course, badges aren't the only way to determine in-group/out-group
status. Other markers include insignia, uniforms, and secret handshakes.
They have different security properties and some make more sense than
others at different levels of technology, but once a group reaches 150
people, it has to do something.

More generally, there are several layers of natural human group size
that increase with a ratio of approximately three: 5, 15, 50, 150, 500,
and 1500 -- although, really, the numbers aren't as precise as all that,
and groups that are less focused on survival tend to be smaller. The
layers relate to both the intensity and intimacy of relationship and the
frequency of contact.

The smallest, three to five, is a "clique": the number of people from
whom you would seek help in times of severe emotional distress. The
12-to-20-person group is the "sympathy group": people with whom you have
special ties. After that, 30 to 50 is the typical size of
hunter-gatherer overnight camps, generally drawn from the same pool of
150 people. No matter what size company you work for, there are only
about 150 people you consider to be "co-workers." (In small companies,
Alice and Bob handle accounting. In larger companies, it's the
accounting department -- and maybe you know someone there personally.)
The 500-person group is the "megaband," and the 1,500-person group is
the "tribe." Fifteen hundred is roughly the number of faces we can put
names to, and the typical size of a hunter-gatherer society.

These numbers are reflected in military organization throughout history:
squads of 10 to 15 organized into platoons of three to four squads,
organized into companies of three to four platoons, organized into
battalions of three to four companies, organized into regiments of three
to four battalions, organized into divisions of two to three regiments,
and organized into corps of two to three divisions.
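
As a back-of-the-envelope check, those unit sizes compound to roughly the layers listed earlier; a minimal sketch, with midpoint unit sizes assumed purely for illustration:

squad = 12               # "squads of 10 to 15"
subunits = 3.5           # "three to four" subunits per level
platoon = subunits * squad       # ~42, near the 30-50 overnight-camp size
company = subunits * platoon     # ~147, the ~150 layer
battalion = subunits * company   # ~515, near the ~500 "megaband"
regiment = subunits * battalion  # ~1801, near the ~1500 "tribe"

for name, size in [("squad", squad), ("platoon", platoon), ("company", company),
                   ("battalion", battalion), ("regiment", regiment)]:
    print(f"{name:9s} ~{size:5.0f}")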

Coherence can become a real problem once organizations get above about
150 in size.  So as group sizes grow across these boundaries, they have
more externally imposed infrastructure -- and more formalized security
systems. In intimate groups, pretty much all security is ad hoc.
Companies smaller than 150 don't bother with name badges; companies
greater than 500 hire a guard to sit in the lobby and check badges.  The
military have had centuries of experience with this under rather trying
circumstances, but even there the real commitment and bonding invariably
occurs at the company level. Above that you need to have rank imposed by
discipline.

The whole brain-size comparison might be bunk, and a lot of evolutionary
psychologists disagree with it. But certainly security systems become
more formalized as groups grow larger and their members less known to
each other. When do more formal dispute resolution systems arise: town
elders, magistrates, judges? At what size boundary are formal
authentication schemes required? Small companies can get by without the
internal forms, memos, and procedures that large companies require; when
do these tend to appear? How does punishment formalize as group size
increases? And how do all these things affect group coherence? People act
differently on social networking sites like Facebook when their list of
"friends" grows larger and less intimate. Local merchants sometimes let
known regulars run up tabs. I lend books to friends with much less
formality than a public library. What examples have you seen?

An edited version of this essay, without links, appeared in the
July/August 2009 issue of IEEE Security & Privacy.

A copy of this essay, with all embedded links, is here:
http://www.schneier.com/blog/archives/2009/07/security_group.html

Body-by-Guinness

  • Guest
Crowd Control Rethinking
« Reply #76 on: July 19, 2009, 07:12:23 PM »
Why cops should trust the wisdom of the crowds

17 July 2009 by Michael Bond
Magazine issue 2717.

[Image caption: Would letting crowds manage themselves be a better alternative? (Image: Simon Dack/Brighton Argus)]
THE protests that took place on the streets of London on the eve of the G20 summit in April lived up to many people's expectations. Around 2000 protestors turned up, and were heavily marshalled by police. There was a bit of trouble, but the police tactics - specifically, the decision to corral the entire crowd into a small area near the Bank of England, an approach known as "kettling" - kept a lid on the violence.

That, at least, is the official version of events, and it reflects a belief about crowds that is shared by police, governments and to a large degree the general public across the world: that they are hotbeds of trouble and must be contained. Trouble is seen as especially likely when something goes wrong at a large gathering. Under such circumstances, the expectation is that the crowd will lose its head and all hell will break loose.

The "unruly mob" concept is usually taken as read and used as the basis for crowd control measures and evacuation procedures across the world. Yet it is almost entirely a myth. Research into how people behave at demonstrations, sports events, music festivals and other mass gatherings shows not only that crowds nearly always act in a highly rational way, but also that when facing an emergency, people in a crowd are more likely to cooperate than panic. Paradoxically, it is often actions such as kettling that lead to violence breaking out. Often, the best thing authorities can do is leave a crowd to its own devices.

"In many ways, crowds are the solution," says psychologist Stephen Reicher, who studies group behaviour at the University of St Andrews, UK. Rather than being prone to irrational behaviour and violence, members of a crowd undergo a kind of identity shift that drives them to act in the best interests of themselves and everyone around them. This identity shift is often strongest in times of danger or threat. "The 'mad mob' is not an explanation, but a fantasy," says Reicher.

All this has profound implications for policing and the management of public events. "The classic view of crowd psychology, which is still widespread, talks about the loss of selfhood, leaving people at best out of control and at worst generically violent," says Reicher. "That is not only wrong, it's also counterproductive. If you believe all crowds are irrational, and that even rational people are liable to be dangerous in them, then you'll treat them accordingly, often harshly, and stop people doing things they have a right to do. And that can lead to violence."

All that said, there's no question that being part of a group can sometimes lead people to do appalling things that they would usually abhor. Examples of crowd-fuelled violence abound, from Hutu death-squads in the Rwandan genocide to racist lynch mobs in the southern states of the US. Likewise, the cover crowds offer can attract individuals who are intent on causing trouble. We can all too easily be led astray by the influence of others (New Scientist, 14 April 2007, p 42).

However, crowd violence is actually extremely rare. "If 100 football matches happen on a Saturday and there is violence at one of them, we know which will appear on the front pages the next day," says Reicher. Widespread panic during crowd emergencies is also uncommon and only occurs in special circumstances, such as when escape routes start to close, says Tricia Wachtendorf of the Disaster Research Center at the University of Delaware in Newark. In most situations - particularly those involving large numbers of strangers - the crowd ends up behaving remarkably sensibly.

Evidence against the irrationality of crowds has been building for some time, largely from studies of emergencies. In a study to be published in the British Journal of Social Psychology (DOI: 10.1348/014466608X357893), a team led by John Drury at the University of Sussex, UK, talked to survivors of 11 crowd-based disasters or near-disasters, including the 1989 Hillsborough stadium crush that killed 96 soccer fans, and a free concert by Fatboy Slim on Brighton beach in 2002 that was swamped by 250,000 people, four times as many as expected, and led to around 100 injuries. In each case, most interviewees recalled a strong sense of unity with those around them as a result of their shared experience. Rather than being competitive or antagonistic, people did their best to be orderly and courteous - and went out of their way to help strangers. Researchers think that without such cooperation, more people could have been injured and killed.

The team found a similar pattern of solidarity and cooperative behaviour in a study of the suicide attacks in London on 7 July 2005, which led to crowds of commuters being trapped underground (International Journal of Mass Emergencies and Disasters, vol 27, p 66). "The public in general and crowds specifically are more resilient than they are given credit for," says Drury. During disasters, governments should treat them as the "fourth emergency service", he adds.

If anything, a crowd's disinclination to panic can work against it. "It's often difficult to get people to move and act," says Wachtendorf. An analysis of the 9/11 attacks on the World Trade Center, for example, performed by the US National Institute of Standards and Technology, showed that most people delayed for several minutes after the planes struck, making phone calls, filing papers or shutting down their computers before attempting to escape.

Having established that unruly mob behaviour is the exception, researchers are now getting to grips with the psychological processes that can transform hundreds or thousands of individuals into a unit. The key, according to Drury, Reicher and others, is the recognition that you share something important with those around you, which forces you to identify with them in a meaningful way. "It is a cognitive shift, a difference in self-appraisal, in understanding who you are and how you stand in relation to others," says Reicher.

The trigger is often a dramatic situational change such as a fire in a public place or aggressive police tactics at a protest march, but group solidarity can also arise from seemingly inconsequential circumstances, such as being stuck together in a train carriage. Reicher describes it as a shift towards intimacy: "People start agreeing with each other, trusting each other," he says. At the point when members of a crowd start to share a common social identity, the crowd goes from being a mere physical entity to a psychological unit, according to Clifford Stott at the University of Liverpool, UK, who specialises in the behaviour of soccer crowds.

United by circumstances

A study carried out by Drury, Reicher and David Novelli of the University of Sussex, to be published in the British Journal of Social Psychology, provides a graphic illustration of how quickly and easily we throw ourselves into "psychological crowds" united by circumstances. The researchers divided a group of volunteers into two according to whether they overestimated or underestimated the number of dots in a pattern - a deliberately arbitrary distinction. They then told each person that they would be talking to someone either from their own group or the other, and that they should arrange some chairs in preparation. Those who had been told they would be talking to a member of their own group placed the chairs on average 20 per cent closer together than those who had been told they would be talking to a member of the other group (DOI: 10.1348/014466609X449377). "We want to be closer to fellow group members, not only metaphorically but also physically, and physical proximity is a precondition for any kind of action coordination," says Reicher.

The fluidity of group psychology was also demonstrated in a 2005 experiment on English soccer fans by Mark Levine at the University of Lancaster, UK. He found that supporters of Manchester United who had been primed to think about how they felt about their team were significantly more likely to help an injured stranger if he was wearing a Manchester United shirt, rather than an unbranded shirt or one of rival team Liverpool. However, fans who were primed to think about their experience of being a football fan in general were equally likely to help strangers in Liverpool shirts and Manchester United shirts, but far less likely to help someone wearing an unbranded one (Personality and Social Psychology Bulletin, vol 31, p 443). This shows the potency of group membership, and also how fluid the boundaries can be.

This also happens in the real world, resulting in group bonding which, though transient, can override social, racial and political differences. A good example is the poll tax riots in London in 1990, when protestors from a wide spectrum of backgrounds and interest groups joined forces in the face of what they saw as overly aggressive police tactics. "You had people who were previously antagonistic - anarchists, conservatives, class-war activists - who in the context of the baton charges were united in common group membership," says Stott. This temporary homogenisation is common: think of the cohesiveness of soccer fans supporting an international team who might be hostile when supporting their own local clubs.

Not everyone agrees. One criticism is that the cohesiveness of crowds is superficial, and that people preferentially draw close to those they know or are related to and remain far less attached to strangers around them. Anthony Mawson, an epidemiologist at the University of Mississippi Medical Center in Jackson, maintains that people's typical response in times of threat is to seek out people familiar to them (Public Health Reports, vol 123, p 555). Strangers can develop a shared identity only when they are together "for long enough that a sense of camaraderie develops among them", he says.

Yet studies by Drury and others suggest the bonds that form between strangers in crowds are very robust, and although people might help family members first in an emergency, they will also help others irrespective of their connection to them. "What is really of interest," says Drury, "is why so many people - strangers without any formal organisation, hierarchy or means of communication - join together and act as one."

So where does this inclination to empathise so strongly with others on the basis of shared fate alone come from? Nobody is really sure, though it appears to be uniquely human. As Mark van Vugt at the University of Kent, UK, and Justin Park at the University of Groningen in the Netherlands point out, no other species appears to have the capacity to form rapid emotional attachments to large, anonymous groups (The Psychology of Prosocial Behaviour, published by Wiley-Blackwell next month). The tendency of people to form strong social bonds while experiencing terror together also appears to be a universal human trait. "This is well known in traditional societies where boys going through puberty rituals in the transition to manhood are often put through frightening experiences," says Robin Dunbar, who studies the evolution of sociality at the University of Oxford.

Control and contain

What are the lessons from all this? One of the most important is that the current approach to managing crowds, which is all about control and containment, can be counterproductive. Police tend to assume that people in crowds are prone to random acts of violence and disorder, and treat them accordingly. But aggressive policing is likely to trigger an aggressive response as the crowd reacts collectively against the external threat. This is why many researchers consider kettling to be a bad idea. "You're treating the crowd indiscriminately, and that can change the psychology of the crowd, shifting it towards rather than away from violence," says Stott. He has found that low-profile policing can significantly reduce the aggressiveness of football crowds, and that if left alone they will usually police themselves.

Emergency services should also take note: in a situation such as a terrorist attack or fire, a crowd left to its own devices will often find the best solution. Attempts to intervene to prevent people panicking, such as restricting their movements, could make panic more likely. The key, says Wachtendorf, is to give crowds as much information as possible, as they are likely to use it wisely.

If you find yourself in a crowd emergency, the worst thing you can do is resist the group mentality. One of Drury's conclusions from his research into disasters is that the more people try to act individualistically - which results in competitive and disruptive behaviour - the lower everyone's chances of survival are. This is what some researchers believe happened in August 1985 when a British Airtours plane caught fire on the runway at Manchester Airport, UK, killing 55. Non-cooperative behaviour among passengers may have made it harder for people to reach the exits.

It can be hard to shake off the idea of crowds as inherently violent or dangerous, but it is worth remembering that they have also been responsible for just about every major societal change for the good in recent history, from the success of the US civil rights movement to the overthrowing of communist regimes in eastern Europe. Good leadership and individual heroics are all very well, but if you're looking for a revolution - or even just a good way out of a difficult situation - what you really need, it seems, is a crowd.

Michael Bond is a New Scientist consultant in London

http://www.newscientist.com/article/mg20327171.400-why-cops-should-trust-the-wisdom-of-the-crowds.html?full=true

rachelg

  • Guest
Seeking/How the brain hard-wires us to love Google, Twitter, and texting.
« Reply #77 on: August 16, 2009, 07:45:12 AM »
I almost always tend to think technology advances are making the world a much better place, but this was a little frightening.

Science
Seeking
How the brain hard-wires us to love Google, Twitter, and texting. And why that's dangerous.
By Emily Yoffe
Posted Wednesday, Aug. 12, 2009, at 5:40 PM ET

Seeking. You can't stop doing it. Sometimes it feels as if the basic drives for food, sex, and sleep have been overridden by a new need for endless nuggets of electronic information. We are so insatiably curious that we gather data even if it gets us in trouble. Google searches are becoming a cause of mistrials as jurors, after hearing testimony, ignore judges' instructions and go look up facts for themselves. We search for information we don't even care about. Nina Shen Rastogi confessed in Double X, "My boyfriend has threatened to break up with me if I keep whipping out my iPhone to look up random facts about celebrities when we're out to dinner." We reach the point that we wonder about our sanity. Virginia Heffernan in the New York Times said she became so obsessed with Twitter posts about the Henry Louis Gates Jr. arrest that she spent days "refreshing my search like a drugged monkey."

We actually resemble nothing so much as those legendary lab rats that endlessly pressed a lever to give themselves a little electrical jolt to the brain. While we tap, tap away at our search engines, it appears we are stimulating the same system in our brains that scientists accidentally discovered more than 50 years ago when probing rat skulls.

In 1954, psychologist James Olds and his team were working in a laboratory at McGill University, studying how rats learned. They would stick an electrode in a rat's brain and, whenever the rat went to a particular corner of its cage, would give it a small shock and note the reaction. One day they unknowingly inserted the probe in the wrong place, and when Olds tested the rat, it kept returning over and over to the corner where it received the shock. He eventually discovered that if the probe was put in the brain's lateral hypothalamus and the rats were allowed to press a lever and stimulate their own electrodes, they would press until they collapsed.

Olds, and everyone else, assumed he'd found the brain's pleasure center (some scientists still think so). Later experiments done on humans confirmed that people will neglect almost everything—their personal hygiene, their family commitments—in order to keep getting that buzz.

But to Washington State University neuroscientist Jaak Panksepp, this supposed pleasure center didn't look very much like it was producing pleasure. Those self-stimulating rats, and later those humans, did not exhibit the euphoric satisfaction of creatures eating Double Stuf Oreos or repeatedly having orgasms. The animals, he writes in Affective Neuroscience: The Foundations of Human and Animal Emotions, were "excessively excited, even crazed." The rats were in a constant state of sniffing and foraging. Some of the human subjects described feeling sexually aroused but didn't experience climax. Mammals stimulating the lateral hypothalamus seem to be caught in a loop, Panksepp writes, "where each stimulation evoked a reinvigorated search strategy" (and Panksepp wasn't referring to Bing).

It is an emotional state Panksepp tried many names for: curiosity, interest, foraging, anticipation, craving, expectancy. He finally settled on seeking. Panksepp has spent decades mapping the emotional systems of the brain he believes are shared by all mammals, and he says, "Seeking is the granddaddy of the systems." It is the mammalian motivational engine that each day gets us out of the bed, or den, or hole to venture forth into the world. It's why, as animal scientist Temple Grandin writes in Animals Make Us Human, experiments show that animals in captivity would prefer to have to search for their food than to have it delivered to them.

For humans, this desire to search is not just about fulfilling our physical needs. Panksepp says that humans can get just as excited about abstract rewards as tangible ones. He says that when we get thrilled about the world of ideas, about making intellectual connections, about divining meaning, it is the seeking circuits that are firing.

The juice that fuels the seeking system is the neurotransmitter dopamine. The dopamine circuits "promote states of eagerness and directed purpose," Panksepp writes. It's a state humans love to be in. So good does it feel that we seek out activities, or substances, that keep this system aroused—cocaine and amphetamines, drugs of stimulation, are particularly effective at stirring it.

Ever find yourself sitting down at the computer just for a second to find out what other movie you saw that actress in, only to look up and realize the search has led to an hour of Googling? Thank dopamine. Our internal sense of time is believed to be controlled by the dopamine system. People with hyperactivity disorder have a shortage of dopamine in their brains, which a recent study suggests may be at the root of the problem. For them even small stretches of time seem to drag. An article by Nicholas Carr in the Atlantic last year, "Is Google Making Us Stupid?" speculates that our constant Internet scrolling is remodeling our brains to make it nearly impossible for us to give sustained attention to a long piece of writing. Like the lab rats, we keep hitting "enter" to get our next fix.

University of Michigan professor of psychology Kent Berridge has spent more than two decades figuring out how the brain experiences pleasure. Like Panksepp, he, too, has come to the conclusion that what James Olds' rats were stimulating was not their reward center. In a series of experiments, he and other researchers have been able to tease apart that the mammalian brain has separate systems for what Berridge calls wanting and liking.

Wanting is Berridge's equivalent for Panksepp's seeking system. It is the liking system that Berridge believes is the brain's reward center. When we experience pleasure, it is our own opioid system, rather than our dopamine system, that is being stimulated. This is why the opiate drugs induce a kind of blissful stupor so different from the animating effect of cocaine and amphetamines. Wanting and liking are complementary. The former catalyzes us to action; the latter brings us to a satisfied pause. Seeking needs to be turned off, if even for a little while, so that the system does not run in an endless loop. When we get the object of our desire (be it a Twinkie or a sexual partner), we engage in consummatory acts that Panksepp says reduce arousal in the brain and temporarily, at least, inhibit our urge to seek.

But our brains are designed to more easily be stimulated than satisfied. "The brain seems to be more stingy with mechanisms for pleasure than for desire," Berridge has said. This makes evolutionary sense. Creatures that lack motivation, that find it easy to slip into oblivious rapture, are likely to lead short (if happy) lives. So nature imbued us with an unquenchable drive to discover, to explore. Stanford University neuroscientist Brian Knutson has been putting people in MRI scanners and looking inside their brains as they play an investing game. He has consistently found that the pictures inside our skulls show that the possibility of a payoff is much more stimulating than actually getting one.

Just how powerful (and separate) wanting is from liking is illustrated in animal experiments. Berridge writes that studies have shown that rats whose dopamine neurons have been destroyed retain the ability to walk, chew, and swallow but will starve to death even if food is right under their noses because they have lost the will to go get it. Conversely, Berridge discovered that rats with a mutation that floods their brains with dopamine learned more quickly than normal rats how to negotiate a runway to reach the food. But once they got it, they didn't find the food more pleasurable than the nonenhanced rats. (No, the rats didn't provide a Zagat rating; scientists measure rats' facial reactions to food.)

That study has implications for drug addiction and other compulsive behaviors. Berridge has proposed that in some addictions the brain becomes sensitized to the wanting cycle of a particular reward. So addicts become obsessively driven to seek the reward, even as the reward itself becomes progressively less rewarding once obtained. "The dopamine system does not have satiety built into it," Berridge explains. "And under certain conditions it can lead us to irrational wants, excessive wants we'd be better off without." So we find ourselves letting one Google search lead to another, while often feeling the information is not vital and knowing we should stop. "As long as you sit there, the consumption renews the appetite," he explains.

Actually all our electronic communication devices—e-mail, Facebook feeds, texts, Twitter—are feeding the same drive as our searches. Since we're restless, easily bored creatures, our gadgets give us in abundance qualities the seeking/wanting system finds particularly exciting. Novelty is one. Panksepp says the dopamine system is activated by finding something unexpected or by the anticipation of something new. If the rewards come unpredictably—as e-mail, texts, updates do—we get even more carried away. No wonder we call it a "CrackBerry."

The system is also activated by particular types of cues that a reward is coming. In order to have the maximum effect, the cues should be small, discrete, specific—like the bell Pavlov rang for his dogs. Panksepp says a way to drive animals into a frenzy is to give them only tiny bits of food: This simultaneously stimulating and unsatisfying tease sends the seeking system into hyperactivity. Berridge says the "ding" announcing a new e-mail or the vibration that signals the arrival of a text message serves as a reward cue for us. And when we respond, we get a little piece of news (Twitter, anyone?), making us want more. These information nuggets may be as uniquely potent for humans as a Froot Loop to a rat. When you give a rat a minuscule dose of sugar, it engenders "a panting appetite," Berridge says—a powerful and not necessarily pleasant state.

If humans are seeking machines, we've now created the perfect machines to allow us to seek endlessly. This perhaps should make us cautious. In Animals in Translation, Temple Grandin writes of driving two indoor cats crazy by flicking a laser pointer around the room. They wouldn't stop stalking and pouncing on this ungraspable dot of light—their dopamine system pumping. She writes that no wild cat would indulge in such useless behavior: "A cat wants to catch the mouse, not chase it in circles forever." She says "mindless chasing" makes an animal less likely to meet its real needs "because it short-circuits intelligent stalking behavior." As we chase after flickering bits of information, it's a salutary warning.
Emily Yoffe is the author of What the Dog Did: Tales From a Formerly Reluctant Dog Owner. You can send your Human Guinea Pig suggestions or comments to emilyyoffe@hotmail.com.

Article URL: http://www.slate.com/id/2224932/

Body-by-Guinness

  • Guest
An Appendix isn't a Vestige?
« Reply #78 on: August 25, 2009, 07:37:40 AM »
Conventional wisdom does have a habit of getting turned on its head:

The Appendix: Useful and in Fact Promising
By Charles Q. Choi, Special to LiveScience
posted: 24 August 2009 07:05 am ET
The body's appendix has long been thought of as nothing more than a worthless evolutionary artifact, good for nothing save a potentially lethal case of inflammation.

Now researchers suggest the appendix is a lot more than a useless remnant. Not only was it recently proposed to actually possess a critical function, but scientists now find it appears in nature a lot more often than before thought. And it's possible some of this organ's ancient uses could be recruited by physicians to help the human body fight disease more effectively.

In a way, the idea that the appendix is an organ whose time has passed has itself become a concept whose time is over.

"Maybe it's time to correct the textbooks," said researcher William Parker, an immunologist at Duke University Medical Center in Durham, N.C. "Many biology texts today still refer to the appendix as a 'vestigial organ.'"

Slimy sac

The vermiform appendix is a slimy dead-end sac that hangs between the small and large intestines. No less than Charles Darwin first suggested that the appendix was a vestigial organ from an ancestor that ate leaves, theorizing that it was the evolutionary remains of a larger structure, called a cecum, which once was used by now-extinct predecessors for digesting food.

"Everybody likely knows at least one person who had to get their appendix taken out — slightly more than 1 in 20 people do — and they see there are no ill effects, and this suggests that you don't need it," Parker said.

However, Parker and his colleagues recently suggested that the appendix still served as a vital safehouse where good bacteria could lie in wait until they were needed to repopulate the gut after a nasty case of diarrhea. Past studies had also found the appendix can help make, direct and train white blood cells.

Now, in the first investigation of the appendix over the ages, Parker explained they discovered that it has been around much longer than anyone had suspected, hinting that it plays a critical function.

"The appendix has been around for at least 80 million years, much longer than we would estimate if Darwin's ideas about the appendix were correct," Parker said.

Moreover, the appendix appears in nature much more often than previously acknowledged. It has evolved at least twice, once among Australian marsupials such as the wombat and another time among rats, lemmings, meadow voles, Cape dune mole-rats and other rodents, as well as humans and certain primates.

"When species are divided into groups called 'families,' we find that more than 70 percent of all primate and rodent groups contain species with an appendix," Parker said.

Several living species, including several lemurs, certain rodents and the scaly-tailed flying squirrel, still have an appendix attached to a large cecum, which is used in digestion. Darwin had thought appendices appeared in only a small handful of animals.

"We're not saying that Darwin's idea of evolution is wrong — that would be absurd, as we're using his ideas on evolution to do this work," Parker told LiveScience. "It's just that Darwin simply didn't have the information we have now."

He added, "If Darwin had been aware of the species that have an appendix attached to a large cecum, and if he had known about the widespread nature of the appendix, he probably would not have thought of the appendix as a vestige of evolution."

What causes appendicitis?

Darwin was also not aware that appendicitis, or a potentially deadly inflammation of the appendix, is not due to a faulty appendix, but rather to cultural changes associated with industrialized society and improved sanitation, Parker said.

"Those changes left our immune systems with too little work and too much time their hands — a recipe for trouble," he said. "Darwin had no way of knowing that the function of the appendix could be rendered obsolete by cultural changes that included widespread use of sewer systems and clean drinking water."

Now that scientists are uncovering the normal function of the appendix, Parker notes a critical question to ask is whether anything can be done to prevent appendicitis. He suggests it might be possible to devise ways to incite our immune systems today in much the same manner that they were challenged back in the Stone Age.

"If modern medicine could figure out a way to do that, we would see far fewer cases of allergies, autoimmune disease, and appendicitis," Parker said.

The scientists detailed their findings online August 12 in the Journal of Evolutionary Biology.

http://www.livescience.com/health/090824-appendix-evolution.html


Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
Madam, I'm Adam
« Reply #79 on: September 15, 2009, 05:44:51 AM »
New Clues to Sex Anomalies in How Y Chromosomes Are Copied

NY Times
By NICHOLAS WADE
Published: September 14, 2009
The first words ever spoken, so fable holds, were a palindrome and an introduction: “Madam, I’m Adam.”


A few years ago palindromes — phrases that read the same backward as forward — turned out to be an essential protective feature of Adam’s Y, the male-determining chromosome that all living men have inherited from a single individual who lived some 60,000 years ago. Each man carries a Y from his father and an X chromosome from his mother. Women have two X chromosomes, one from each parent.

The new twist in the story is the discovery that the palindrome system has a simple weakness, one that explains a wide range of sex anomalies from feminization to sex reversal similar to Turner’s syndrome, the condition of women who carry only one X chromosome.

The palindromes were discovered in 2003 when the Y chromosome’s sequence of bases, represented by the familiar letters G, C, T and A, was first worked out by David C. Page of the Whitehead Institute in Cambridge, Mass., and colleagues at the DNA sequencing center at Washington University School of Medicine in St. Louis.

They came as a total surprise but one that immediately explained a serious evolutionary puzzle, that of how the genes on the Y chromosome are protected from crippling mutations.

Unlike the other chromosomes, which can repair one another because they come in pairs, one from each parent, the Y has no evident backup system. Nature has prevented it from recombining with its partner, the X, except at its very tips, lest its male-determining gene should sneak into the X and cause genetic chaos.

Discovery of the palindromes explained how the Y chromosome has managed over evolutionary time to discard bad genes: it recombines with itself. Its essential genes are embedded in a series of eight giant palindromes, some up to three million DNA units in length. Each palindrome readily folds like a hairpin, bringing its two arms together. The cell’s DNA control machinery detects any difference between the two arms and can convert a mutation back to the correct sequence, saving the Y’s genes from mutational decay.
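
A terminological aside, with a minimal sketch in Python: in DNA, a "palindrome" is a sequence equal to its own reverse complement rather than literally the same letters backward, which is exactly what lets a single strand fold back and pair with itself as a hairpin so that differences between the two arms can be detected. The toy sequences below are hypothetical; the real Y-chromosome palindromes run to millions of bases:

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]

def is_dna_palindrome(seq):
    # True if the sequence reads the same as its reverse complement
    return seq == reverse_complement(seq)

def arm_mismatches(seq):
    # Positions where the two arms of the folded hairpin fail to pair --
    # the kind of difference the repair machinery can detect and correct
    half = len(seq) // 2
    left, right = seq[:half], reverse_complement(seq[-half:])
    return [i for i in range(half) if left[i] != right[i]]

print(is_dna_palindrome("GAATTC"))  # True: a classic palindromic site
print(arm_mismatches("GACTTC"))     # [2]: one mismatched pair a cell could fix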

After Dr. Page discovered the palindromes, he wondered whether the system had weaknesses that might explain the male sex chromosome anomalies that are a major object of his studies. In the current issue of Cell, with Julian Lange and others, he describes what they call the “Achilles’ heel” of the Y chromosome and the wide variety of sexual disorders that it leads to.

The danger of the palindrome protection system occurs when a cell has duplicated all its chromosomes prior to cell division, and each pair is held together at a site called the centromere. Soon, the centromere will split, with each half and its chromosome tugged to opposite sides of the dividing cell.

Before the split, however, a serious error can occur. Palindromes on one Y chromosome can occasionally reach over and form a fatal attraction with the counterpart palindrome on its neighbor. The two Y’s fuse at the point of joining, and everything from the juncture to the end of the chromosome is lost

The double-Y’s so generated come in a range of lengths, depending on which of the palindromes makes the unintended liaison. Like other chromosomes, the Y has a left arm and a right arm with the centromere in between. The male-determining gene lies close to the end of the left arm. If the palindromes at the very end of the right arm make the join, a very long double-Y results in which the two centromeres are widely separated. But if the joining palindromes are just to the right of the centromere, a short double-Y is formed in which the two centromeres lie close together.

Dr. Page detected among his patients both short and long double-Y’s and those of all lengths in between. He and his colleagues then noticed a surprising difference in the patients’ sexual appearance that depended on the length between the centromeres of their double-Y’s.

The patients in whom the distance between the Y’s two centromeres is short are males. But the greater the distance between the centromeres, the more likely the patients are to be anatomically feminized. A few of the patients were so feminized that they had the symptoms of Turner’s syndrome, a condition in which women are born with a single X chromosome.

The explanation for this spectrum of results, in Dr. Page’s view, lies in how the double-Y’s are treated in dividing cells and in the consequences for determining the sex of the fetus.

When the centromeres are close together, they are seen as one and dragged to one side of the dividing cell. As long as the Y’s male-determining gene is active in the cells of the fetal sex tissue, or gonad, the gonads will turn into testes whose hormones will masculinize the rest of the body.


But when the centromeres lie far apart, chromosomal chaos results. During cell division, both centromeres are recognized by the cell division machinery, and in the tug of war the double-Y chromosome may sometimes survive and sometimes be broken and lost to the cell.

Such individuals can carry a mixture of cells, some of which carry a double-Y and some of which carry no Y chromosome. In the fetal gonads, that mixture of cells produces people of intermediate sex. In many of these cases the patients had been raised as female but had testicular tissue on one side of the body and ovarian tissue on the other.

In the extreme version of this process, the distribution of cells may be such that none of the fetal gonad cells possess a Y chromosome, even though other cells in the body may do so. Dr. Page and his colleagues found five of the feminized patients had symptoms typical of Turner's syndrome. The patients had been brought to Dr. Page's attention because their blood cells contained Y chromosomes. Evidently by the luck of the draw, the blood cell lineage had retained Y chromosomes but the all-important fetal gonad cells had been denied them.

In 75 percent of women with Turner’s syndrome, the single X comes from the mother. “Since they are females, everyone imagines it’s Dad’s X that is missing,” Dr. Page said. “But it could easily be Dad’s Y.”

That the degree of feminization parallels the distance between the two centromeres of the double Y chromosome is “a fantastic experiment of nature,” Dr. Page said. Despite having studied the Y chromosome for nearly 30 years, he has learned that it is always full of surprises.

“I continue to see the Y as an infinitely rich national park where we go to see unusual things, and we are never disappointed,” he said.

Dr. Cynthia Morton, editor of the American Journal of Human Genetics, said the new explanation of Turner’s syndrome was plausible. “It’s another beautiful David Page contribution to the science of genetics,” Dr. Morton said.

rachelg

  • Guest
Color-Blind Monkeys Get Full Color Vision
« Reply #80 on: September 16, 2009, 06:39:11 PM »
Wednesday, September 16, 2009
Color-Blind Monkeys Get Full Color Vision
http://www.technologyreview.com/computing/23483/?a=f
Gene therapy can transform the visual system, even in adults.
By Emily Singer

Squirrel monkeys, which are naturally red-green color-blind, can attain humanlike color vision when injected with the gene for a human photoreceptor. The research, performed in adult animals, suggests that the visual system is much more flexible than previously thought--the monkeys quickly learned to use the new sensory information. Researchers hope these results will also hold true for humans afflicted with color blindness and other visual disorders, expanding the range of blinding diseases that might be treated with gene therapy.

"The core observation here is that the animal can use this extra input on such a rapid timescale and make decisions with it," says Jeremy Nathans, a neuroscientist at Johns Hopkins University in Baltimore. "That's incredibly cool."

"This is an amazing step forward in terms of our ability to modify the retina with genetic engineering," says David Williams, director of the Center for Visual Science at the University of Rochester in New York, who was not involved in the study.

Normal vision in squirrel monkeys is almost identical to red-green colorblindness in humans, making the monkeys excellent subjects for studying the disorder. Most people have three types of color photoreceptors--red, green, and blue--which allow them to see the full spectrum of colors. People with red-green color blindness, a genetic disorder that affects about 5 percent of men and a much smaller percentage of women, lack the light-sensitive protein for either red or green wavelengths of light. Because they have only two color photoreceptors, their color vision is limited--they can't distinguish a red X on a green background, for example.

In the new study, published today in Nature, scientists from the University of Washington in Seattle injected the gene for the human version of the red photopigment directly into two animals' eyes, near the retina. The gene, which sits inside a harmless virus often used for gene therapy, is engineered so that it only becomes active in a subset of green photoreceptors. It begins producing the red pigment protein about nine to 20 weeks after injection, transforming that cell into one that responds to the color red.

Researchers screened the monkeys before and after the treatment, using a test very similar to the one used to assess color blindness in people. Colored shapes were embedded in a background of a different color, and the monkeys touched the screen where they saw the shape. The researchers found that the animals' color vision changed dramatically after the treatment. "Human color vision is very good; you only need a tiny bit of red tint to distinguish two shades," says Jay Neitz, one of the authors of the study. "[The] cured animals are not quite as good as other [types of] monkeys with normal color vision, but they are close."

Both animals described in the study have also retained their new tricolor sensory capacity for more than two years. And neither has shown harmful side effects, such as an immune reaction to the foreign protein. The researchers have since treated four additional animals, with no signs of complications. "The results are quite compelling," says Gerald Jacobson, a neuroscientist at the University of California, Santa Barbara, who was not involved in the study. "There is the potential to do the same for humans."

Gene-therapy trials are already under way for a more severe visual impairment, called Leber congenital amaurosis, in which an abnormal protein in sufferers' photoreceptors severely impairs their sensitivity to light. Whether this research should be converted into a treatment for human color blindness is likely to be controversial. "I think it would be a poor use of medical technology when there are so many more serious problems," says Nathans. "Color-vision variation is one of the kinds of variations that make life more interesting. One may think of it as a deficiency, but color-blind people are also better at some things, such as breaking through camouflage." They may also have slightly improved acuity, he says.

However, both Neitz and Jacobson say they frequently receive calls from color-blind people searching for cures, and they hope the research can eventually be used in humans.

"It seems a trivial defect for those of us who are not color-blind, but it does close a lot of avenues," says Jacobson. People who are color-blind can't become commercial pilots, police officers, or firefighters, for example. "People tell me every day how they feel that they miss out because they don't have normal color vision," says Neitz. "You obviously don't want to risk other aspects of vision, but I think this could get to a point where this could be done relatively without risk."

The findings challenge existing notions about the visual system, which was thought to be hardwired early in development. This is supported, for instance, by the fact that cats deprived of vision in one eye early in life never gain normal use of that eye. "People had explored visual plasticity and development using deprivation in a lot of different ways," says Neitz. "But no one has been able to explore it by adding something that wasn't there."

That flexibility is also important for clinical applications of the technology. The fact that adult monkeys could use their novel sensory information suggests that corrective gene therapies for color blindness need not be delivered early in development, as some had feared. However, it's not yet clear whether color vision will be a unique example of plasticity in the adult visual system, or one of many.

Researchers hope the findings will prove applicable to other retinal diseases. Hundreds of mutations have already been identified that are linked to defects in the photoreceptors and other retinal cells, leading to diseases such as retinitis pigmentosa, a degenerative disease that can lead to blindness. However, unlike color blindness, in which the visual system is intact, save for the missing photopigment, many of these diseases trigger damage to the photoreceptor cells. "I think it's hard to know in what way it will extrapolate to more serious blinding disorders that involve more serious degeneration of retina," says Nathans.

The research also raises the possibility of adding new functionality to the visual system, which might be of particular interest to the military. "You might be able to take people with normal vision and give them a pigment for infrared," says Williams. "I'm sure a lot of soldiers would like to have their infrared camera built right into the retina."

Copyright Technology Review 2009.

Body-by-Guinness

  • Guest
Why Women Have Sex
« Reply #81 on: October 04, 2009, 02:19:51 PM »
The flip tone doesn't lend much, but some of the findings are interesting.

Why women have sex
According to a new book, there are 237 reasons why women have sex. And most of them have little to do with romance or pleasure
Monday 28 September 2009

Do you want to know why women have sex with men with tiny little feet? I am stroking a book called Why Women Have Sex. It is by Cindy Meston, a clinical psychologist, and David Buss, an evolutionary psychologist. It is a very thick, bulging book. I've never really wondered Why Women Have Sex. But after years of not asking the question, the answer is splayed before me.

Meston and Buss have interviewed 1,006 women from all over the world about their sexual motivation, and in doing so they have identified 237 different reasons why women have sex. Not 235. Not 236. But 237. And what are they? From the reams of confessions, it emerges that women have sex for physical, emotional and material reasons; to boost their self-esteem, to keep their lovers, or because they are raped or coerced. Love? That's just a song. We are among the bad apes now.

Why, I ask Meston, have people never really talked about this? Alfred Kinsey, the "father" of sexology, asked 7,985 people about their sexual histories in the 1940s and 50s; Masters and Johnson observed people having orgasms for most of the 60s. But they never asked why. Why?

"People just assumed the answer was obvious," Meston says. "To feel good. Nobody has really talked about how women can use sex for all sorts of resources." She rattles off a list and as she says it, I realise I knew it all along: "promotion, money, drugs, bartering, for revenge, to get back at a partner who has cheated on them. To make themselves feel good. To make their partners feel bad." Women, she says, "can use sex at every stage of the relationship, from luring a man into the relationship, to try and keep a man so he is fulfilled and doesn't stray. Duty. Using sex to get rid of him or to make him jealous."

"We never ever expected it to be so diverse," she says. "From the altruistic to the borderline evil." Evil? "Wanting to give someone a sexually transmitted infection," she explains. I turn to the book. I am slightly afraid of it. Who wants to have their romantic fantasies reduced to evolutional processes?

The first question asked is: what thrills women? Or, as the book puts it: "Why do the faces of Antonio Banderas and George Clooney excite so many women?"

We are, apparently, scrabbling around for what biologists call "genetic benefits" and "resource benefits". Genetic benefits are the genes that produce healthy children. Resource benefits are the things that help us protect our healthy children, which is why women sometimes like men with big houses. Jane Eyre, I think, can be read as a love letter to a big house.

"When a woman is sexually attracted to a man because he smells good, she doesn't know why she is sexually attracted to that man," says Buss. "She doesn't know that he might have a MHC gene complex complimentary to hers, or that he smells good because he has symmetrical features."

So Why Women Have Sex is partly a primer for decoding personal ads. Tall, symmetrical face, cartoonish V-shaped body? I have good genes for your brats. Affluent, GSOH – if too fond of acronyms – and kind? I have resource benefits for your brats. I knew this already; that is how Bill Clinton got sex, despite his astonishing resemblance to a moving potato. It also explains why Vladimir Putin has become a sex god and poses topless with his fishing rod.

Then I learn why women marry accountants; it's a trade-off. "Clooneyish" men tend to be unfaithful, because men have a different genetic agenda from women – they want to impregnate lots of healthy women. Meston and Buss call them "risk-taking, womanising 'bad boys'". So, women might use sex to bag a less dazzling but more faithful mate. He will have fewer genetic benefits but more resource benefits that he will make available, because he will not run away. This explains why women marry accountants. Accountants stick around – and sometimes they have tiny little feet!

And so to the main reason women have sex. The idol of "women do it for love, and men for joy" lies broken on the rug like a mutilated sex toy: it's orgasm, orgasm, orgasm. "A lot of women in our studies said they just wanted sex for the pure physical pleasure," Meston says. Meston and Buss garnish this revelation with so much amazing detail that I am distracted. I can't concentrate. Did you know that the World Health Organisation has a Women's Orgasm Committee? That "the G-spot" is named after the German physician Ernst Gräfenberg? That there are 26 definitions of orgasm?

And so, to the second most important reason why women have sex – love. "Romantic love," Meston and Buss write, "is the topic of more than 1,000 songs sold on iTunes." And, if people don't have love, terrible things can happen, in literature and life: "Cleopatra poisoned herself with a snake and Ophelia went mad and drowned." Women say they use sex to express love and to get it, and to try to keep it.

Love: an insurance policy

And what is love? Love is apparently a form of "long-term commitment insurance" that ensures your mate is less likely to leave you, should your legs fall off or your ovaries fall out. Take that, Danielle Steel – you may think you live in 2009 but your genes are still in the stone age, with only chest hair between you and a bloody death. We also get data which confirms that, due to the chemicals your brain produces – dopamine, norepinephrine and phenylethylamine – you are, when you are in love, technically what I have always suspected you to be – mad as Stalin.

And is the world mad? According to surveys, which Meston and Buss helpfully whip out from their inexhaustible box of every survey ever surveyed, 73% of Russian women are in love, and 63% of Japanese women are in love. What percentage of women in north London are in love, they know not. But not as many men are in love. Only 61% of Russian men are in love and only 41% of Japanese men are in love. Which means that 12% of Russian women and 22% of Japanese women are totally wasting their time.

And then there is sex as man-theft. "Sometimes men who are high in mate value are in relationships or many of them simply pursue a short-term sexual strategy and don't want commitment," Buss explains. "There isn't this huge pool of highly desirable men just sitting out there waiting for women." It's true. So how do we liberate desirable men from other women? We "mate poach". And how do we do that? We "compete to embody what men want" – high heels to show off our pelvises, lip-gloss to make men think about vaginas, and we see off our rivals with slander. We spread gossip – "She's easy!" – because that makes the slandered woman less inviting to men as a long-term partner. She may get short-term genetic benefits but she can sing all night for the resource benefits, like a cat sitting out in the rain. Then – then! – the gossiper mates with the man herself.

We also use sex to "mate guard". I love this phrase. It is so evocative an image – I can see a man in a cage, and a woman with a spear and a bottle of baby oil. Women regularly have sex with their mates to stop them seeking it elsewhere. Mate guarding is closely related to "a sense of duty", a popular reason for sex, best expressed by the Meston and Buss interviewee who says: "Most of the time I just lie there and make lists in my head. I grunt once in a while so he knows I'm awake, and then I tell him how great it was when it's over. We are happily married."

Women often mate guard by flaunting healthy sexual relationships. "In a very public display of presumed rivalry," Meston writes, "in 2008 singer and actress Jessica Simpson appeared with her boyfriend, Dallas Cowboys quarterback Tony Romo, wearing a shirt with the tagline Real Girls Eat Meat. Fans interpreted it as a competitive dig at Romo's previous mate, who is a vegetarian."

Meston and Buss also explain why the girls in my class at school went down like dominoes in 1990. One week we were maidens, the following week, we were not. We were, apparently, having sex to see if we liked it, so we could tell other schoolgirls that we had done it and to practise sexual techniques: "As a woman I don't want to be a dead fish," says one female. Another interviewee wanted to practise for her wedding night.

The authors lubricate this with a description of the male genitalia, again food themed. I include it because I am immature. "In Masters & Johnson's [1966] study of over 300 flaccid penises the largest was 5.5 inches long (about the size of a bratwurst sausage); the smallest non-erect penis was 2.25 inches (about the size of a breakfast sausage)."

Ever had sex out of pity and wondered why? "Women," say Meston and Buss, "for the most part, are the ones who give soup to the sick, cookies to the elderly and . . . sex to the forlorn." "Tired, but he wanted it," says one female. Pause for more amazing detail: fat people are more likely to stay in a relationship because no one else wants them.

Women also mate to get the things they think they want – drugs, handbags, jobs, drugs. "The degree to which economics plays out in sexual motivations," Buss says, "surprised me. Not just prostitution. Sex economics plays out even in regular relationships. Women have sex so that the guy would mow the lawn or take out the garbage. You exchange sex for dinner." He quotes some students from the University of Michigan. It is an affluent university, but 9% of students said they had "initiated an attempt to trade sex for some tangible benefit".

Medicinal sex

Then there is sex to feel better. Women use sex to cure their migraines. This is explained by the release of endorphins during sex – they are a pain reliever. Sex can even help relieve period pains. (Why are periods called periods? Please, someone tell me. Write in.)

Women also have sex because they are raped, coerced or lied to, although we have defences against deception – men will often copulate on the first date, women on the third, so they will know it is love (madness). Some use sex to tell their partner they don't want them any more – by sleeping with somebody else. Some use it to feel desirable; some to get a new car. There are very few things we will not use sex for. As Meston says, "Women can use sex at every stage of the relationship."

And there you have it – most of the reasons why women have sex, although, as Meston says, "There are probably a few more." Probably. Before I read this book I watched women eating men in ignorance. Now, when I look at them, I can hear David Attenborough talking in my head: "The larger female is closing in on her prey. The smaller female has been ostracised by her rival's machinations, and slinks away." The complex human race has been reduced in my mind to a group of little apes, running around, rutting and squeaking.

I am not sure if I feel empowered or dismayed. I thought that my lover adored me. No – it is because I have a symmetrical face. "I love you so much," he would say, if he could read his evolutionary impulses, "because you have a symmetrical face!" "Oh, how I love the smell of your compatible genes!" I would say back. "Symmetrical face!" "Compatible genes!" "Symmetrical face!" "Compatible genes!" And so we would osculate (kiss). I am really just a monkey trying to survive. I close the book.

I think I knew that.

http://www.guardian.co.uk/lifeandstyle/2009/sep/28/sex-women-relationships-tanya-gold

Body-by-Guinness

  • Guest
Testosterone & Risk
« Reply #82 on: October 31, 2009, 04:20:44 PM »
Why testosterone-charged women behave like men (they're hungry for sex and ready to take risks with money)

By Daily Mail Reporter
Last updated at 12:04 PM on 25th August 2009


Women with an appetite for risk may also be hungry for sex, a study suggests.

Scientists found that risk-taking women have unusually high testosterone levels.

The hormone fuels sex-drive in both men and women and is associated with competitiveness and dominance.

Prior research has shown that high levels of testosterone are also linked to risky behaviour such as gambling or excessive drinking.


Scientists in the US measured the amount of testosterone in saliva samples taken from 500 male and female MBA business students at the University of Chicago.

Participants in the study were asked to play a computer game that evaluated their attitude towards risk.

A series of questions allowed them to choose between a guaranteed monetary reward or a risky lottery with a higher potential pay-out.

The students had to decide repeatedly whether to play safe for less or gamble on a bigger win.
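
Neither the article nor the underlying press release spells out how such choices become a risk number, so here is a minimal sketch of the usual scoring logic, assuming a standard CRRA utility model; the utility form, payouts and r values are all illustrative assumptions, not details from the Chicago study.

[code]
import math

# Sketch: score a "sure amount vs. lottery" task under CRRA utility
# u(x) = x**(1 - r) / (1 - r); r = 0 is risk-neutral, larger r is more
# risk-averse. All payout numbers below are invented.

def crra_utility(x, r):
    if abs(r - 1.0) < 1e-9:
        return math.log(x)
    return x ** (1.0 - r) / (1.0 - r)

def certainty_equivalent(lottery, r):
    """Sure amount a subject with risk aversion r values like the lottery."""
    eu = sum(p * crra_utility(x, r) for p, x in lottery)
    if abs(r - 1.0) < 1e-9:
        return math.exp(eu)
    return ((1.0 - r) * eu) ** (1.0 / (1.0 - r))

# A 50/50 gamble between $100 and $10 (expected value $55):
lottery = [(0.5, 100.0), (0.5, 10.0)]
for r in (0.0, 0.5, 1.0):
    print(f"r={r}: indifferent at a sure ${certainty_equivalent(lottery, r):.2f}")
[/code]

The sure amount at which a participant switches from the safe option to the gamble pins down r; relating the fitted r values to salivary testosterone is then an ordinary correlation exercise.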

Women who were most willing to take risks were also found to have the highest levels of testosterone, but this was not true of men.

However, men and women with the same levels of the hormone shared a similar attitude to risk.

The link between risk-taking and testosterone also had a bearing on the students' career choices after graduation.

Testosterone-driven individuals who liked to gamble went on to choose riskier careers in finance.

"This is the first study showing that gender differences in financial risk aversion have a biological basis, and that differences in testosterone levels between individuals can affect important aspects of economic behaviour and career decisions," said Professor Dario Maestripieri, one of the study leaders.

In general, women are known to be more risk-averse than men when it comes to financial decision making. Among the students taking part in the study, 36% of the women chose high-risk financial careers such as investment banking or trading compared with 57% of the men.

Overall, male participants displayed lower risk-aversion than their female counterparts and also had significantly higher levels of salivary testosterone.

The findings are published in the journal Proceedings of the National Academy of Sciences.

Co-author Professor Luigi Zingales said: "This study has significant implications for how the effects of testosterone could impact actual risk-taking in financial markets, because many of these students will go on to become major players in the financial world.

"Furthermore, it could shed some light on gender differences in career choices. Future studies should further explore the mechanisms through which testosterone affects the brain."

http://www.dailymail.co.uk/news/article-1208859/Women-appetite-risk-hungry-sex-study-suggests.html#

Body-by-Guinness

  • Guest
Missing Heritability and Other Problems
« Reply #83 on: December 04, 2009, 08:55:59 PM »
The looming crisis in human genetics
Nov 13th 2009


Some awkward news ahead

Human geneticists have reached a private crisis of conscience, and it will become public knowledge in 2010. The crisis has depressing health implications and alarming political ones. In a nutshell: the new genetics will reveal much less than hoped about how to cure disease, and much more than feared about human evolution and inequality, including genetic differences between classes, ethnicities and races.

About five years ago, genetics researchers became excited about new methods for “genome-wide association studies” (GWAS). We already knew from twin, family and adoption studies that all human traits are heritable: genetic differences explain much of the variation between individuals. We knew the genes were there; we just had to find them. Companies such as Illumina and Affymetrix produced DNA chips that allowed researchers to test up to 1m genetic variants for their statistical association with specific traits. America’s National Institutes of Health and Britain’s Wellcome Trust gave huge research grants for gene-hunting. Thousands of researchers jumped on the GWAS bandwagon. Lab groups formed and international research consortia congealed. The quantity of published GWAS research has soared.
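
The statistical core of a GWAS is a single association test repeated roughly a million times, with a significance threshold harsh enough to survive that multiplicity. Here is a minimal sketch of the per-variant step for a case/control design, with invented genotype counts; real pipelines such as PLINK add covariates and extensive quality control.

[code]
# One GWAS-style test: are genotype counts (AA, Aa, aa) distributed
# differently in cases and controls? Counts below are invented.
from scipy.stats import chi2_contingency

GENOME_WIDE_ALPHA = 5e-8  # conventional cutoff for ~1 million tests

def snp_association(case_counts, control_counts):
    chi2, p, dof, _ = chi2_contingency([case_counts, control_counts])
    return chi2, p

chi2, p = snp_association(case_counts=(240, 480, 280),
                          control_counts=(210, 500, 290))
print(f"chi2 = {chi2:.2f}, p = {p:.3g}, "
      f"genome-wide significant: {p < GENOME_WIDE_ALPHA}")
[/code]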

In 2010, GWAS fever will reach its peak. Dozens of papers will report specific genes associated with almost every imaginable trait—intelligence, personality, religiosity, sexuality, longevity, economic risk-taking, consumer preferences, leisure interests and political attitudes. The data are already collected, with DNA samples from large populations already measured for these traits. It’s just a matter of doing the statistics and writing up the papers for Nature Genetics. The gold rush is on throughout the leading behaviour-genetics centres in London, Amsterdam, Boston, Boulder and Brisbane.

GWAS researchers will, in public, continue trumpeting their successes to science journalists and Science magazine. They will reassure Big Pharma and the grant agencies that GWAS will identify the genes that explain most of the variation in heart disease, cancer, obesity, depression, schizophrenia, Alzheimer’s and ageing itself. Those genes will illuminate the biochemical pathways underlying disease, which will yield new genetic tests and blockbuster drugs. Keep holding your breath for a golden age of health, happiness and longevity.

In private, though, the more thoughtful GWAS researchers are troubled. They hold small, discreet conferences on the “missing heritability” problem: if all these human traits are heritable, why are GWAS studies failing so often? The DNA chips should already have identified some important genes behind physical and mental health. They simply have not been delivering the goods.

Certainly, GWAS papers have reported a couple of hundred genetic variants that show statistically significant associations with a few traits. But the genes typically do not replicate across studies. Even when they do replicate, they never explain more than a tiny fraction of any interesting trait. In fact, classical Mendelian genetics based on family studies has identified far more disease-risk genes with larger effects than GWAS research has so far.

Why the failure? The missing heritability may reflect limitations of DNA-chip design: GWAS methods so far focus on relatively common genetic variants in regions of DNA that code for proteins. They under-sample rare variants and DNA regions translated into non-coding RNA, which seems to orchestrate most organic development in vertebrates. Or it may be that thousands of small mutations disrupt body and brain in different ways in different populations. At worst, each human trait may depend on hundreds of thousands of genetic variants that add up through gene-expression patterns of mind-numbing complexity.
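
The arithmetic behind that last scenario is easy to check. Under a simple additive model, a biallelic variant with allele frequency p and per-allele effect beta (in phenotypic standard deviations) explains 2p(1-p)beta^2 of trait variance, so effects of typical GWAS size need thousands of variants to add up to a twin-study heritability. A quick sketch with illustrative numbers, not figures from any particular study:

[code]
# Back-of-envelope "missing heritability" arithmetic under an additive model.
# All numbers are illustrative.

def variance_explained(p, beta):
    """Trait variance explained by one biallelic SNP: 2*p*(1-p)*beta**2."""
    return 2 * p * (1 - p) * beta ** 2

per_snp = variance_explained(p=0.3, beta=0.02)  # a typically tiny GWAS effect
target_h2 = 0.5                                 # e.g. a twin-study heritability
print(f"per-SNP variance explained: {per_snp:.2e}")
print(f"SNPs needed to reach h2 = {target_h2}: {target_h2 / per_snp:,.0f}")
[/code]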

Political science

We will know much more when it becomes possible to do cheap “resequencing”—which is really just “sequencing” a wider variety of individuals beyond the handful analysed for the Human Genome Project. Full sequencing means analysing all 3 billion base pairs of an individual’s DNA rather than just a sample of 1m genetic variants as the DNA chips do. When sequencing costs drop within a few years below $1,000 per genome, researchers in Europe, China and India will start huge projects with vast sample sizes, sophisticated bioinformatics, diverse trait measures and detailed family structures. (American bioscience will prove too politically squeamish to fund such studies.) The missing heritability problem will surely be solved sooner or later.

The trouble is, the resequencing data will reveal much more about human evolutionary history and ethnic differences than they will about disease genes. Once enough DNA is analysed around the world, science will have a panoramic view of human genetic variation across races, ethnicities and regions. We will start reconstructing a detailed family tree that links all living humans, discovering many surprises about mis-attributed paternity and covert mating between classes, castes, regions and ethnicities.

We will also identify the many genes that create physical and mental differences across populations, and we will be able to estimate when those genes arose. Some of those differences probably occurred very recently, within recorded history. Gregory Cochran and Henry Harpending argued in “The 10,000 Year Explosion” that some human groups experienced a vastly accelerated rate of evolutionary change within the past few thousand years, benefiting from the new genetic diversity created within far larger populations, and in response to the new survival, social and reproductive challenges of agriculture, cities, divisions of labour and social classes. Others did not experience these changes until the past few hundred years when they were subject to contact, colonisation and, all too often, extermination.

If the shift from GWAS to sequencing studies finds evidence of such politically awkward and morally perplexing facts, we can expect the usual range of ideological reactions, including nationalistic retro-racism from conservatives and outraged denial from blank-slate liberals. The few who really understand the genetics will gain a more enlightened, live-and-let-live recognition of the biodiversity within our extraordinary species—including a clearer view of likely comparative advantages between the world’s different economies.



Geoffrey Miller: evolutionary psychologist, University of New Mexico; author of “Spent: Sex, Evolution, and Consumer Behavior” (Viking)

http://www.economist.com/PrinterFriendly.cfm?story_id=14742737

Body-by-Guinness

  • Guest
"Change Blindness"
« Reply #84 on: December 14, 2009, 02:33:17 PM »
Interesting experiment. Though I think I'd catch this, I've met my share of folks who aren't particularly situationally aware.

[youtube]http://www.youtube.com/watch?v=38XO7ac9eSs&feature=player_embedded#[/youtube]

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
Re: Evolutionary biology/psychology
« Reply #85 on: December 14, 2009, 06:35:56 PM »
 8-) 8-) 8-)

Body-by-Guinness

  • Guest
Invertebrate Tool Use
« Reply #86 on: December 15, 2009, 09:35:48 AM »
Coconut-carrying octopus stuns scientists

Tue Dec 15, 4:46 am ET

SYDNEY (AFP) – Australian scientists on Tuesday revealed that an eight-tentacled species, the veined octopus, can carry coconut shells to use as armour -- the first case of an invertebrate using tools.
Research biologist Julian Finn said he was "blown away" the first time he saw the fist-sized veined octopus, Amphioctopus marginatus, pick up and scoot away with its portable protection along the sea bed.

"We don't normally associate complex behaviours with invertebrates -- with lower life forms I guess you could say," Finn, from Museum Victoria, told AFP.

"And things like tool-use and complex behaviour we generally associate with the higher vertebrates: humans, monkeys, a few birds, that kind of thing.

"This study, if anything, shows that these complex behaviours aren't limited to us. They are actually employed by a wide range of animals."

The use of tools is considered one of the defining elements of intelligence and, although originally considered only present in humans, has since been found in other primates, mammals and birds.
But this is the first time that the behaviour has been observed in an invertebrate, according to an article co-authored by Finn and published in the US-based journal Current Biology.
Finn said when he first saw the octopus walk along awkwardly with its shell, he didn't know whether it was simply a freak example of wacky underwater behaviour by the animal whose closest relative is a snail.

"So over the 10-year period basically we observed about 20 octopuses and we would have seen about four different individuals carrying coconut shells over large distances," he said of his research in Indonesia.

"There were lots that were buried with coconuts in the mud. But we saw four individuals actually pick them up and carry them, jog them across the sea floor carrying them under their bodies. It's a good sight."

Finn said the animals were slower and more vulnerable to predators while carrying the broken shells, which they later used as shelters.

"They are doing it for the later benefit and that's what makes it different from an animal that picks up something and puts it over its head for the immediate benefit," he said.
Other animals were likely to be discovered to exhibit similar behaviours, he said.



An octopus, wrapped around the shell of a coconut, uses it to protect itself on the seabed floor. (AFP/HO/File/Roger Steene)

http://news.yahoo.com/nphotos/File/photo//091215/photos_sc_afp/0b2621955b75a067f7490914d54ddc41//s:/afp/20091215/sc_afp/scienceaustraliaanimaloctopus_20091215094832;_ylt=ArjI1ZsRmfkf.0rxJk5lsg7QOrgF;_ylu=X3oDMTE5czdndmNvBHBvcwMxBHNlYwN5bl9yX3RvcF9waG90bwRzbGsDY29jb251dC1jYXJy

Body-by-Guinness

  • Guest
Neanderthal Genome Sequenced
« Reply #87 on: May 06, 2010, 12:33:47 PM »
http://reason.com/blog/2010/05/06/the-neaderthal-in-us-neanderth
Reason Magazine


The Neanderthal in Us -- Neanderthal Genome Sequenced

Ronald Bailey | May 6, 2010

A team of genetic researchers led by Svante Pääbo from the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany is publishing today in the journal Science the results of their effort to sequence the genome of the extinct Neanderthal lineage. It turns out that many of us whose ancestors hail from Europe or Asia carry genes from Neanderthals. As the press release describing the study explains:

A unique scientific task lasting four years has been completed: a team of researchers led by Svante Pääbo, Director of the Department of Evolutionary Genetics at the Max Planck Institute for Evolutionary Anthropology in Leipzig, is publishing an initial version of the Neandertal genome sequence in the current issue of the journal Science.

This is an unprecedented scientific achievement: only ten years after the decoding of the present-day Homo sapiens genome, researchers have managed to do something similar for an extinct hominid that was the closest relative of modern humans. "The comparison of these two genetic sequences enables us to find out where our genome differs from that of our closest relative," says Svante Pääbo.

The Neandertal sequence presented is based on the analysis of over one billion DNA fragments taken from several Neandertal bones found in Croatia, Russia and Spain, as well as from the original Neandertal found in Germany. From the DNA fragments present in the bones the Leipzig researchers developed ways to distinguish true Neandertal DNA from the DNA of microbes that have lived in the bones over the last 40,000 years. Enough DNA fragments were retrieved to account for over 60 percent of the entire Neandertal genome.

An initial comparison of the two sequences has brought some exciting discoveries to light. Contrary to the assumption of many researchers, it would appear that some Neandertals and early modern humans interbred.

According to the researchers’ calculations, between one and four percent of the DNA of many humans living today originates from Neandertals. "Those of us who live outside Africa carry a little Neandertal DNA in us," says Svante Pääbo. Previous tests carried out on the DNA of Neandertal mitochondria, which represents just a tiny part of the whole genome, had not found any evidence of such interbreeding or "admixture".

For the purpose of the analysis the researchers also sequenced five present day human genomes of European, Asian and African origin and compared them with the Neandertal. To their surprise they found that the Neandertal is slightly more closely related to modern humans from outside Africa than to Africans, suggesting some contribution of Neandertal DNA to the genomes of present-day non-Africans. Interestingly, Neandertals show the same relationship with all humans outside Africa, whether they are from Europe, East Asia or Melanesia. This is puzzling, as no Neandertal remains have been so far found in East Asia. They lived in Europe and Western Asia.
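
Claims of this four-way pattern rest on counting shared derived alleles across aligned genomes, the kind of "ABBA/BABA" D-statistic test that has become standard for detecting admixture. Here is a minimal sketch with invented site-pattern counts; the real analysis also needs a block-jackknife error estimate, omitted here.

[code]
# D-statistic sketch for the tree ((African, non-African), Neandertal) + chimp.
# ABBA sites: the non-African shares the derived allele with the Neandertal;
# BABA sites: the African shares it instead. Counts below are invented.

def d_statistic(n_abba, n_baba):
    return (n_abba - n_baba) / (n_abba + n_baba)

d = d_statistic(n_abba=101_000, n_baba=95_000)
print(f"D = {d:.3f}")  # D > 0: excess sharing between non-Africans and Neandertals
[/code]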

The researchers offer a plausible explanation for this finding. Svante Pääbo: "Neandertals probably mixed with early modern humans before Homo sapiens split into different groups in Europe and Asia." This could have occurred in the Middle East between 100,000 and 50,000 years ago before the human population spread across East Asia. It is known from archaeological findings in the Middle East that Neandertals and modern humans overlapped in time in this region.

Apart from the question as to whether Neandertals and Homo sapiens mixed, the researchers are highly interested in discovering genes that distinguish modern humans from their closest relative and may have given the modern humans certain advantages over the course of evolution.

By comparing Neandertal and modern human genomes, the scientists identified several genes that may have played an important role in modern human evolution. For example, they found genes related to cognitive functions, metabolism and the development of cranial features, the collar bone and the rib cage. However, more detailed analysis needs to be carried out to enable conclusions to be drawn on the actual influence of these genes.

I will mention that my 23andMe genotype scan indicates my maternal haplogroup is U5a2a, which arose some 40,000 years ago; its bearers were among the first Homo sapiens colonizers of ice age Europe.

If you're interested, go here for my column on what rights Neanderthals might claim should we ever succeed in using cloning technologies to bring them back.

Body-by-Guinness

  • Guest
Cognitive Biases Guide
« Reply #88 on: May 18, 2010, 05:21:33 AM »
I've only started sorting through this, but it appears to be an interesting list of the ways humans go about fooling themselves.

http://www.scribd.com/documents/30548590/Cognitive-Biases-A-Visual-Study-Guide-by-the-Royal-Society-of-Account-Planning

Body-by-Guinness

  • Guest
Universal Common Ancestor
« Reply #89 on: May 18, 2010, 07:00:06 AM »
Second post.

First Large-Scale Formal Quantitative Test Confirms Darwin's Theory of Universal Common Ancestry

ScienceDaily (May 17, 2010) — More than 150 years ago, Darwin proposed the theory of universal common ancestry (UCA), linking all forms of life by a shared genetic heritage from single-celled microorganisms to humans. Until now, the theory that makes ladybugs, oak trees, champagne yeast and humans distant relatives has remained beyond the scope of a formal test. Now, a Brandeis biochemist reports in Nature the results of the first large scale, quantitative test of the famous theory that underpins modern evolutionary biology.

The results of the study confirm that Darwin had it right all along. In his 1859 book, On the Origin of Species, the British naturalist proposed that, "all the organic beings which have ever lived on this earth have descended from some one primordial form." Over the last century and a half, qualitative evidence for this theory has steadily grown, in the numerous, surprising transitional forms found in the fossil record, for example, and in the identification of sweeping fundamental biological similarities at the molecular level.

Still, rumblings among some evolutionary biologists have recently emerged questioning whether the evolutionary relationships among living organisms are best described by a single "family tree" or rather by multiple, interconnected trees -- a "web of life." Recent molecular evidence indicates that primordial life may have undergone rampant horizontal gene transfer, which occurs frequently today when single-celled organisms swap genes using mechanisms other than usual organismal reproduction. In that case, some scientists argue, early evolutionary relationships were web-like, making it possible that life sprang up independently from many ancestors.

According to biochemist Douglas Theobald, it doesn't really matter. "Let's say life originated independently multiple times, which UCA allows is possible," said Theobald. "If so, the theory holds that a bottleneck occurred in evolution, with descendants of only one of the independent origins surviving until the present. Alternatively, separate populations could have merged, by exchanging enough genes over time to become a single species that eventually was ancestral to us all. Either way, all of life would still be genetically related."

Harnessing powerful computational tools and applying Bayesian statistics, Theobald found that the evidence overwhelmingly supports UCA, regardless of horizontal gene transfer or multiple origins of life. Theobald said UCA is millions of times more probable than any theory of multiple independent ancestries.
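
Stripped to its core, that comparison asks which hypothesis assigns the observed sequence data higher probability. Theobald's actual test uses full phylogenetic models and Bayesian model selection; the toy below keeps only the logic, scoring an alignment under "sequences share an ancestor" versus "matches are chance", with every number invented.

[code]
import math

# Toy model-comparison sketch (not Theobald's method): score identical
# alignment columns as Bernoulli events under two hypotheses.

def log_likelihood(matches, sites, p_match):
    """Log-likelihood of `matches` identical columns out of `sites`."""
    return matches * math.log(p_match) + (sites - matches) * math.log(1.0 - p_match)

sites, matches = 200, 90                                       # invented summary
ll_common      = log_likelihood(matches, sites, p_match=0.45)  # related sequences
ll_independent = log_likelihood(matches, sites, p_match=0.05)  # chance matches

print(f"log likelihood ratio: {ll_common - ll_independent:.1f}")  # ~138 here
[/code]

Even this toy shows how quickly the log-likelihood gap grows with alignment length, which is why likelihood-based odds for common ancestry come out so lopsided.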

"There have been major advances in biology over the last decade, with our ability to test Darwin's theory in a way never before possible," said Theobald. "The number of genetic sequences of individual organisms doubles every three years, and our computational power is much stronger now than it was even a few years ago."

While other scientists have previously examined common ancestry more narrowly, for example, among only vertebrates, Theobald is the first to formally test Darwin's theory across all three domains of life. The three domains include diverse life forms such as the Eukarya (organisms, including humans, yeast, and plants, whose cells have a DNA-containing nucleus) as well as Bacteria and Archaea (two distinct groups of unicellular microorganisms whose DNA floats around in the cell instead of in a nucleus).

Theobald studied a set of 23 universally conserved, essential proteins found in all known organisms. He chose to study four representative organisms from each of the three domains of life. For example, he researched the genetic links found among these proteins in archaeal microorganisms that produce marsh gas and methane in cows and the human gut; in fruit flies, humans, round worms, and baker's yeast; and in bacteria like E. coli and the pathogen that causes tuberculosis.

Theobald's study rests on several simple assumptions about how the diversity of modern proteins arose. First, he assumed that genetic copies of a protein can be multiplied during reproduction, such as when one parent gives a copy of one of their genes to several of their children. Second, he assumed that a process of replication and mutation over the eons may modify these proteins from their ancestral versions. These two factors, then, should have created the differences in the modern versions of these proteins we see throughout life today. Lastly, he assumed that genetic changes in one species don't affect mutations in another species -- for example, genetic mutations in kangaroos don't affect those in humans.

What Theobald did not assume, however, was how far back these processes go in linking organisms genealogically. It is clear, say, that these processes are able to link the shared proteins found in all humans to each other genetically. But do the processes in these assumptions link humans to other animals? Do these processes link animals to other eukaryotes? Do these processes link eukaryotes to the other domains of life, bacteria and archaea? The answer to each of these questions turns out to be a resounding yes.

Just what did this universal common ancestor look like and where did it live? Theobald's study doesn't answer this question. Nevertheless, he speculated, "to us, it would most likely look like some sort of froth, perhaps living at the edge of the ocean, or deep in the ocean on a geothermal vent. At the molecular level, I'm sure it would have looked as complex and beautiful as modern life."

http://www.sciencedaily.com/releases/2010/05/100512131513.htm

Freki

  • Power User
  • ***
  • Posts: 513
    • View Profile
How the parasitic worm has turned
« Reply #90 on: June 15, 2010, 06:37:52 AM »
Just knowing there is an interaction could lead to some good medical breakthroughs.  I would like to do without the worms....the yuck factor alone....... :-D


How the parasitic worm has turned
June 14, 2010
(PhysOrg.com) -- Parasites in the gut such as whipworm have an essential role in developing a healthy immune system, University of Manchester scientists have found.

 
It has long been known that microbes in the gut help to develop a healthy immune system, hence the rise in popularity of probiotic yoghurts that encourage 'friendly' bacteria. But new research by Professors Richard Grencis and Ian Roberts shows that larger organisms such as parasitic worms are also essential in maintaining our bodily 'ecosystem'.
Professor Roberts, whose work is published in Science, explains: "It is like a three-legged stool - the microbes, worms and immune system regulate each other.
"The worms have been with us throughout our evolution and their presence, along with bacteria, in the ecosystem of the gut is important in the development of a functional immune system."
Professor Grencis adds: "If you look at the incidence of parasitic worm infection and compare it to the incidence of auto-immune disease and allergy, where the body's immune system over-reacts and causes damage, they have little overlap. Clean places in the West, where parasites are eradicated, see problems caused by overactive immune systems. In the developing world, there is more parasitic worm infection but less auto-immune and allergic problems.
"We are not suggesting that people deliberately infect themselves with parasitic worms but we are saying that these larger pathogens make things that help our immune system. We have evolved with both the bugs and the worms and there are consequences of that interaction, so they are important to the development of our immune system."
Whipworm, also known as Trichuris, is a very common type of parasitic worm and infects many species of animals including millions of humans. It has also been with us and animals throughout evolution. The parasites live in the large intestine, the very site containing the bulk of the intestinal bacteria.
Heavy infections of whipworm can cause bloody diarrhoea, with long-standing blood loss leading to iron-deficiency anaemia, and even rectal prolapse. But light infections have relatively few symptoms.
Professors Grencis and Roberts and their team at Manchester's Faculty of Life Sciences investigated the establishment of Trichuris and found it is initiated by an interaction between gut bacteria and the parasite.
They further found that a broad range of gut bacteria were able to induce parasite hatching. In the case of Escherichia coli (E. coli), bacteria bound to specific sites on the egg and rapidly induced parasite hatching. With E. coli, hatching involved specific bacterial cell-surface structures known as fimbriae, which the bacteria normally use to attach to cells of the gut wall.
Importantly, the work also showed that the presence of worms and bacteria altered the immune responses in a way that is likely to protect ourselves, the bacteria and the worms.
Intestinal roundworm parasites are one of the most common types of infection worldwide, although in humans increased hygiene has reduced infection in many countries. High level infections by these parasites can cause disease, but the natural situation is the presence of relatively low levels of infection. The team's work suggests that in addition to bacterial microflora, the natural state of affairs of our intestines may well be the presence of larger organisms, the parasitic roundworms, and that complex and subtle interactions between these different types of organism have evolved to provide an efficient and beneficial ecosystem for all concerned.
Professor Roberts says: "The host uses its immune system to regulate the damage caused by the bacteria and the worms. If the pathogens are missing, the immune system may not give the right response."
Professor Grencis adds: "The gut and its inhabitants should be considered a complex ecosystem, not only involving bacteria but also parasites, not just sitting together but interacting."
More information: 'Exploitation of the Intestinal Microflora by the Parasitic Nematode Trichuris muris', Science.
Provided by University of Manchester

Body-by-Guinness

  • Guest
Stomach Virus: New Meaning
« Reply #91 on: July 17, 2010, 06:59:12 PM »
A Viral Wonderland in the Human Gut
by Gisela Telis on July 14, 2010
 
Distinctive signature. Each person's gut carries a different collection of bacteria-infecting viruses that may benefit their hosts, researchers report. The viruses contain DNA fragments whose functions (above) include cell repair and food processing.
Credit: A. Reyes et al., Nature, 466 (15 July 2010)

Snowflakes haven't cornered the market on uniqueness. Researchers report that human guts harbor viruses as unique as the people they inhabit; the viral lineup differs even between identical twins. The discovery offers a first glimpse at the previously unknown viruses and their surprisingly friendly relationships with their hosts.

Microbiologists have known since the late 19th century that human intestines are a crowded and complicated place. Our bacterial denizens outnumber our cells, and many help break down foods and fight off pathogens. For the past decade, microbiologist Jeffrey Gordon of Washington University in St Louis has been mapping the gut's microbial landscape. His studies have linked intestinal bacteria to obesity and have shown that families tend to share their microbial makeup. But scientists hadn't yet explored whether phages—viruses that infect bacteria—were part of this shared community.

Led by graduate student Alejandro Reyes, Gordon's team analyzed fecal samples from four sets of Missouri-born female identical twins and their mothers. The researchers collected and purified the poop three times over the course of a year—to better track the microbial community's changes over time—and then sequenced the viral DNA, or viromes, the poop contained. Only 20% of the viromes matched existing databases; the rest came from previously unknown viruses. And each woman's set of viromes was distinctive, differing even from the viromes of her identical twin, the researchers report in the 15 July issue of Nature. Unlike their bacterial profiles, which overlapped by about 50% (significantly more than between strangers), identical twins had no more viruses in common than did unrelated people.
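
The twin comparison reduces to a community-overlap score computed for every pair of subjects. Here is a minimal sketch using Jaccard similarity on sets of detected virus types; the study works with abundance profiles and more sophisticated metrics, and the phage names here are placeholders.

[code]
# Jaccard overlap between viral communities; all names are placeholders.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

twin1    = {"phageA", "phageB", "phageC", "phageD"}
twin2    = {"phageA", "phageE", "phageF", "phageG"}
stranger = {"phageA", "phageH", "phageI", "phageJ"}

print(f"twin vs twin:     {jaccard(twin1, twin2):.2f}")
print(f"twin vs stranger: {jaccard(twin1, stranger):.2f}")
[/code]

The striking result is that the two numbers come out statistically indistinguishable, unlike the same calculation run on the twins' bacteria.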

Equally surprising, Gordon says, was the communities' consistency: the viral makeup changed less than 5% over the course of the year, and the viromes of the most abundant phages changed less than 1%. Rapidly changing viromes would have signaled an "arms race" in which threatened bacteria were adapting to survive phage attacks, and the phages were adapting to avoid bacterial defenses. "The fact that the viromes didn't change," says Gordon, "suggests this is a temperate environment" in which the bacteria and their phages coexist in peace.

That may be because the viruses are actually helping the bacteria. When the viruses latch onto gut bacteria, they take some of their host's genetic material and can change it or move it to other hosts, bringing new and potentially advantageous functions to the bugs. The researchers found that many of the genes the phages carry and transfer are beneficial to the bacteria; some may help them repair cell walls, for example. In return, the bacteria, which don't die from the infections, provide an improved cellular factory to make new viruses.

The researchers don't know where the viruses come from or what causes viromes to differ so dramatically from person to person. But their data indicate that there is a huge diversity of these viruses, and that could explain why even closely related people can harbor very different populations.

Gordon says that understanding the details of the phage-bacteria relationship could help gauge the health of a patient's gut community, because the phages are sensitive to changes in their hosts. But "we still have a lot to learn about viruses" before we can expect any practical applications, says microbiologist Edward DeLong of the Massachusetts Institute of Technology in Cambridge. "This is just a first peek," he says, "but it's a remarkable one. It's the first high-resolution picture of the bacterial-viral dynamic in the human ecosystem, in a huge part of our own ecology that remains terra incognita."

http://news.sciencemag.org/sciencenow/2010/07/a-viral-wonderland-in-the-human-.html

Body-by-Guinness

  • Guest
Well This Goes Without Saying. . . .
« Reply #92 on: November 05, 2010, 11:31:24 AM »
By COURTNEY HUTCHISON
ABC News Medical Unit
Ozzy Osbourne Is a Genetic Mutant
Gene Variants Let the Part-Neanderthal Rocker Party Hard Into His 60s
Nov. 3, 2010

Despite a lifetime of hard partying, heavy metal rocker Ozzy Osbourne is alive and kicking at 61, and he may have his genes to thank for it. Now that the "Full Osbourne Genome" has been sequenced, the truth is out: the former lead singer of Black Sabbath is a genetic mutant.

The musician has several gene variants that "we've never seen before," said geneticist Nathaniel Pearson, who sequenced the rocker's genome, including variants that could impact how Osbourne's body absorbs methamphetamines and other recreational drugs.

"I've always said that at the end of the world there will be roaches, Ozzy and Keith Richards," Osbourne's wife, Sharon Osbourne, said at Friday's conference. "He's going to outlive us all. That fascinated me -- how his body can endure so much."

Osbourne's resilience also piqued the interest of Knome, Inc., a genomics company that began sequencing the "full Ozzy genome" last July.

"Why not Ozzy?" Jorge Conde, co-founder and chief executive of Cambridge, Mass.-based Knome, told ABCnews.com.

Conde said the company was interested in exploring the genome of someone as musically talented as Osbourne. Of course, trying to figure out if good genes had anything to do with Osbourne's ability to handle his "aggressive" lifestyle was also a major draw for researchers, he said.

The results of Knome's sequencing were discussed on stage last Friday at this year's TEDMED conference in San Diego, with Sharon Osbourne, Pearson, and Ozzy Osbourne all weighing in on what Osbourne's genes can mean for medicine.

Uncovering the Ozzy Genome

Osbourne initially was skeptical about the project, he wrote Oct. 24 in his Sunday Times of London column, "The Wisdom of Oz," but soon came around to the idea of offering his genetic code to science.

"I was curious ... given the swimming pools of booze I've guzzled over the years -- not to mention all of the cocaine, morphine, sleeping pills, cough syrup, LSD, Rohypnol ... you name it -- there's really no plausible medical reason why I should still be alive. Maybe my DNA could say why," he wrote in his column.

Not surprisingly, the most notable differences in Osbourne's genes had to do with how he processes drugs and alcohol. Genes connected to addiction, alcoholism and the absorption of marijuana, opiates and methamphetamines all had unique variations in Osbourne, a few of which Knome geneticists had never seen before.

"He had a change on the regulatory region of the ADH4 gene, a gene associated with alcoholism, that we've never seen before," Conde told ABCnews.com. "He has an increased predisposition for alcohol dependence of something like six times higher. He also had a slight increased risk for cocaine addiction, but he dismissed that. He said that if anyone has done as much cocaine as he had, they would have been hooked."

The Prince of Darkness also has a 2.6-times increased chance of hallucinations associated with marijuana, though Osbourne said he wouldn't know if that were true because he so rarely smoked marijuana without other drugs also in his system.

Ironically, Osbourne's genes suggest that he is a slow metabolizer of coffee, meaning that he would be more affected by caffeine.

"Turns out that Ozzy's kryptonite is caffeine," Conde said.

Conde and Pearson were particularly interested in looking at Osbourne's nervous system and nervous function, given the musician's lifestyle and his recent experience of Parkinson's-like tremors.

They found a functioning change in his TTN gene, which is associated with a number of things in the nervous system, including deafness and Parkinson's.

"Here's a guy who's rocking heavy metal for decades and he can still hear," Conde said. "It would be interesting to know if this gene may impact that. His Parkinsonian tremor -- it's hard to know if that is from his genes or from years of hard living."

And of course, there's the fact that Osbourne has Neanderthal genes in him.

"People thought that [Neanderthals] had no descendants today, but they do," Pearson said at the conference. "In east Asia and Europe, a lot of us have a little Neanderthal ancestry. We found a sliver of the genes in Ozzy. We also looked at [Knome's] founder, George Church, and he has about three times as much as Ozzy does."

To which Sharon Osbourne replied: "I'd like to meet him."

Learning From Our Favorite Neanderthal Rocker

While genomics has come a long way since the first full human genome was sequenced in 2003, interpreting what gene variants mean still involves a lot of guesswork.

"We can read the code, but it takes additional research to decipher what it means," Conde said.

In other words, geneticists know which traits are associated with certain genes, but not how a mutation on that gene will affect someone. By sequencing those who seem to show unique traits, such as Osbourne's ability to remain relatively healthy despite heavy drug and alcohol abuse, geneticists hope to learn more about how deviations in certain genes create specific traits, susceptibility to disease and reactions to substances.

"What interests me are people who have done something extraordinary with no clear reason as to why," Conde said.

For his next celebrity genome, he would like to pick somebody on the far extreme of intelligence, such as Stephen Hawking. Or he might stick with rock-lifestyle resilience and get Keith Richards, as Sharon Osbourne suggested.

TEDMED is a yearly conference dedicated to increasing innovation in the medical realm "from personal health to public health, devices to design and Hollywood to the hospital," the website said.

http://abcnews.go.com/Health/Wellness/genetic-mutations-ozzy-osbourne-party-hard/story?id=12032552&page=1

Body-by-Guinness

  • Guest
Retroviruses & Schizophrenia, I
« Reply #93 on: November 12, 2010, 10:22:55 AM »
The Insanity Virus

11.08.2010
Schizophrenia has long been blamed on bad genes or even bad parents. Wrong, says a growing group of psychiatrists. The real culprit, they claim, is a virus that lives entwined in every person's DNA.

by Douglas Fox
Steven and David Elmore were born identical twins, but their first days in this world could not have been more different. David came home from the hospital after a week. Steven, born four minutes later, stayed behind in the ICU. For a month he hovered near death in an incubator, wracked with fever from what doctors called a dangerous viral infection. Even after Steven recovered, he lagged behind his twin. He lay awake but rarely cried. When his mother smiled at him, he stared back with blank eyes rather than mirroring her smiles as David did. And for several years after the boys began walking, it was Steven who often lost his balance, falling against tables or smashing his lip.

Those early differences might have faded into distant memory, but they gained new significance in light of the twins’ subsequent lives. By the time Steven entered grade school, it appeared that he had hit his stride. The twins seemed to have equalized into the genetic carbon copies that they were: They wore the same shoulder-length, sandy-blond hair. They were both B+ students. They played basketball with the same friends. Steven Elmore had seemingly overcome his rough start. But then, at the age of 17, he began hearing voices.

The voices called from passing cars as Steven drove to work. They ridiculed his failure to find a girlfriend. Rolling up the car windows and blasting the radio did nothing to silence them. Other voices pursued Steven at home. Three voices called through the windows of his house: two angry men and one woman who begged the men to stop arguing. Another voice thrummed out of the stereo speakers, giving a running commentary on the songs of Steely Dan or Led Zeppelin, which Steven played at night after work. His nerves frayed and he broke down. Within weeks his outbursts landed him in a psychiatric hospital, where doctors determined he had schizophrenia.

The story of Steven and his twin reflects a long-standing mystery in schizophrenia, one of the most common mental diseases on earth, affecting about 1 percent of humanity. For a long time schizophrenia was commonly blamed on cold mothers. More recently it has been attributed to bad genes. Yet many key facts seem to contradict both interpretations.

Schizophrenia is usually diagnosed between the ages of 15 and 25, but the person who becomes schizophrenic is sometimes recalled to have been different as a child or a toddler—more forgetful or shy or clumsy. Studies of family videos confirm this. Even more puzzling is the so-called birth-month effect: People born in winter or early spring are more likely than others to become schizophrenic later in life. It is a small increase, just 5 to 8 percent, but it is remarkably consistent, showing up in 250 studies. That same pattern is seen in people with bipolar disorder or multiple sclerosis.

“The birth-month effect is one of the most clearly established facts about schizophrenia,” says Fuller Torrey, director of the Stanley Medical Research Institute in Chevy Chase, Maryland. “It’s difficult to explain by genes, and it’s certainly difficult to explain by bad mothers.”
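
A 5-to-8-percent excess is a statement about relative risk, computed the obvious way: the winter/early-spring birth fraction among patients divided by the same fraction in the general population. Here is a sketch with invented counts sized to land in the published range:

[code]
# Relative risk of a winter/early-spring birth among cases vs. the general
# population. Counts are invented to illustrate a ~6% excess.

def relative_risk(case_winter, case_total, pop_winter, pop_total):
    return (case_winter / case_total) / (pop_winter / pop_total)

rr = relative_risk(case_winter=2_650, case_total=10_000,
                   pop_winter=250_000, pop_total=1_000_000)
print(f"relative risk of winter birth among cases: {rr:.2f}")
[/code]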

The facts of schizophrenia are so peculiar, in fact, that they have led Torrey and a growing number of other scientists to abandon the traditional explanations of the disease and embrace a startling alternative. Schizophrenia, they say, does not begin as a psychological disease. Schizophrenia begins with an infection.

The idea has sparked skepticism, but after decades of hunting, Torrey and his colleagues think they have finally found the infectious agent. You might call it an insanity virus. If Torrey is right, the culprit that triggers a lifetime of hallucinations—that tore apart the lives of writer Jack Kerouac, mathematician John Nash, and millions of others—is a virus that all of us carry in our bodies. “Some people laugh about the infection hypothesis,” says Urs Meyer, a neuroimmunologist at the Swiss Federal Institute of Technology in Zurich. “But the impact that it has on researchers is much, much, much more than it was five years ago. And my prediction would be that it will gain even more impact in the future.”

The implications are enormous. Torrey, Meyer, and others hold out hope that they can address the root cause of schizophrenia, perhaps even decades before the delusions begin. The first clinical trials of drug treatments are already under way. The results could lead to meaningful new treatments not only for schizophrenia but also for bipolar disorder and multiple sclerosis. Beyond that, the insanity virus (if such it proves) may challenge our basic views of human evolution, blurring the line between “us” and “them,” between pathogen and host.

+++
 

Rhoda Torrey and her brother Fuller,
who would go on to research schizophrenia.

Courtesy E. Fuller Torrey

Torrey’s connection to schizophrenia began in 1957. As summer drew to a close that year, his younger sister, Rhoda, grew agitated. She stood on the lawn of the family home in upstate New York, looking into the distance. She rambled as she spoke. “The British,” she said. “The British are coming.” Just days before Rhoda should have started college, she was given a diagnosis of schizophrenia. Doctors told the grieving family that dysfunctional household relationships had caused her meltdown. Because his father was no longer alive, it was Torrey, then in college, who shouldered much of the emotional burden.

Torrey, now 72, develops a troubled expression behind his steel-rimmed glasses as he remembers those years. “Schizophrenia was badly neglected,” he says.

In 1970 Torrey arrived at the National Institute of Mental Health in Washington, D.C., having finished his training in psychiatric medicine. At the time, psychiatry remained under the thrall of Freudian psychoanalysis, an approach that offered little to people like Rhoda. Torrey began looking for research opportunities in schizophrenia. The more he learned, the more his views diverged from those of mainstream psychiatry.

A simple neurological exam showed Torrey that schizophrenics suffered from more than just mental disturbances. They often had trouble doing standard inebriation tests, like walking a straight line heel to toe. If Torrey simultaneously touched their face and hand while their eyes were closed, they often did not register being touched in two places. Schizophrenics also showed signs of inflammation in their infection-fighting white blood cells. “If you look at the blood of people with schizophrenia,” Torrey says, “there are too many odd-looking lymphocytes, the kind that you find in mononucleosis.” And when he performed CAT scans on pairs of identical twins with and without the disease—including Steven and David Elmore—he saw that schizophrenics’ brains had less tissue and larger fluid-filled ventricles.

Subsequent studies confirmed those oddities. Many schizophrenics show chronic inflammation and lose brain tissue over time, and these changes correlate with the severity of their symptoms. These things “convinced me that this is a brain disease,” Torrey says, “not a psychological problem.”

By the 1980s he began working with Robert Yolken, an infectious-diseases specialist at Johns Hopkins University in Baltimore, to search for a pathogen that could account for these symptoms. The two researchers found that schizophrenics often carried antibodies for toxoplasma, a parasite spread by house cats; Epstein-Barr virus, which causes mononucleosis; and cytomegalovirus. These people had clearly been exposed to those infectious agents at some point, but Torrey and Yolken never found the pathogens themselves in the patients’ bodies. The infection always seemed to have happened years before.

Torrey wondered if the moment of infection might in fact have occurred during early childhood. If schizophrenia was sparked by a disease that was more common during winter and early spring, that could explain the birth-month effect. “The psychiatrists thought I was psychotic myself,” Torrey says. “Some of them still do.”

Better prenatal care or vaccinations could prevent the infections that put people on a path to schizophrenia, and early treatment might prevent psychosis from developing two decades later.

While Torrey and Yolken were chasing their theory, another scientist unwittingly entered the fray. Hervé Perron, then a graduate student at Grenoble University in France, dropped his Ph.D. project in 1987 to pursue something more challenging and controversial: He wanted to learn if new ideas about retroviruses—a type of virus that converts RNA into DNA—could be relevant to multiple sclerosis.

Robert Gallo, the director of the Institute of Human Virology at the University of Maryland School of Medicine and co-discoverer of HIV, had speculated that a virus might trigger the paralytic brain lesions in MS. People had already looked at the herpes virus (HHV-6), cytomegalovirus, Epstein-Barr virus, and the retroviruses HTLV-1 and HTLV-2 as possible causes of the disease. But they always came up empty-handed.

Perron learned from their failures. “I decided that I should not have an a priori idea of what I would find,” he says. Rather than looking for one virus, as others had done, he tried to detect any retrovirus, whether or not it was known to science. He extracted fluids from the spinal columns of MS patients and tested for an enzyme, called reverse transcriptase, that is carried by all retroviruses. Sure enough, Perron saw faint traces of retroviral activity. Soon he obtained fuzzy electron microscope images of the retrovirus itself.

His discovery was intriguing but far from conclusive. After confirming his find was not a fluke, Perron needed to sequence its genes. He moved to the National Center for Scientific Research in Lyon, France, where he labored days, nights, and weekends. He cultured countless cells from people with MS to grow enough of his mystery virus for sequencing. MS is an incurable disease, so Perron had to do his research in a Level 3 biohazard lab. Working in this airtight catacomb, he lived his life in masks, gloves, and disposable scrubs.

After eight years of research, Perron finally completed his retrovirus’s gene sequence. What he found on that day in 1997 no one could have predicted; it instantly explained why so many others had failed before him. We imagine viruses as mariners, sailing from person to person across oceans of saliva, snot, or semen—but Perron’s bug was a homebody. It lives permanently in the human body at the very deepest level: inside our DNA. After years slaving away in a biohazard lab, Perron realized that everyone already carried the virus that causes multiple sclerosis.

Other scientists had previously glimpsed Perron’s retrovirus without fully grasping its significance. In the 1970s biologists studying pregnant baboons were shocked as they looked at electron microscope images of the placenta. They saw spherical retroviruses oozing from the cells of seemingly healthy animals. They soon found the virus in healthy humans, too. So began a strange chapter in evolutionary biology.

+++
Viruses like influenza or measles kill cells when they infect them. But when retroviruses like HIV infect a cell, they often let the cell live and splice their genes into its DNA. When the cell divides, both of its progeny carry the retrovirus’s genetic code in their DNA.

In the past few years, geneticists have pieced together an account of how Perron’s retrovirus entered our DNA. Sixty million years ago, a lemurlike animal—an early ancestor of humans and monkeys—contracted an infection. It may not have made the lemur ill, but the retrovirus spread into the animal’s testes (or perhaps its ovaries), and once there, it struck the jackpot: It slipped inside one of the rare germ line cells that produce sperm and eggs. When the lemur reproduced, that retrovirus rode into the next generation aboard the lucky sperm and then moved on from generation to generation, nestled in the DNA. “It’s a rare, random event,” says Robert Belshaw, an evolutionary biologist at the University of Oxford in England. “Over the last 100 million years, there have been only maybe 50 times when a retrovirus has gotten into our genome and proliferated.”

But such genetic intrusions stick around a very long time, so humans are chockablock full of these embedded, or endogenous, retroviruses. Our DNA carries dozens of copies of Perron’s virus, now called human endogenous retrovirus W, or HERV-W, at specific addresses on chromosomes 6 and 7.

If our DNA were an airplane carry-on bag (and essentially it is), it would be bursting at the seams. We lug around 100,000 retrovirus sequences inside us; all told, genetic parasites related to viruses account for more than 40 percent of all human DNA. Our body works hard to silence its viral stowaways by tying up those stretches of DNA in tight stacks of proteins, but sometimes they slip out. Now and then endogenous retroviruses switch on and start manufacturing proteins. They assemble themselves like Lego blocks into bulbous retroviral particles, which ooze from the cells producing them.


Body-by-Guinness

  • Guest
Retroviruses & Schizophrenia, II
« Reply #94 on: November 12, 2010, 10:23:20 AM »
Endogenous retroviruses were long considered genetic fossils, incapable of doing anything interesting. But since Perron’s revelation, at least a dozen studies have found that HERV-W is active in people with MS.

By the time Perron made his discovery, Torrey and Yolken had spent about 15 years looking for a pathogen that causes schizophrenia. They found lots of antibodies but never the bug itself. Then Håkan Karlsson, who was a postdoctoral fellow in Yolken’s lab, became interested in studies showing that retroviruses sometimes triggered psychosis in AIDS patients. The team wondered if other retroviruses might cause these symptoms in separate diseases such as schizophrenia. So they used an experiment, similar to Perron’s, that would detect any retrovirus (by finding sequences encoding the reverse transcriptase enzyme)—even if it was one that had never been catalogued before. In 2001 they nabbed a possible culprit. It turned out to be HERV-W.
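
The logic of that catch-any-retrovirus screen is simple enough to sketch. The Python below is an illustration only, not the degenerate-PCR assay the teams actually used: it translates a DNA fragment in all six reading frames and flags the Y-x-D-D catalytic motif that retroviral reverse transcriptases share. The sample fragment and the use of Biopython are assumptions for the demo.

import re
from Bio.Seq import Seq  # Biopython, used here only for translation

RT_MOTIF = re.compile(r"Y[A-Z]DD")  # conserved catalytic core (YMDD, YVDD, ...)

def find_rt_candidates(dna):
    """Translate all six reading frames and report reverse-transcriptase motif hits."""
    hits = []
    seq = Seq(dna)
    for strand_name, strand in (("+", seq), ("-", seq.reverse_complement())):
        for frame in range(3):
            # Trim to a whole number of codons, then translate to protein.
            end = frame + 3 * ((len(strand) - frame) // 3)
            protein = str(strand[frame:end].translate())
            for m in RT_MOTIF.finditer(protein):
                hits.append((strand_name, frame, m.start(), m.group()))
    return hits

# Hypothetical fragment from a patient cDNA library; it encodes ...Y-M-D-D...
print(find_rt_candidates("ATGTATATGGATGACCTG"))
# [('+', 0, 1, 'YMDD')]  # a candidate worth sequencing, whatever the virus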

Several other studies have since found similar active elements of HERV-W in the blood or brain fluids of people with schizophrenia. One, published by Perron in 2008, found HERV-W in the blood of 49 percent of people with schizophrenia, compared with just 4 percent of healthy people. “The more HERV-W they had,” Perron says, “the more inflammation they had.” He now sees HERV-W as key to understanding many cases of both MS and schizophrenia. “I’ve been doubting for so many years,” he says. “I’m convinced now.”

Torrey, Yolken, and Sarven Sabunciyan, an epigeneticist at Johns Hopkins, are working to understand how endogenous retroviruses can wreak their havoc. Much of their research revolves around the contents of a nondescript brick building near Washington, D.C. This building, owned by the Stanley Medical Research Institute, maintains the world’s largest library of schizophrenic and bipolar brains. Inside are hundreds of cadaver brains (donated to science by the deceased), numbered 1 through 653. Each brain is split into right and left hemispheres, one half frozen at about –103 degrees Fahrenheit, the other chilled in formaldehyde. Jacuzzi-size freezers fill the rooms. The roar of their fans cuts through the air as Torrey’s team examines the brains to pinpoint where and when HERV-W awakens into schizophrenia.

New high-speed DNA sequencing is making the job possible. In a cramped room at Johns Hopkins Medical Center, a machine the size of a refrigerator hums 24/7 to read gene sequences from samples. Every few minutes the machine’s electric eye scans a digital image of a stamp-size glass plate. Fixed to that plate are 300 million magnetic beads, and attached to each bead is a single molecule of DNA, which the machine is sequencing. In a week the machine churns out the equivalent of six human genomes—enough raw data to fill 40 computer hard drives.

Torrey’s younger sister, Rhoda, stood on the lawn of the family home in upstate New York, looking into the distance. “The British,” she said. “The British are coming.” Just days before Rhoda should have started college, she was given a diagnosis of schizophrenia.

The hard part starts when those sequences arrive at Sabunciyan’s desk. “We got these data right around New Year’s 2009,” Sabunciyan said one day last August as he scrolled through a file containing 2 billion letters of genetic code, equivalent to 2,000 John Grisham novels composed just of the letters G, A, T, and C (making the plot a great deal more confusing). “We’re still looking at it.”

Sabunciyan has found that an unexpectedly large amount of the RNA produced in the brain—about 5 percent—comes from seemingly “junk” DNA, which includes endogenous retroviruses. RNA is a messenger of DNA, a step in the path to making proteins, so its presence could mean that viral proteins are being manufactured in the body more frequently than had been thought.
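
As a toy illustration of the bookkeeping behind a figure like that 5 percent, the sketch below counts what fraction of aligned RNA reads land inside annotated repeat/ERV intervals. The reads and intervals are invented; a real analysis works from genome-wide alignments and a RepeatMasker-style annotation track.

import bisect

def fraction_in_repeats(read_positions, repeat_intervals):
    """repeat_intervals: non-overlapping, sorted, half-open (start, end) pairs."""
    starts = [s for s, _ in repeat_intervals]
    hits = 0
    for pos in read_positions:
        i = bisect.bisect_right(starts, pos) - 1  # rightmost interval starting at or before pos
        if i >= 0 and pos < repeat_intervals[i][1]:
            hits += 1
    return hits / len(read_positions)

# Hypothetical data: eight read positions, two annotated ERV intervals.
reads = [120, 450, 910, 1300, 2050, 2200, 3100, 4000]
ervs = [(400, 1000), (2000, 2300)]
print(f"{fraction_in_repeats(reads, ervs):.0%} of reads map to repeat elements")  # 50%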

Through this research, a rough account is emerging of how HERV-W could trigger diseases like schizophrenia, bipolar disorder, and MS. Although the body works hard to keep its endogenous retroviruses (ERVs) under tight control, infections around the time of birth destabilize this tense standoff. Scribbled onto the marker board in Yolken’s office is a list of infections that are now known to awaken HERV-W—including herpes, toxoplasma, cytomegalovirus, and a dozen others. The HERV-W viruses that pour into the newborn’s blood and brain fluid during these infections contain proteins that may enrage the infant immune system. White blood cells vomit forth inflammatory molecules called cytokines, attracting more immune cells like riot police to a prison break. The scene turns toxic.

In one experiment, Perron isolated HERV-W virus from people with MS and injected it into mice. The mice became clumsy, then paralyzed, then died of brain hemorrhages. But if Perron depleted the mice of immune cells known as T cells, the animals survived their encounter with HERV-W. It was an extreme experiment, but to Perron it made an important point. Whether people develop MS or schizophrenia may depend on how their immune system responds to HERV-W, he says. In MS the immune system directly attacks and kills brain cells, causing paralysis. In schizophrenia it may be that inflammation damages neurons indirectly by overstimulating them. “The neuron is discharging neurotransmitters, being excited by these inflammatory signals,” Perron says. “This is when you develop hallucinations, delusions, paranoia, and hyper-suicidal tendencies.”

The first, pivotal infection by toxoplasmosis or influenza (and subsequent flaring up of HERV-W) might happen shortly before or after birth. That would explain the birth-month effect: Flu infections happen more often in winter. The initial infection could then set off a lifelong pattern in which later infections reawaken HERV-W, causing more inflammation and eventually symptoms. This process explains why schizophrenics gradually lose brain tissue. It explains why the disease waxes and wanes like a chronic infection. And it could explain why some schizophrenics suffer their first psychosis after a mysterious, mono-like illness.

+++
The infection theory could also explain what little we know of the genetics of schizophrenia. One might expect that the disease would be associated with genes controlling our synapses or neurotransmitters. Three major studies published last year in the journal Nature tell a different story. They instead implicate immune genes called human leukocyte antigens (HLAs), which are central to our body’s ability to detect invading pathogens. “That makes a lot of sense,” Yolken says. “The response to an infectious agent may be why person A gets schizophrenia and person B doesn’t.”

Gene studies have failed to provide simple explanations for ailments like schizophrenia and MS. Torrey’s theory may explain why. Genes may come into play only in conjunction with certain environmental kicks. Our genome’s thousands of parasites might provide part of that kick.

“The ‘genes’ that can respond to environmental triggers or toxic pathogens are the dark side of the genome,” Perron says. Retroviruses, including HIV, are known to be awakened by inflammation—possibly the result of infection, cigarette smoke, or pollutants in drinking water. (This stress response may be written into these parasites’ basic evolutionary strategy, since stressed hosts may be more likely to spread or contract infections.) The era of writing off endogenous retroviruses and other seemingly inert parts of the genome as genetic fossils is drawing to an end, Perron says. “It’s not completely junk DNA, it’s not dead DNA,” he asserts. “It’s an incredible source of interaction with the environment.” Those interactions may trigger disease in ways that we are only just beginning to imagine.

Torrey’s sister has had a tough go of it. Schizophrenia treatments were limited when she fell ill. Early on she received electroshock therapy and insulin shock therapy, in which doctors induced a coma by lowering her blood sugar level. Rhoda Torrey has spent 40 years in state hospitals. The disease has left only one part of her untouched: Her memory of her brief life before becoming ill—of school dances and sleepovers half a century ago—remains as clear as ever.

Steven Elmore was more fortunate. Drug therapy was widely available when he fell ill, and although he still hears voices from time to time, he has done well. Now 50 years old, he is married, cares for an adopted son and stepson, and works full time. He has avoided common drug side effects like diabetes, although his medications initially caused him to gain 40 pounds.

Torrey and Yolken hope to add a new, more hopeful chapter to this story. Yolken’s wife, Faith Dickerson, is a clinical psychologist at Sheppard Pratt Health System in Baltimore. She is running a clinical trial to examine whether adding an anti-infective agent called artemisinin to the drugs that patients are already taking can lessen the symptoms of schizophrenia. The drug would hit HERV-W indirectly by tamping down the infections that awaken it. “If we can treat the toxoplasmosis,” Torrey says, “presumably we can get a better outcome than by treating [neurotransmitter] abnormalities that have occurred 14 steps down the line, which is what we’re doing now.”

Looking ahead, better prenatal care or vaccinations could prevent the first, early infections that put some people on a path to schizophrenia. For high-risk babies who do get sick, early treatment might prevent psychosis from developing two decades later. Recent work by Urs Meyer, the neuroimmunologist, and his colleague Joram Feldon at the Swiss Federal Institute of Technology drives this point home. When they injected pregnant mice with RNA molecules mimicking viral infections, the pups grew up to resemble schizophrenic adults. The animals’ memory and learning were impaired, they overreacted to startling noises, and their brains atrophied. But this March, Meyer and Feldon reported that treating the baby mice with antipsychotic drugs prevented them from developing some of these abnormalities as adults.

Perron has founded a biotech start-up—GeNeuro, in Geneva, Switzerland—to develop treatments targeting HERV-W. The company has created an antibody that neutralizes a primary viral protein, and it works in lab mice with MS. “We have terrific effects,” Perron says. “In animals that have demyelinating brain lesions induced by these HERV envelope proteins, we see a dramatic stop to this process when we inject this antibody.” He is scheduled to begin a Phase 1 clinical trial in people with MS near the end of this year. A clinical trial with schizophrenics might follow in 2011.

Even after all that, many medical experts still question how much human disease can be traced to viral invasions that took place millions of years ago. If the upcoming human trials work as well as the animal experiments, the questions may be silenced—and so may the voices of schizophrenia.

http://discovermagazine.com/2010/jun/03-the-insanity-virus/

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
NY Times: This is your brain on metaphors
« Reply #95 on: November 19, 2010, 08:02:17 AM »
Despite rumors to the contrary, there are many ways in which the human brain isn’t all that fancy. Let’s compare it to the nervous system of a fruit fly. Both are made up of cells, of course, with neurons playing particularly important roles. Now one might expect that a neuron from a human will differ dramatically from one from a fly. Maybe the human’s will have especially ornate ways of communicating with other neurons, making use of unique “neurotransmitter” messengers. Maybe compared to the lowly fly neuron, human neurons are bigger, more complex, in some way can run faster and jump higher.

But no. Look at neurons from the two species under a microscope and they look the same. They have the same electrical properties, many of the same neurotransmitters, the same protein channels that allow ions to flow in and out, as well as a remarkably high number of genes in common. Neurons are the same basic building blocks in both species.

So where’s the difference? It’s numbers — humans have roughly one million neurons for each one in a fly. And out of a human’s 100 billion neurons emerge some pretty remarkable things. With enough quantity, you generate quality.


Neuroscientists understand the structural bases of some of these qualities. Take language, that uniquely human behavior. Underlying it are structures unique to the human brain — regions like “Broca’s area,” which specializes in language production. Then there’s the brain’s “extrapyramidal system,” which is involved in fine motor control. The complexity of the human version allows us to do something that, say, a polar bear could never accomplish — sufficiently independent movement of digits to play a trill on the piano, for instance. Particularly striking is the human frontal cortex. While occurring in all mammals, the human version is proportionately bigger and denser in its wiring. And what is the frontal cortex good for? Emotional regulation, gratification postponement, executive decision-making, long-term planning. We study hard in high school to get admitted to a top college to get into grad school to get a good job to get into the nursing home of our choice. Gophers don’t do that.

There’s another domain of unique human skills, and neuroscientists are learning a bit about how the brain pulls it off.

Consider the following from J. Ruth Gendler’s wonderful “The Book of Qualities,” a collection of “character sketches” of different qualities, emotions and attributes:

Anxiety is secretive. He does not trust anyone, not even his friends, Worry, Terror, Doubt and Panic … He likes to visit me late at night when I am alone and exhausted. I have never slept with him, but he kissed me on the forehead once, and I had a headache for two years …

Or:

Compassion speaks with a slight accent. She was a vulnerable child, miserable in school, cold, shy … In ninth grade she was befriended by Courage. Courage lent Compassion bright sweaters, explained the slang, showed her how to play volleyball.

What is Gendler going on about? We know, and feel pleasure triggered by her unlikely juxtapositions. Despair has stopped listening to music. Anger sharpens kitchen knives at the local supermarket. Beauty wears a gold shawl and sells seven kinds of honey at the flea market. Longing studies archeology.

Symbols, metaphors, analogies, parables, synecdoche, figures of speech: we understand them. We understand that a captain wants more than just hands when he orders all of them on deck. We understand that Kafka’s “Metamorphosis” isn’t really about a cockroach. If we are of a certain theological ilk, we see bread and wine intertwined with body and blood. We grasp that the right piece of cloth can represent a nation and its values, and that setting fire to such a flag is a highly charged act. We can learn that a certain combination of sounds put together by Tchaikovsky represents Napoleon getting his butt kicked just outside Moscow. And that the name “Napoleon,” in this case, represents thousands and thousands of soldiers dying cold and hungry, far from home.

And we even understand that June isn’t literally busting out all over. It would seem that doing this would be hard enough to cause a brainstorm. So where did this facility with symbolism come from? It strikes me that the human brain has evolved a necessary shortcut for doing so, and with some major implications.

Consider an animal (including a human) that has started eating some rotten, fetid, disgusting food. As a result, neurons in an area of the brain called the insula will activate. Gustatory disgust. Smell the same awful food, and the insula activates as well. Think about what might count as a disgusting food (say, taking a bite out of a struggling cockroach). Same thing.

Now read in the newspaper about a saintly old widow who had her home foreclosed by a sleazy mortgage company, her medical insurance canceled on flimsy grounds, and got a lousy, exploitative offer at the pawn shop where she tried to hock her kidney dialysis machine. You sit there thinking, those bastards, those people are scum, they’re worse than maggots, they make me want to puke … and your insula activates. Think about something shameful and rotten that you once did … same thing. Not only does the insula “do” sensory disgust; it does moral disgust as well. Because the two are so viscerally similar. When we evolved the capacity to be disgusted by moral failures, we didn’t evolve a new brain region to handle it. Instead, the insula expanded its portfolio.

Or consider pain. Somebody pokes your big left toe with a pin. Spinal reflexes cause you to instantly jerk your foot back just as they would in, say, a frog. Evolutionarily ancient regions activate in the brain as well, telling you about things like the intensity of the pain, or whether it’s a sharp localized pain or a diffuse burning one. But then there’s a fancier, more recently evolved brain region in the frontal cortex called the anterior cingulate that’s involved in the subjective, evaluative response to the pain. A piranha has just bitten you? That’s a disaster. The shoes you bought are a size too small? Well, not as much of a disaster.

Now instead, watch your beloved being poked with the pin. And your anterior cingulate will activate, as if it were you in pain. There’s a neurotransmitter called Substance P that is involved in the nuts and bolts circuitry of pain perception. Administer a drug that blocks the actions of Substance P to people who are clinically depressed, and they often feel better, feel less of the world’s agonies. When humans evolved the ability to be wrenched with feeling the pain of others, where was it going to process it? It got crammed into the anterior cingulate. And thus it “does” both physical and psychic pain.

Another truly interesting domain in which the brain confuses the literal and metaphorical is cleanliness. In a remarkable study, Chen-Bo Zhong of the University of Toronto and Katie Liljenquist of Northwestern University demonstrated how the brain has trouble distinguishing between being a dirty scoundrel and being in need of a bath. Volunteers were asked to recall either a moral or immoral act in their past. Afterward, as a token of appreciation, Zhong and Liljenquist offered the volunteers a choice between the gift of a pencil or of a package of antiseptic wipes. And the folks who had just wallowed in their ethical failures were more likely to go for the wipes. In the next study, volunteers were told to recall an immoral act of theirs. Afterward, subjects either did or did not have the opportunity to clean their hands. Those who were able to wash were less likely to respond to a request for help (that the experimenters had set up) that came shortly afterward. Apparently, Lady Macbeth and Pontius Pilate weren’t the only ones to metaphorically absolve their sins by washing their hands.

This potential to manipulate behavior by exploiting the brain’s literal-metaphorical confusions about hygiene and health is also shown in a study by Mark Landau and Daniel Sullivan of the University of Kansas and Jeff Greenberg of the University of Arizona. Subjects either did or didn’t read an article about the health risks of airborne bacteria. All then read a history article that used imagery of a nation as a living organism with statements like, “Following the Civil War, the United States underwent a growth spurt.” Those who read about scary bacteria before thinking about the U.S. as an organism were then more likely to express negative views about immigration.

Another example of how the brain links the literal and the metaphorical comes from a study by Lawrence Williams of the University of Colorado and John Bargh of Yale. Volunteers met one of the experimenters, believing that they would be starting the experiment shortly. In reality, the experiment began when the experimenter, seemingly struggling with an armful of folders, asked the volunteer to briefly hold their coffee. As the key experimental manipulation, the coffee was either hot or iced. Subjects then read a description of some individual, and those who had held the warmer cup tended to rate the individual as having a warmer personality, with no change in ratings of other attributes.

Another brilliant study by Bargh and colleagues concerned haptic sensations (I had to look the word up — haptic: related to the sense of touch). Volunteers were asked to evaluate the resumes of supposed job applicants where, as the critical variable, the resume was attached to a clipboard of one of two different weights. Subjects who evaluated the candidate while holding the heavier clipboard tended to judge candidates to be more serious, with the weight of the clipboard having no effect on how congenial the applicant was judged. After all, we say things like “weighty matter” or “gravity of a situation.”

What are we to make of the brain processing literal and metaphorical versions of a concept in the same brain region? Or that our neural circuitry doesn’t cleanly differentiate between the real and the symbolic? What are the consequences of the fact that evolution is a tinkerer and not an inventor, and has duct-taped metaphors and symbols to whichever pre-existing brain areas provided the closest fit?

Jonathan Haidt, of the University of Virginia, has shown how viscera and emotion often drive our decisionmaking, with conscious cognition mopping up afterward, trying to come up with rationalizations for that gut decision. The viscera that can influence moral decisionmaking and the brain’s confusion about the literalness of symbols can have enormous consequences. Part of the emotional contagion of the genocide of Tutsis in Rwanda arose from the fact that when militant Hutu propagandists called for the eradication of the Tutsi, they iconically referred to them as “cockroaches.” Get someone to the point where his insula activates at the mention of an entire people, and he’s primed to join the bloodletting.

But if the brain confusing reality and literalness with metaphor and symbol can have adverse consequences, the opposite can occur as well. At one juncture just before the birth of a free South Africa, Nelson Mandela entered secret negotiations with an Afrikaans general with death squad blood all over his hands, a man critical to the peace process because he led a large, well-armed Afrikaans resistance group. They met in Mandela’s house, the general anticipating tense negotiations across a conference table. Instead, Mandela led him to the warm, homey living room, sat beside him on a comfy couch, and spoke to him in Afrikaans. And the resistance melted away.

This neural confusion about the literal versus the metaphorical gives symbols enormous power, including the power to make peace. The political scientist and game theorist Robert Axelrod of the University of Michigan has emphasized this point in thinking about conflict resolution. For example, in a world of sheer rationality where the brain didn’t confuse reality with symbols, bringing peace to Israel and Palestine would revolve around things like water rights, placement of borders, and the extent of militarization allowed to Palestinian police. Instead, argues Axelrod, “mutual symbolic concessions” of no material benefit will ultimately make all the difference. He quotes a Hamas leader who says that for the process of peace to go forward, Israel must apologize for the forced Palestinian exile in 1948. And he quotes a senior Israeli official saying that for progress to be made, Palestinians need to first acknowledge Israel’s right to exist and to get their anti-Semitic garbage out of their textbooks.

Hope for true peace in the Middle East didn’t come with the news of a trade agreement being signed. It was when President Hosni Mubarak of Egypt and King Hussein of Jordan attended the funeral of the murdered Israeli prime minister Yitzhak Rabin. That same hope came to the Northern Irish, not when ex-Unionist demagogues and ex-I.R.A. gunmen served in a government together, but when those officials publicly commiserated about each other’s family misfortunes, or exchanged anniversary gifts. And famously, for South Africans, it came not with successful negotiations about land reapportionment, but when black South Africa embraced rugby and Afrikaans rugby jocks sang the A.N.C. national anthem.

Nelson Mandela was wrong when he advised, “Don’t talk to their minds; talk to their hearts.” He meant talk to their insulas and cingulate cortices and all those other confused brain regions, because that confusion could help make for a better world.

(Robert Sapolsky’s essay is the subject of this week’s forum discussion among the humanists and scientists at On the Human, a project of the National Humanities Center.)




Body-by-Guinness

  • Guest
Religion & Cooperative Endeavors
« Reply #96 on: December 01, 2010, 09:11:18 AM »
Hmm, as an almost atheist who often falls into altruistic behaviors, I'm not sure where I fit in this picture:

http://reason.com/archives/2010/11/30/the-eleventh-commandment-punis
Reason Magazine


The Eleventh Commandment: Punish Free Riders

Religion and the evolutionary origin of cooperation

Ronald Bailey | November 30, 2010

Two of the deep puzzles in human evolution are religion and cooperation between genetically unrelated strangers. In recent years, many researchers have come to believe the two phenomena are intimately linked. If people believe they are being watched and judged by an omnipresent supernatural entity, they may be more willing to perform emotionally binding and costly rituals to signal commitment to a group. The same sense of being watched may also encourage people to be helpful to others—even when there is no obvious reproductive payoff. In other words: Science suggests that God—and His followers—hate free riders.

A 2007 study by University of British Columbia psychologists Azim Shariff and Ara Norenzayan found that players in an anonymous economic game were more generous if they were primed with religious concepts before beginning play. In this case, the subjects participated in the dictator game, in which they get to anonymously divvy up $10 between themselves and an unknown individual. The researchers assigned players to three groups. One group was primed with religious concepts by having them unscramble 10 five-word sentences, dropping an extraneous word from each to create a grammatical four-word sentence. For example, “dessert divine was fork the” would become “the dessert was divine.” The religious words were spirit, divine, God, sacred, and prophet. A second group was primed with words connoting secular moral institutions, e.g., civic, jury, court, police, and contract. The third group unscrambled sentences containing neutral words. So what did they find?

Earlier studies using the dictator game consistently found that subjects in general behaved selfishly by taking most of the money for themselves. In this case, players in the neutral game offered an average of $2.56 to other players. However, players who had been primed with religious concepts offered an average of $4.56. Interestingly, players primed with secular moral concepts offered $4.44, nearly as much as players exposed to religious primes. Self-reported belief in God was not a good predictor of generosity in the neutral prime version of the game; it seems believers needed reminders to be more generous.

But how do the invisible omnipresent gods encourage generosity to strangers? Of course, the gods can reward believers for good behavior, but they also punish them for bad behavior. It is how this aspect of religious belief affects cooperation that a team of researchers led by University of London psychologist Ryan McKay attempts to probe in a study released last week, “Wrath of God: Religious primes and punishment.”

One of the chief fears of people who want to cooperate is that they will be chumps who are taken advantage of by free riders. Earlier research using public goods economic games found that cooperation was considerably enhanced if players had an opportunity to punish free riders. In these games, players can invest in a common pool which then grows and is divvied up among all the players. Free riders, however, can make more money by refusing to invest and yet get a share of the growing pool. Research shows that cooperation breaks down completely when such free riders cannot be punished by other players. But when other players can pay to reduce the holdings of free riders, they begin to play fairly and cooperation dramatically increases.
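
A minimal simulation makes those incentives concrete. The parameters below (an endowment of 20, a pot multiplied by 1.6, punishment at three to one) are assumptions for illustration, not the actual values from these studies.

def public_goods_round(contributions, multiplier=1.6, endowment=20):
    """Each player keeps (endowment - contribution) plus an equal share of the pot."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

def punish(payoffs, punisher, target, spend, ratio=3):
    """The punisher pays `spend` to reduce the target's payoff by `ratio * spend`."""
    payoffs = list(payoffs)
    payoffs[punisher] -= spend
    payoffs[target] -= ratio * spend
    return payoffs

# Three cooperators contribute everything; one free rider contributes nothing.
payoffs = public_goods_round([20, 20, 20, 0])
print(payoffs)                                          # [24.0, 24.0, 24.0, 44.0]

# One cooperator pays 4 to dock the free rider 12. Repeated over rounds, this
# is what makes free riding unprofitable and keeps contributions high.
print(punish(payoffs, punisher=0, target=3, spend=4))   # [20.0, 24.0, 24.0, 32.0]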

In the new study, McKay and his colleagues sought to find out if religious priming promotes costly punishment of unfair behavior. In this experiment, one player could choose between splitting a pot of money evenly between herself and a second player or choosing another option in which the split was about nine to one. If the second player believed the choice was unfair, she could punish the first player by spending a portion of her allocation to reduce the take of the first player at a rate of three to one, e.g., if she spent 50, the first player would lose 150. The players were subliminally primed by words flashing on a computer screen. They were divided into four groups: one was exposed to religious words, another to punishment words, a third to punishment and religious words, and a fourth to neutral words. Afterwards, players were asked about their religious beliefs and whether they had donated to a religious organization in the past year.
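
The payoff arithmetic of that design is easy to make explicit. In the sketch below the pot of 100 is a hypothetical stand-in for the study's stakes, and the nine-to-one split is assumed to favor the chooser; only those two ratios come from the description above.

def unfair_split(pot=100):
    """The chooser keeps roughly nine shares in ten (assumed direction of the split)."""
    return 0.9 * pot, 0.1 * pot                         # (chooser, responder)

def after_punishment(chooser, responder, spend, ratio=3):
    """The responder burns `spend` of her own share to cut the chooser `ratio`-to-one."""
    return chooser - ratio * spend, responder - spend

chooser, responder = unfair_split()
print(chooser, responder)                               # 90.0 10.0
print(after_punishment(chooser, responder, spend=10))   # (60.0, 0.0)
# Punishment here is maximally "costly": the responder ends with nothing,
# yet the unfair chooser loses three times as much.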

The results? “Our study reveals that for those who financially support religious institutions, subliminal religious messages strongly increase the costly punishment of unfair behavior, even when such punishment is to their individual material disadvantage,” says McKay in a press release describing the research. Subliminal religious priming did not have a significant effect on other players.

So why does religious priming induce committed believers to punish unfair behavior? The researchers suggest two possibilities. The first is that religious primes trigger the idea that one is being watched by the gods. “In this case primed participants punish unfair behaviors because they sense that not doing so will damage their standing in the eyes of a supernatural agent,” they speculate. The second hypothesis is that religious primes “activate cultural norms pertaining to fairness and its enforcement and occasion behavior consistent with those norms.” McKay and his colleagues acknowledge that religious primes might actually invoke both mechanisms. In either case, while the gods may punish uncooperative sinners, their work is considerably enhanced if believers go out of their way to punish sinners too.

These studies do bolster the idea that ancestral belief in supernatural entities enhanced group cooperation, enabling believers to out-compete other groups. As Shariff and Norenzayan observe, “If the cultural spread of supernatural moralizing agents expanded the circle of cooperation to unrelated strangers, it may well have allowed small groups to grow into large-scale societies, from the early towns of Jericho and Ur to the metropolises of today.”

Ronald Bailey is Reason's science correspondent. His book Liberation Biology: The Scientific and Moral Case for the Biotech Revolution is now available from Prometheus Books.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
NYT: Smiles
« Reply #97 on: January 25, 2011, 05:57:29 AM »
In the middle of a phone call four years ago, Paula Niedenthal began to wonder what it really means to smile. The call came from a Russian reporter, who was interviewing Dr. Niedenthal about her research on facial expressions.

“At the end he said, ‘So you are American?’ ” Dr. Niedenthal recalled.

Indeed, she is, although she was then living in France, where she had taken a post at Blaise Pascal University.

“So you know,” the Russian reporter informed her, “that American smiles are all false, and French smiles are all true.”

“Wow, it’s so interesting that you say that,” Dr. Niedenthal said diplomatically. Meanwhile, she was imagining what it would have been like to spend most of her life surrounded by fake smiles.

“I suddenly became interested in how people make these kinds of errors,” Dr. Niedenthal said. But finding the source of the error would require knowing what smiles really are — where they come from and how people process them. And despite the fact that smiling is one of the most common things that we humans do, Dr. Niedenthal found science’s explanation for it to be weak.

“I think it’s pretty messed up,” she said. “I think we don’t know very much, actually, and it’s something I want to take on.”

To that end, Dr. Niedenthal and her colleagues have surveyed a wide range of studies, from brain scans to cultural observations, to build a new scientific model of the smile. They believe they can account not only for the source of smiles, but how people perceive them. In a recent issue of the journal Behavioral and Brain Sciences, they argue that smiles are not simply the expression of an internal feeling. Smiles in fact are only the most visible part of an intimate melding between two minds.

“It’s an impressive, sophisticated analysis,” said Adam Galinsky, a social psychologist at Northwestern University.

Psychologists have studied smiles carefully for decades, but mostly from the outside. When the zygomaticus major muscles in our cheeks contract, they draw up the corners of our mouths. But there’s much more to a smile than that.

“A smile is not this floating thing, like a Cheshire Cat,” said Dr. Niedenthal. “It’s attached to a body.” Sometimes the lips open to reveal teeth; sometimes they stay sealed. Sometimes the eyes crinkle. The chin rises with some smiles, and drops in others.

Cataloging these variations is an important first step, said Dr. Niedenthal, but it can’t deliver an answer to the enigma of smiles. “People like to make dictionaries of the facial muscles to make a particular gesture, but there’s no depth to that approach,” she said.

Some researchers have tried to move deeper, to understand the states of mind that produce smiles. We think of them as signifying happiness, and indeed, researchers do find that the more intensely people contract their zygomaticus major muscles, the happier they say they feel. But this is far from an iron law. The same muscles sometimes contract when people are feeling sadness or disgust, for example.

The link between feelings and faces is even more mysterious. Why should any feeling cause us to curl up our mouths, after all? This is a question that Darwin pondered for years. An important clue, he said, is found in the faces of apes, which draw up their mouths as well. These expressions, Darwin argued, were also smiles. In other words, Mona Lisa inherited her endlessly intriguing smile from the grinning common ancestor she shared with chimpanzees.

Primatologists have been able to sort smiles into a few categories, and Dr. Niedenthal thinks that human smiles should be classified in the same way. Chimpanzees sometimes smile from pleasure, as when baby chimps play with each other. But chimpanzees also smile when they’re trying to strengthen a social bond with another chimpanzee.

Dr. Niedenthal thinks that some human smiles fall into these categories as well. What’s more, they may be distinguished by certain expressions. An embarrassed smile is often accompanied by a lowered chin, for example, while a smile of greeting often comes with raised eyebrows.

Chimpanzees sometimes smile not for pleasure or for a social bond, but for power. A dominant chimpanzee will grin and show its teeth. Dr. Niedenthal argues that humans flash a power grin as well — often raising their chin so as to look down at others.

“ ‘You’re an idiot, I’m better than you’—that’s what we mean by a dominant smile,” said Dr. Niedenthal.

But making a particular facial expression is just the first step of a smile. Dr. Niedenthal argues that how another person interprets the smile is equally important. In her model, the brain can use three different means to distinguish a smile from some other expression.

One way people recognize smiles is by comparing the geometry of a person’s face to a standard smile. A second way is to think about the situation in which someone is making an expression, judging if it’s the sort where a smile would be expected.

But most importantly, Dr. Niedenthal argues, people recognize smiles by mimicking them. When a smiling person locks eyes with another person, the viewer unknowingly mimics a smile as well. In their new paper, Dr. Niedenthal and her colleagues point to a number of studies indicating that this imitation activates many of the same regions of the brain that are active in the smiler.

A happy smile, for example, is accompanied by activity in the brain’s reward circuits, and looking at a happy smile can excite those circuits as well. Mimicking a friendly smile produces a different pattern of brain activity. It activates a region of the brain called the orbitofrontal cortex, which distinguishes feelings for people with whom we have a close relationship from others. The orbitofrontal cortex becomes active when parents see their own babies smile, for example, but not other babies.

If Dr. Niedenthal’s model is correct, then studies of dominant smiles should reveal different patterns of brain activity. Certain regions associated with negative emotions should become active.

Embodying smiles not only lets people recognize smiles, Dr. Niedenthal argues. It also lets them recognize false smiles. When they unconsciously mimic a false smile, they don’t experience the same brain activity as an authentic one. The mismatch lets them know something’s wrong.

Other experts on facial expressions applaud Dr. Niedenthal’s new model, but a number of them also think that parts of it require fine-tuning. “Her model fits really well along the horizontal dimension, but I have my doubts about the vertical,” said Dr. Galinsky. He questions whether people observing a dominant smile would experience the feeling of power themselves. In fact, he points out, in such encounters, people tend to avoid eye contact, which Dr. Niedenthal says is central to her model.

Dr. Niedenthal herself is now testing the predictions of the model with her colleagues. In one study, she and her colleagues are testing the idea that mimicry lets people recognize authentic smiles. They showed pictures of smiling people to a group of students. Some of the smiles were genuine and others were fake. The students could readily tell the difference between them.

Then Dr. Niedenthal and her colleagues asked the students to place a pencil between their lips. This simple action engaged muscles that could otherwise produce a smile. Unable to mimic the faces they saw, the students had a much harder time telling which smiles were real and which were fake.

The scientists then ran a variation on the experiment on another group of students. They showed the same faces to the second group, but had them imagine the smiling faces belonged to salesclerks in a shoe store. In some cases the salesclerks had just sold the students a pair of shoes — in which they might well have a genuine smile of satisfaction. In other trials, they imagined that the salesclerks were trying to sell them a pair of shoes — in which case they might be trying to woo the customer with a fake smile.

In reality, the scientists used a combination of real and fake smiles for both groups of salesclerks. When the students were free to mimic the smiles, their judgments were not affected by what the salesclerk was doing.

But if the students put a pencil in their mouth, they could no longer rely on their mimicry. Instead, they tended to believe that the salesclerks who were trying to sell them shoes were faking their smiles — even when their smiles were genuine. Likewise, they tended to say that the salesclerks who had finished the sale were smiling for real, even when they weren’t. In other words, they were forced to rely on the circumstances of the smile, rather than the smile itself.

Dr. Niedenthal and her colleagues have also been testing the importance of eye contact for smiles. They had students look at a series of portraits, like the “Laughing Cavalier” by the 17th-century artist Frans Hals. In some portraits the subject looked away from the viewer, while in others, the gaze was eye to eye. In some trials, the students looked at the paintings with bars masking the eyes.

The participants rated how emotional the impact of the painting was. Dr. Niedenthal and her colleagues found, as they had predicted, that people felt a bigger emotional impact when the eyes were unmasked than when they were masked. The smile was identical in each painting, but it was not enough on its own. What’s more, the differences were greater when the portrait face was making direct eye contact with the viewer.

Dr. Niedenthal suspects that she and other psychologists are just starting to learn secrets about smiles that artists figured out centuries ago. It may even be possible someday to understand why Mona Lisa’s smile is so powerful. “I would say the reason it was so successful is because you achieve eye contact with her,” said Dr. Niedenthal, “and so the fact that the meaning of her smile is complicated is doubly communicated, because your own simulation of it is mysterious and difficult.”

Body-by-Guinness

  • Guest
Connectomics
« Reply #98 on: April 12, 2011, 06:22:20 PM »
Connectomics
Published by Steven Novella under Neuroscience
There are approximately 100 billion neurons in the adult human brain. Each neuron makes thousands of connections to other neurons, resulting in approximately 150 trillion connections in the human brain. The pattern of those connections is largely responsible for the functionality of the brain – everything we sense, feel, think, and do. Neuroscientists are attempting to map those connections – in an effort known as connectomics. (Just as genomics is the effort to map the genome, and proteomics is mapping all the proteins that make up an organism.)
This is no small task. No matter how you look at it, 150 trillion is a lot of connections. One research group working on this project is a team led by Thomas Mrsic-Flogel at University College London. They recently published a paper in Nature in which they map some of the connections in the mouse visual cortex.
What they did was to first determine the function of specific areas and neurons in the mouse visual cortex in living mice. For example, they determined which orientation each neuron is sensitive to. In the visual cortex different neurons respond to different orientations (vertical vs horizontal, for example). Once they mapped the directional function of the neurons they then mapped the connections between those neurons in vitro (after removing the brain). They found that neurons made more connections to other neurons with the same directional response, rather than to neurons with sensitivity to different (orthogonal) directions.
The techniques used allowed them to make a map of connections in part of the mouse visual cortex and correlate the pattern of those connections to the functionality of that cortex. The resulting connectomics map is still partial and crude, but it is a step in the direction of reproducing the connections in the brain.
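The logic of that analysis can be sketched in a few lines: take each neuron's preferred orientation and a matrix recording which pairs were found to be connected, then compare connection rates for similarly tuned versus orthogonally tuned pairs. Everything below is fabricated for illustration, including the built-in bias the "measurement" then recovers; the real study tested actual synaptic connections in brain slices.

import numpy as np

rng = np.random.default_rng(0)
n = 200
pref = rng.uniform(0, 180, n)            # each neuron's preferred orientation, in degrees

def orientation_diff(a, b):
    """Smallest angular difference on the 180-degree orientation circle."""
    d = np.abs(a - b) % 180
    return np.minimum(d, 180 - d)

# Fabricate a connectome in which connection probability falls off with tuning
# difference (the hypothesis under test), then "measure" that bias back out.
diff = orientation_diff(pref[:, None], pref[None, :])
connected = rng.random((n, n)) < 0.25 * np.exp(-diff / 30.0)
np.fill_diagonal(connected, False)

off_diag = ~np.eye(n, dtype=bool)
similar = (diff < 22.5) & off_diag       # roughly co-tuned pairs
orthogonal = (diff > 67.5) & off_diag    # roughly orthogonal pairs
print("P(connected | similar tuning):   ", connected[similar].mean())     # approx. 0.18
print("P(connected | orthogonal tuning):", connected[orthogonal].mean())  # approx. 0.02
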
One way to think about these kinds of techniques is that they promise to take us a level deeper in our understanding of brain anatomy. At present we have mapped the mammalian, and specifically human, brain to the point that we can identify specific regions of the brain and link them to some specific function. For the more complex areas of the brain we are still refining our map of these brain modules and the networks they form.
To give an example of where we are with this, clinical neurologists are often able to predict where a stroke is located simply by the neurological exam. We can correlate specific deficits with known brain structures, and the availability of MRI scanning means that we get rapid and precise feedback on our accuracy. We are very good at localizing deficits of strength, sensation, vision, and also many higher cortical functions like language, calculations, visuo-spatial reasoning, performing learned motor tasks, and others.
But we are still a long way from being able to reproduce the connections in the brain in fine detail – say, with sufficient accuracy to produce a virtual brain in a computer simulation (even putting aside the question of computing power). And that is exactly the goal of connectomics.
Along the way these research efforts will increase our knowledge of brain anatomy and function, as we learn exactly how different brain regions connect to each other and correlate them with specific functions. Neuroscientists are still picking the low-hanging fruit, such as mapping the visual cortex, which has some straightforward organization that correlates with concepts that are easy to identify and understand – like mapping to an actual layout of the visual field, and to specific features of vision such as contrast and orientation.
For more abstract areas of the brain, like those that are involved with planning, making decisions, directing our attention, feeling as if we are inside our own bodies, etc., connectomics is likely to be more challenging. Right now we are mainly using fMRI scans for these kinds of studies, which has been very successful, but does not produce a fine map of connections (more of a brain region map). Also, the more abstract the function the more difficult it will be to use mice or other animals as subjects, and when using humans you cannot use certain techniques, like removing the brain and slicing it up (at least not on living subjects).
The utility of this kind of research is a better understanding of brain function, and all that flows from that. We cannot anticipate all the potential benefits, and the most fruitful outcome may derive from knowledge we are not even aware we are missing.
This also plays into the research efforts to create a virtual representation of the human brain, complete with all the connections. This is one pathway to artificial intelligence. Estimates vary, but it seems like we will have the computer power sometime this century to create a virtual human brain that can function in real time, and then, of course, become progressively faster.
I should note that the connections among neurons in the brain are not the only feature that contributes to brain function. The astrocytes and other “support” cells also contribute to brain function. There is also a biochemical level to brain function – the availability of specific neurotransmitters, for example. So even if we could completely reproduce the neuronal connections in the brain, there are other layers of complexity superimposed upon this.
 
In any case, this is fascinating research and it will be nice to see how it progresses over the next few decades.

http://theness.com/neurologicablog/?p=3096

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69392
    • View Profile
WSJ: One language mother of all others?
« Reply #99 on: April 14, 2011, 03:35:38 PM »
By GAUTAM NAIK
The world's 6,000 or so modern languages may have all descended from a single ancestral tongue spoken by early African humans between 50,000 and 70,000 years ago, a new study suggests.

The finding, published Thursday in the journal Science, could help explain how the first spoken language emerged, spread and contributed to the evolutionary success of the human species.

Quentin Atkinson, an evolutionary psychologist at the University of Auckland in New Zealand and author of the study, found that the first migrating populations leaving Africa laid the groundwork for all the world's cultures by taking their single language with them—the mother of all mother tongues.

"It was the catalyst that spurred the human expansion that we all are a product of," Dr. Atkinson said.

About 50,000 years ago—the exact timeline is debated—there was a sudden and marked shift in how modern humans behaved. They began to create cave art and bone artifacts and developed far more sophisticated hunting tools. Many experts argue that this unusual spurt in creative activity was likely caused by a key innovation: complex language, which enabled abstract thought. The work done by Dr. Atkinson supports this notion.

His research is based on phonemes, distinct units of sound such as vowels, consonants and tones, and an idea borrowed from population genetics known as "the founder effect." That principle holds that when a very small number of individuals break off from a larger population, there is a gradual loss of genetic variation and complexity in the breakaway group.

Dr. Atkinson figured that if a similar founder effect could be discerned in phonemes, it would support the idea that modern verbal communication originated on that continent and only then expanded elsewhere.

In an analysis of 504 world languages, Dr. Atkinson found that, on average, dialects with the most phonemes are spoken in Africa, while those with the fewest phonemes are spoken in South America and on tropical islands in the Pacific.
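
In code, the core of such a founder-effect test is just a regression of phoneme inventory size on distance from Africa. The five data points below are invented to show the mechanics, and a toy fit is far cleaner than the real one; as noted further on, distance actually explains only about 19% of the variation, and Atkinson's analysis also controlled for factors such as population size.

import numpy as np

# Hypothetical (distance-from-Africa, phoneme-count) pairs for five languages.
distance_km = np.array([1000, 6000, 12000, 18000, 24000], dtype=float)
phonemes    = np.array([  60,   45,    33,    25,    14], dtype=float)

slope, intercept = np.polyfit(distance_km, phonemes, 1)
predicted = slope * distance_km + intercept
r_squared = 1 - ((phonemes - predicted) ** 2).sum() / ((phonemes - phonemes.mean()) ** 2).sum()

print(f"slope: {slope:.5f} phonemes per km")   # negative slope = decay out of Africa
print(f"R^2:   {r_squared:.2f}")               # near 1 here only because the toy data are clean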

The study also found that the pattern of phoneme usage globally mirrors the pattern of human genetic diversity, which also declined as modern humans set up colonies elsewhere. Today, areas such as sub-Saharan Africa that have hosted human life for millennia still use far more phonemes in their languages than more recently colonized regions do.

"It's a wonderful contribution and another piece of the mosaic" supporting the out-of-Africa hypothesis, said Ekkehard Wolff, professor emeritus of African Languages and Linguistics at the University of Leipzig in Germany, who read the paper.

Dr. Atkinson's findings are consistent with the prevailing view of the origin of modern humans, known as the "out of Africa" hypothesis. Bolstered by recent genetic evidence, it says that modern humans emerged in Africa alone, about 200,000 years ago. Then, about 50,000 to 70,000 years ago, a small number of them moved out and colonized the rest of the world, becoming the ancestors of all non-African populations on the planet.

The origin of early languages is fuzzier. Truly ancient languages haven't left empirical evidence that scientists can study. And many linguists believe it is hard to say anything definitive about languages prior to 8,000 years ago, as their relationships would have become jumbled over the millennia.

But the latest Science paper "and our own observations suggest that it is possible to detect an arrow of time" underlying proto-human languages spoken more than 8,000 years ago, said Murray Gell-Mann of the Santa Fe Institute in New Mexico, who read the Science paper and supports it. The "arrow of time" is based on the notion that it is possible to use data from modern languages to trace their origins back 10,000 years or even further.

Dr. Gell-Mann, a Nobel Prize-winning physicist with a keen interest in historical linguistics, is co-founder of a project known as Evolution of Human Languages. He concedes that his "arrow of time" view is a minority one.

Only humans have the biological capacity to communicate with a rich language based on symbols and rules, enabling us to pass on cultural ideas to future generations. Without language, culture as we know it wouldn't exist, so scientists are keen to pin down where it sprang from.

Dr. Atkinson's approach has its limits. Genes change slowly, over many generations, while the diversity of phonemes amid a population group can change rapidly as language evolves. While distance from Africa can explain as much as 85% of the genetic diversity of populations, a similar distance measurement can explain only 19% of the variation in phonemic diversity. Dr. Atkinson said the measure is still statistically significant.

Another theory of the origin of modern humans, known as the multiregional hypothesis, holds that earlier forms of humans originated in Africa and then slowly developed their anatomically modern form in every area of the Old World. This scenario implies that several variants of modern human language could have emerged somewhat independently in different locations, rather than solely in Africa.

Early migrants from Africa probably had to battle significant odds. A founder effect on a breakaway human population tends to reduce its size, genetic complexity and fitness. A similar effect could have limited "the size and cultural complexity of societies at the vanguard of the human expansion" out of Africa, the paper notes.

Write to Gautam Naik at gautam.naik@wsj.com