Author Topic: Intelligence and Psychology, Artificial Intelligence

Crafty_Dog

Intelligence and Psychology, Artificial Intelligence
« on: July 26, 2009, 04:43:46 AM »
Scientists worry machines may outsmart man
By JOHN MARKOFF
Published: July 25, 2009
A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously.

Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.

Their concern is that further advances could create profound social disruptions and even have dangerous consequences.

As examples, the scientists pointed to a number of technologies as diverse as experimental medical systems that interact with patients to simulate empathy, and computer worms and viruses that defy extermination and could thus be said to have reached a “cockroach” stage of machine intelligence.

While the computer scientists agreed that we are a long way from Hal, the computer that took over the spaceship in “2001: A Space Odyssey,” they said there was legitimate concern that technological progress would transform the work force by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviors.

The researchers — leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California — generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet. But they agreed that robots that can kill autonomously are either already here or will be soon.

They focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed. What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smart phones?

The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home. Just last month, a service robot developed by Willow Garage in Silicon Valley proved it could navigate the real world.

A report from the conference, which took place in private on Feb. 25, is to be issued later this year. Some attendees discussed the meeting for the first time with other scientists this month and in interviews.

The conference was organized by the Association for the Advancement of Artificial Intelligence, and in choosing Asilomar for the discussions, the group purposefully evoked a landmark event in the history of science. In 1975, the world’s leading biologists also met at Asilomar to discuss the new ability to reshape life by swapping genetic material among organisms. Concerned about possible biohazards and ethical questions, scientists had halted certain experiments. The conference led to guidelines for recombinant DNA research, enabling experimentation to continue.

The meeting on the future of artificial intelligence was organized by Eric Horvitz, a Microsoft researcher who is now president of the association.

Dr. Horvitz said he believed computer scientists must respond to the notions of superintelligent machines and artificial intelligence systems run amok.

The idea of an “intelligence explosion” in which smart machines would design even more intelligent machines was proposed by the mathematician I. J. Good in 1965. Later, in lectures and science fiction novels, the computer scientist Vernor Vinge popularized the notion of a moment when humans will create smarter-than-human machines, causing such rapid change that the “human era will be ended.” He called this shift the Singularity.

This vision, embraced in movies and literature, is seen as plausible and unnerving by some scientists like William Joy, co-founder of Sun Microsystems. Other technologists, notably Raymond Kurzweil, have extolled the coming of ultrasmart machines, saying they will offer huge advances in life extension and wealth creation.

“Something new has taken place in the past five to eight years,” Dr. Horvitz said. “Technologists are replacing religion, and their ideas are resonating in some ways with the same idea of the Rapture.”

The Kurzweil version of technological utopia has captured imaginations in Silicon Valley. This summer an organization called the Singularity University began offering courses to prepare a “cadre” to shape the advances and help society cope with the ramifications.

“My sense was that sooner or later we would have to make some sort of statement or assessment, given the rising voice of the technorati and people very concerned about the rise of intelligent machines,” Dr. Horvitz said.

The A.A.A.I. report will try to assess the possibility of “the loss of human control of computer-based intelligences.” It will also grapple, Dr. Horvitz said, with socioeconomic, legal and ethical issues, as well as probable changes in human-computer relationships. How would it be, for example, to relate to a machine that is as intelligent as your spouse?

Dr. Horvitz said the panel was looking for ways to guide research so that technology improved society rather than moved it toward a technological catastrophe. Some research might, for instance, be conducted in a high-security laboratory.

The meeting on artificial intelligence could be pivotal to the future of the field. Paul Berg, who was the organizer of the 1975 Asilomar meeting and received a Nobel Prize for chemistry in 1980, said it was important for scientific communities to engage the public before alarm and opposition become unshakable.

“If you wait too long and the sides become entrenched like with G.M.O.,” he said, referring to genetically modified foods, “then it is very difficult. It’s too complex, and people talk right past each other.”

Tom Mitchell, a professor of artificial intelligence and machine learning at Carnegie Mellon University, said the February meeting had changed his thinking. “I went in very optimistic about the future of A.I. and thinking that Bill Joy and Ray Kurzweil were far off in their predictions,” he said. But, he added, “The meeting made me want to be more outspoken about these issues and in particular be outspoken about the vast amounts of data collected about our personal lives.”

Despite his concerns, Dr. Horvitz said he was hopeful that artificial intelligence research would benefit humans, and perhaps even compensate for human failings. He recently demonstrated a voice-based system that he designed to ask patients about their symptoms and to respond with empathy. When a mother said her child was having diarrhea, the face on the screen said, “Oh no, sorry to hear that.”

A physician told him afterward that it was wonderful that the system responded to human emotion. “That’s a great idea,” Dr. Horvitz said he was told. “I have no time for that.”


Body-by-Guinness

Creativity as the Necessary Ingredient
« Reply #1 on: August 20, 2009, 09:39:36 AM »
What Makes A Genius?

By Andrea Kuszewski
Created Aug 20 2009 - 2:26am
What is the difference between "intelligence" and "genius"?  Creativity, of course!

There was an article recently in Scientific American that discussed creativity and the signs in children that are precursors to creative achievement in adulthood. The authors cite work done by Michigan State University researchers Robert and Michele Root-Bernstein, a collaboration between a physiologist and a theater instructor, who presented their findings at an annual meeting of the APA this past March. Since I research creativity as well as intelligence, I found the points brought up in the article quite intriguing, yet not surprising.

One of the best observations stated in the article regarding achievement was this:
"... most highly creative people are polymaths- they enjoy and excel at a range of challenging activities. For instance, in a survey of scientists at all levels of achievement, the [researchers] found that only about one sixth report engaging in a secondary activity of an artistic or creative nature, such as painting or writing non-scientific prose. In contrast, nearly all Nobel Prize winners in science have at least one other creative activity that they pursue seriously. Creative breadth, the [researchers] argue, is an important but understudied component of genius."

Everyone is fascinated by famous geniuses like Albert Einstein. They speculate as to what made him so unique and brilliant, but no one has been able to identify exactly what "it" is. If you mention "intelligence research", the average person assumes you are speaking of that top 1 or 2%, the IQs over 145, the little kids you see on TV passing out during Spelling Bees, because they are freaking out from the pressure of having to spell antidisestablishmentarianism on a stage before hundreds of on-lookers.

But the reality is that most intelligence researchers don't focus on the top 1 or 2%; they look at the general population, whose average score is 100, and generally focus their attention on the lower to middle portion of the distribution.

There may be a multitude of reasons why most researchers focus their study on the lower end of the distribution; one I can see is that the correlations between the individual abilities measured on IQ tests and the actual overall ability level of the person taking the test are strongest in that portion of the distribution: IQ scores of 110 and below.

I have made this point before (as you will recognize if you have read any of my pieces on intelligence), so nothing new there. However, what I found especially promising about the work done by the Root-Bernsteins is that instead of merely trying to analyze IQ scores, they actually looked at the attributes of successful, intelligent, creative people and figured out what they had going for them that other highly intelligent people did not: essentially, what the difference was between "intelligent" and "genius".

(the paper abstracts from the symposium describing their methods can be read here)

Now, some hard-core statistician-types may balk at their methods, screaming, "Case studies are not valid measures of intelligence!" and to a certain degree, they have a point. Yes, they initially looked at case studies of successful individuals, but then they surveyed scientists across multiple fields and found that the highest achievers in their domain (as indicated by earning the Nobel Prize) were skilled in multiple domains, at least one of these considered to be "creative", such as music, art, or non-scientific writing.

We would probably consider most scientists to be intelligent. But are they all geniuses? Do geniuses have the highest IQ scores? Richard Feynman is undeniably considered to be a genius. While his IQ score was *only* around 120-125, he was also an artist and a gifted communicator. Was he less intelligent than someone with an IQ score of 150?

What we are doing here is challenging the very definition of "intelligence". What is it really? An IQ score? Computational ability? Being able to talk your way out of a speeding ticket? Knowing how to handle crisis effectively? Arguing a convincing case before a jury? Well, maybe all of the above.

Many moons ago, Dr Robert Sternberg, now the Dean of Arts and Sciences at Tufts University in Boston, brought this very argument to the psychology community. And, to be honest, it was not exactly welcomed with open arms. He believed that intelligence comprises three facets, only one of which is measured on a typical IQ test, including the SAT and the GRE: the first, analytical ability. The second component is creativity, and the third is practical ability, or being able to use your analytical skills and your creativity to effectively solve novel problems. He called this the Triarchic Theory of Intelligence.

Fast-forwarding to the present, Dr Rex Jung, from the Mind Institute and the University of New Mexico in Albuquerque, published a paper earlier this year showing biochemical support for the Threshold Theory of Creativity (a certain level of intelligence is necessary, but not sufficient, for successful creative achievement). In a nutshell, he found that intelligence (as most people measure it today) is not enough to set a person apart and raise them to the level of genius. Creativity is that essential component that not all intelligent people possess, but geniuses require. Not all creative people are geniuses (thus the Threshold Theory), but in order to reach genius status, creativity is a necessary attribute.

Someone could have an IQ of 170, yet get lost inside of a paper bag, and not have the ability to hold a conversation with anyone other than a dog. That is not my definition of genius. We want to know what made geniuses like Einstein and Feynman so far ahead of their intelligent scientist peers, and the answer to that is creativity.

I am hoping that, as more studies come out demonstrating the importance of multi-disciplinary thinking and collaboration across domains for reaching the highest levels of achievement, the science community will eventually fully embrace creativity research and see its validity in the study of successful intelligence. As a society, we already recognize the importance of creativity in innovation and in the arts, so let's take it a step further.

Give creativity the "street cred" it deserves as the defining feature that separates mere intelligence from utter genius.

Source URL: http://www.scientificblogging.com/rogue_neuron/what_makes_genius

Crafty_Dog

Bird Intelligence
« Reply #2 on: December 07, 2010, 04:17:59 PM »

Freki

Re: Intelligence of crows
« Reply #3 on: December 08, 2010, 07:54:31 AM »
Crows are amazing

[youtube]http://www.youtube.com/watch?v=NhmZBMuZ6vE[/youtube]

Crafty_Dog

Re: Intelligence
« Reply #4 on: December 08, 2010, 03:58:06 PM »
I liked that Freki.

G M

Re: Intelligence
« Reply #5 on: December 08, 2010, 04:24:51 PM »
I did too. It reminded me of backpacking in an isolated part of the southwest and having curious ravens surveilling me. They'd circle and study. They'd land behind trees and then stealthily hop on the ground to get a closer look. There was a definite sense of some sentient thought from them, and I'm not one for sentimental anthropomorphism.

Crafty_Dog

Re: Intelligence
« Reply #6 on: December 08, 2010, 07:56:33 PM »
Konrad Lorenz wrote quite often of "jackdaws".  This was translated from German.  Does anyone know if this is another word for crows?  or?

G M

Re: Intelligence
« Reply #7 on: December 08, 2010, 08:15:41 PM »
Definition of JACKDAW
1 : a common black and gray bird (Corvus monedula) of Eurasia and northern Africa that is related to but smaller than the carrion crow
2 : grackle (sense 1)

Crafty_Dog

Test your power of observation
« Reply #8 on: January 17, 2011, 09:49:13 AM »

Vicbowling

Re: Intelligence
« Reply #9 on: January 18, 2011, 04:34:32 PM »
Very interesting article, but I found myself less disturbed by the Terminator-esque prediction of the future and more concerned by the fact that the doctor was completely fine with the A.I. projecting "human" emotion so he WOULDN'T have to... anyone else find flaw in that?!


bigdog

Private Intel firm
« Reply #10 on: January 24, 2011, 02:47:54 AM »
http://www.stltoday.com/news/national/article_59308dcd-3092-5280-92fb-898f569504e4.html

Ousted CIA agent runs his own private operation
With U.S. funding cut, he relies on donations to fund his 'operatives' in Pakistan and Afghanistan.


Crafty_Dog

Re: Intelligence
« Reply #11 on: January 24, 2011, 04:49:25 AM »
BD:

That is a different kind of intelligence  :lol:  May I ask you to please post that on the "Intel Matters" thread on the P&R forum?

Thank you,

bigdog

Re: Intelligence
« Reply #12 on: January 24, 2011, 07:46:58 AM »
Whether it "matters" or not, it appears mine was lacking!  Sorry about that, Guro!

Crafty_Dog

Re: Intelligence
« Reply #13 on: January 25, 2011, 05:42:59 AM »
No worries BD; in this context the term "intelligence" was ambiguous. :-)

Crafty_Dog

Et tu, Watson; Kurzweil's singularity
« Reply #14 on: February 12, 2011, 05:15:48 AM »
Computer beats best humans at Jeopardy

http://wattsupwiththat.com/2011/02/10/worth-watching-watson/
===========
On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I've Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists — they included a comedian and a former Miss America — had to guess what it was.

 

On the show (you can find the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.

 

Kurzweil then demonstrated the computer, which he built himself—a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil's age than by anything he'd actually done. They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she'd been President Lyndon Johnson's first-grade teacher.

 

But Kurzweil would spend much of the rest of his career working out what his demonstration meant. Creating a work of art is one of those activities we reserve for humans and humans only. It's an act of self-expression; you're not supposed to be able to do it if you don't have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.

 

That was Kurzweil's real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we're approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity — our bodies, our minds, our civilization — will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.

 

Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they're getting faster is increasing.

 

True? True.

 

So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.

 

If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there's no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks to play Farmville.

 

Probably. It's impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you'd be as smart as they would be. But there are a lot of theories about it. Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we'll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.

 

The difficult thing to keep sight of when you're talking about the Singularity is that even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation.

 

People are spending a lot of money trying to understand it. The three-year-old Singularity University, which offers inter-disciplinary courses of study for graduate students and executives, is hosted by NASA. Google was a founding sponsor; its CEO and co-founder Larry Page spoke there last year. People are attracted to the Singularity for the shock value, like an intellectual freak show, but they stay because there's more to it than they expected. And of course, in the event that it turns out to be real, it will be the most important thing to happen to human beings since the invention of language.

 

The Singularity isn't a wholly new idea, just newish. In 1965 the British mathematician I.J. Good described something he called an "intelligence explosion":

 

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

 

The word singularity is borrowed from astrophysics: it refers to a point in space-time — for example, inside a black hole — at which the rules of ordinary physics do not apply. In the 1980s the science-fiction novelist Vernor Vinge attached it to Good's intelligence-explosion scenario. At a NASA symposium in 1993, Vinge announced that "within 30 years, we will have the technological means to create super-human intelligence. Shortly after, the human era will be ended."

 

By that time Kurzweil was thinking about the Singularity too. He'd been busy since his appearance on I've Got a Secret. He'd made several fortunes as an engineer and inventor; he founded and then sold his first software company while he was still at MIT. He went on to build the first print-to-speech reading machine for the blind — Stevie Wonder was customer No. 1—and made innovations in a range of technical fields, including music synthesizers and speech recognition. He holds 39 patents and 19 honorary doctorates. In 1999 President Bill Clinton awarded him the National Medal of Technology.

 

But Kurzweil was also pursuing a parallel career as a futurist: he has been publishing his thoughts about the future of human and machine-kind for 20 years, most recently in The Singularity Is Near, which was a best seller when it came out in 2005. A documentary by the same name, starring Kurzweil, Tony Robbins and Alan Dershowitz, among others, was released in January. (Kurzweil is actually the subject of two current documentaries. The other one, less authorized but more informative, is called The Transcendent Man.) Bill Gates has called him "the best person I know at predicting the future of artificial intelligence."

 

In real life, the transcendent man is an unimposing figure who could pass for Woody Allen's even nerdier younger brother. Kurzweil grew up in Queens, N.Y., and you can still hear a trace of it in his voice. Now 62, he speaks with the soft, almost hypnotic calm of someone who gives 60 public lectures a year. As the Singularity's most visible champion, he has heard all the questions and faced down the incredulity many, many times before. He's good-natured about it. His manner is almost apologetic: I wish I could bring you less exciting news of the future, but I've looked at the numbers, and this is what they say, so what else can I tell you?

 

Kurzweil's interest in humanity's cyborganic destiny began about 1980 largely as a practical matter. He needed ways to measure and track the pace of technological progress. Even great inventions can fail if they arrive before their time, and he wanted to make sure that when he released his, the timing was right. "Even at that time, technology was moving quickly enough that the world was going to be different by the time you finished a project," he says. "So it's like skeet shooting—you can't shoot at the target." He knew about Moore's law, of course, which states that the number of transistors you can put on a microchip doubles about every two years. It's a surprisingly reliable rule of thumb. Kurzweil tried plotting a slightly different curve: the change over time in the amount of computing power, measured in MIPS (millions of instructions per second), that you can buy for $1,000.

 

As it turned out, Kurzweil's numbers looked a lot like Moore's. They doubled every couple of years. Drawn as graphs, they both made exponential curves, with their value increasing by multiples of two instead of by regular increments in a straight line. The curves held eerily steady, even when Kurzweil extended his backward through the decades of pretransistor computing technologies like relays and vacuum tubes, all the way back to 1900.

 

Kurzweil then ran the numbers on a whole bunch of other key technological indexes — the falling cost of manufacturing transistors, the rising clock speed of microprocessors, the plummeting price of dynamic RAM. He looked even further afield at trends in biotech and beyond—the falling cost of sequencing DNA and of wireless data service and the rising numbers of Internet hosts and nanotechnology patents. He kept finding the same thing: exponentially accelerating progress. "It's really amazing how smooth these trajectories are," he says. "Through thick and thin, war and peace, boom times and recessions." Kurzweil calls it the law of accelerating returns: technological progress happens exponentially, not linearly.
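As a rough illustration of what "exponentially, not linearly" means here, the short Python sketch below extrapolates a price-performance figure that doubles every two years and compares it with a straight-line trend. The starting value and the two-year doubling period are assumptions chosen for demonstration, not Kurzweil's measured data.

[code]
# Illustrative sketch of the "law of accelerating returns" described above.
# The starting value and doubling period are assumptions for demonstration only.

def exponential_projection(start_value, doubling_period_years, years):
    """Value after `years` if it doubles every `doubling_period_years`."""
    return start_value * 2 ** (years / doubling_period_years)

def linear_projection(start_value, yearly_increment, years):
    """Value after `years` if it grows by a fixed increment each year."""
    return start_value + yearly_increment * years

if __name__ == "__main__":
    start = 1.0     # arbitrary units of computing power per $1,000
    doubling = 2.0  # assumed doubling period, in years
    for horizon in (10, 20, 40):
        exp = exponential_projection(start, doubling, horizon)
        lin = linear_projection(start, start, horizon)  # grows by 1 unit per year
        print(f"{horizon:>2} years: exponential ~ {exp:>12,.0f}   linear = {lin:.0f}")
[/code]

Run over a 40-year horizon, the doubling model yields roughly a millionfold improvement while the linear one yields about forty units, which is the flavor of the gap Kurzweil is describing.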

 

Then he extended the curves into the future, and the growth they predicted was so phenomenal, it created cognitive resistance in his mind. Exponential curves start slowly, then rocket skyward toward infinity. According to Kurzweil, we're not evolved to think in terms of exponential growth. "It's not intuitive. Our built-in predictors are linear. When we're trying to avoid an animal, we pick the linear prediction of where it's going to be in 20 seconds and what to do about it. That is actually hardwired in our brains."

 

Here's what the exponential curves told him. We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence. Kurzweil puts the date of the Singularity—never say he's not conservative—at 2045. In that year, he estimates, given the vast increases in computing power and the vast reductions in the cost of same, the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today. 


Crafty_Dog

Kurzweil 2
« Reply #15 on: February 12, 2011, 05:25:45 AM »


The Singularity isn't just an idea. It attracts people, and those people feel a bond with one another. Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians.

 

Not all of them are Kurzweilians, not by a long chalk. There's room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.

 

In addition to the Singularity University, which Kurzweil co-founded, there's also a Singularity Institute for Artificial Intelligence, based in San Francisco. It counts among its advisers Peter Thiel, a former CEO of PayPal and an early investor in Facebook. The institute holds an annual conference called the Singularity Summit. (Kurzweil co-founded that too.) Because of the highly interdisciplinary nature of Singularity theory, it attracts a diverse crowd. Artificial intelligence is the main event, but the sessions also cover the galloping progress of, among other fields, genetics and nanotechnology. 

 

At the 2010 summit, which took place in August in San Francisco, there were not just computer scientists but also psychologists, neuroscientists, nanotechnologists, molecular biologists, a specialist in wearable computers, a professor of emergency medicine, an expert on cognition in gray parrots and the professional magician and debunker James "the Amazing" Randi. The atmosphere was a curious blend of Davos and UFO convention. Proponents of seasteading—the practice, so far mostly theoretical, of establishing politically autonomous floating communities in international waters—handed out pamphlets. An android chatted with visitors in one corner.

 

After artificial intelligence, the most talked-about topic at the 2010 summit was life extension. Biological boundaries that most people think of as permanent and inevitable Singularitarians see as merely intractable but solvable problems. Death is one of them. Old age is an illness like any other, and what do you do with illnesses? You cure them. Like a lot of Singularitarian ideas, it sounds funny at first, but the closer you get to it, the less funny it seems. It's not just wishful thinking; there's actual science going on here.

 

For example, it's well known that one cause of the physical degeneration associated with aging involves telomeres, which are segments of DNA found at the ends of chromosomes. Every time a cell divides, its telomeres get shorter, and once a cell runs out of telomeres, it can't reproduce anymore and dies. But there's an enzyme called telomerase that reverses this process; it's one of the reasons cancer cells live so long. So why not treat regular non-cancerous cells with telomerase? In November, researchers at Harvard Medical School announced in Nature that they had done just that. They administered telomerase to a group of mice suffering from age-related degeneration. The damage went away. The mice didn't just get better; they got younger.

 

Aubrey de Grey is one of the world's best-known life-extension researchers and a Singularity Summit veteran. A British biologist with a doctorate from Cambridge and a famously formidable beard, de Grey runs a foundation called SENS, or Strategies for Engineered Negligible Senescence. He views aging as a process of accumulating damage, which he has divided into seven categories, each of which he hopes to one day address using regenerative medicine. "People have begun to realize that the view of aging being something immutable—rather like the heat death of the universe—is simply ridiculous," he says. "It's just childish. The human body is a machine that has a bunch of functions, and it accumulates various types of damage as a side effect of the normal function of the machine. Therefore in principle that damage can be repaired periodically. This is why we have vintage cars. It's really just a matter of paying attention. The whole of medicine consists of messing about with what looks pretty inevitable until you figure out how to make it not inevitable."

 

Kurzweil takes life extension seriously too. His father, with whom he was very close, died of heart disease at 58. Kurzweil inherited his father's genetic predisposition; he also developed Type 2 diabetes when he was 35. Working with Terry Grossman, a doctor who specializes in longevity medicine, Kurzweil has published two books on his own approach to life extension, which involves taking up to 200 pills and supplements a day. He says his diabetes is essentially cured, and although he's 62 years old from a chronological perspective, he estimates that his biological age is about 20 years younger.

 

But his goal differs slightly from de Grey's. For Kurzweil, it's not so much about staying healthy as long as possible; it's about staying alive until the Singularity. It's an attempted handoff. Once hyper-intelligent artificial intelligences arise, armed with advanced nanotechnology, they'll really be able to wrestle with the vastly complex, systemic problems associated with aging in humans. Alternatively, by then we'll be able to transfer our minds to sturdier vessels such as computers and robots. He and many other Singularitarians take seriously the proposition that many people who are alive today will wind up being functionally immortal.

 

It's an idea that's radical and ancient at the same time. In "Sailing to Byzantium," W.B. Yeats describes mankind's fleshly predicament as a soul fastened to a dying animal. Why not unfasten it and fasten it to an immortal robot instead? But Kurzweil finds that life extension produces even more resistance in his audiences than his exponential growth curves. "There are people who can accept computers being more intelligent than people," he says. "But the idea of significant changes to human longevity—that seems to be particularly controversial. People invested a lot of personal effort into certain philosophies dealing with the issue of life and death. I mean, that's the major reason we have religion."

 

Of course, a lot of people think the Singularity is nonsense — a fantasy, wishful thinking, a Silicon Valley version of the Evangelical story of the Rapture, spun by a man who earns his living making outrageous claims and backing them up with pseudoscience. Most of the serious critics focus on the question of whether a computer can truly become intelligent.

 

The entire field of artificial intelligence, or AI, is devoted to this question. But AI doesn't currently produce the kind of intelligence we associate with humans or even with talking computers in movies—HAL or C3PO or Data. Actual AIs tend to be able to master only one highly specific domain, like interpreting search queries or playing chess. They operate within an extremely specific frame of reference. They don't make conversation at parties. They're intelligent, but only if you define intelligence in a vanishingly narrow way. The kind of intelligence Kurzweil is talking about, which is called strong AI or artificial general intelligence, doesn't exist yet.

 

Why not? Obviously we're still waiting on all that exponentially growing computing power to get here. But it's also possible that there are things going on in our brains that can't be duplicated electronically no matter how many MIPS you throw at them. The neurochemical architecture that generates the ephemeral chaos we know as human consciousness may just be too complex and analog to replicate in digital silicon. The biologist Dennis Bray was one of the few voices of dissent at last summer's Singularity Summit. "Although biological components act in ways that are comparable to those in electronic circuits," he argued, in a talk titled "What Cells Can Do That Robots Can't," "they are set apart by the huge number of different states they can adopt. Multiple biochemical processes create chemical modifications of protein molecules, further diversified by association with distinct structures at defined locations of a cell. The resulting combinatorial explosion of states endows living systems with an almost infinite capacity to store information regarding past and present conditions and a unique capacity to prepare for future events." That makes the ones and zeros that computers trade in look pretty crude.
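To put a rough number on the "combinatorial explosion" Bray describes, here is a toy back-of-the-envelope calculation in Python. The counts of proteins and of modification states are invented purely for scale; the point is only that the number of joint configurations grows as states-per-component raised to the number of components.

[code]
# Toy illustration of a combinatorial explosion of cellular states.
# The protein and state counts below are invented for illustration only.

from math import log2, log10

proteins = 1000          # assumed number of distinct protein species
states_per_protein = 10  # assumed chemical modification states per protein

# Joint configurations the system can occupy: states_per_protein ** proteins
digits = proteins * log10(states_per_protein)               # order of magnitude
bits_to_specify_one = proteins * log2(states_per_protein)   # bits to pick out one configuration

print(f"Distinct configurations: about 10^{digits:.0f}")
print(f"Bits needed to specify one configuration: about {bits_to_specify_one:.0f}")
[/code]

Even with these modest made-up numbers, the configuration count (about 10^1000) dwarfs the roughly 10^301 states that the same number of simple on/off switches could represent, which is the contrast Bray draws with digital circuits.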

 

Underlying the practical challenges are a host of philosophical ones. Suppose we did create a computer that talked and acted in a way that was indistinguishable from a human being—in other words, a computer that could pass the Turing test. (Very loosely speaking, such a computer would be able to pass as human in a blind test.) Would that mean that the computer was sentient, the way a human being is? Or would it just be an extremely sophisticated but essentially mechanical automaton without the mysterious spark of consciousness—a machine with no ghost in it? And how would we know?

 

Even if you grant that the Singularity is plausible, you're still staring at a thicket of unanswerable questions. If I can scan my consciousness into a computer, am I still me? What are the geopolitics and the socioeconomics of the Singularity? Who decides who gets to be immortal? Who draws the line between sentient and nonsentient? And as we approach immortality, omniscience and omnipotence, will our lives still have meaning? By beating death, will we have lost our essential humanity?

 

Kurzweil admits that there's a fundamental level of risk associated with the Singularity that's impossible to refine away, simply because we don't know what a highly advanced artificial intelligence, finding itself a newly created inhabitant of the planet Earth, would choose to do. It might not feel like competing with us for resources. One of the goals of the Singularity Institute is to make sure not just that artificial intelligence develops but also that the AI is friendly. You don't have to be a super-intelligent cyborg to understand that introducing a superior life-form into your own biosphere is a basic Darwinian error.

 

If the Singularity is coming, these questions are going to get answers whether we like it or not, and Kurzweil thinks that trying to put off the Singularity by banning technologies is not only impossible but also unethical and probably dangerous. "It would require a totalitarian system to implement such a ban," he says. "It wouldn't work. It would just drive these technologies underground, where the responsible scientists who we're counting on to create the defenses would not have easy access to the tools."

 

Kurzweil is an almost inhumanly patient and thorough debater. He relishes it. He's tireless in hunting down his critics so that he can respond to them, point by point, carefully and in detail.

 

Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled or at least matched in power and flexibility by software running on a computer. He refuses to fall on his knees before the mystery of the human brain. "Generally speaking," he says, "the core of a disagreement I'll have with a critic is, they'll say, Oh, Kurzweil is underestimating the complexity of reverse-engineering of the human brain or the complexity of biology. But I don't believe I'm underestimating the challenge. I think they're underestimating the power of exponential growth."

 

This position doesn't make Kurzweil an outlier, at least among Singularitarians. Plenty of people make more-extreme predictions. Since 2005 the neuroscientist Henry Markram has been running an ambitious initiative at the Brain Mind Institute of the Ecole Polytechnique in Lausanne, Switzerland. It's called the Blue Brain project, and it's an attempt to create a neuron-by-neuron simulation of a mammalian brain, using IBM's Blue Gene super-computer. So far, Markram's team has managed to simulate one neocortical column from a rat's brain, which contains about 10,000 neurons. Markram has said that he hopes to have a complete virtual human brain up and running in 10 years. (Even Kurzweil sniffs at this. If it worked, he points out, you'd then have to educate the brain, and who knows how long that would take?)

 

By definition, the future beyond the Singularity is not knowable by our linear, chemical, animal brains, but Kurzweil is teeming with theories about it. He positively flogs himself to think bigger and bigger; you can see him kicking against the confines of his aging organic hardware. "When people look at the implications of ongoing exponential growth, it gets harder and harder to accept," he says. "So you get people who really accept, yes, things are progressing exponentially, but they fall off the horse at some point because the implications are too fantastic. I've tried to push myself to really look."

 

In Kurzweil's future, biotechnology and nanotechnology give us the power to manipulate our bodies and the world around us at will, at the molecular level. Progress hyperaccelerates, and every hour brings a century's worth of scientific breakthroughs. We ditch Darwin and take charge of our own evolution. The human genome becomes just so much code to be bug-tested and optimized and, if necessary, rewritten. Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all. Kurzweil hopes to bring his dead father back to life.

 

We can scan our consciousnesses into computers and enter a virtual existence or swap our bodies for immortal robots and light out for the edges of space as intergalactic godlings. Within a matter of centuries, human intelligence will have re-engineered and saturated all the matter in the universe. This is, Kurzweil believes, our destiny as a species.

 

Or it isn't. When the big questions get answered, a lot of the action will happen where no one can see it, deep inside the black silicon brains of the computers, which will either bloom bit by bit into conscious minds or just continue in ever more brilliant and powerful iterations of nonsentience.

 

But as for the minor questions, they're already being decided all around us and in plain sight. The more you read about the Singularity, the more you start to see it peeking out at you, coyly, from unexpected directions. Five years ago we didn't have 600 million humans carrying out their social lives over a single electronic network. Now we have Facebook. Five years ago you didn't see people double-checking what they were saying and where they were going, even as they were saying it and going there, using handheld network-enabled digital prosthetics. Now we have iPhones. Is it an unimaginable step to take the iPhones out of our hands and put them into our skulls?

 

Already 30,000 patients with Parkinson's disease have neural implants. Google is experimenting with computers that can drive cars. There are more than 2,000 robots fighting in Afghanistan alongside the human troops. This month a game show will once again figure in the history of artificial intelligence, but this time the computer will be the guest: an IBM super-computer nicknamed Watson will compete on Jeopardy! Watson runs on 90 servers and takes up an entire room, and in a practice match in January it finished ahead of two former champions, Ken Jennings and Brad Rutter. It got every question it answered right, but much more important, it didn't need help understanding the questions (or, strictly speaking, the answers), which were phrased in plain English. Watson isn't strong AI, but if strong AI happens, it will arrive gradually, bit by bit, and this will have been one of the bits.

 

A hundred years from now, Kurzweil and de Grey and the others could be the 22nd century's answer to the Founding Fathers — except unlike the Founding Fathers, they'll still be alive to get credit — or their ideas could look as hilariously retro and dated as Disney's Tomorrowland. Nothing gets old as fast as the future.

 

But even if they're dead wrong about the future, they're right about the present. They're taking the long view and looking at the big picture. You may reject every specific article of the Singularitarian charter, but you should admire Kurzweil for taking the future seriously. Singularitarianism is grounded in the idea that change is real and that humanity is in charge of its own fate and that history might not be as simple as one damn thing after another. Kurzweil likes to point out that your average cell phone is about a millionth the size of, a millionth the price of and a thousand times more powerful than the computer he had at MIT 40 years ago. Flip that forward 40 years and what does the world look like? If you really want to figure that out, you have to think very, very far outside the box. Or maybe you have to think further inside it than anyone ever has before.


Crafty_Dog

WSJ: Watson
« Reply #17 on: March 14, 2011, 10:58:22 AM »


By STEPHEN BAKER
In the weeks since IBM's computer, Watson, thrashed two flesh-and-blood champions in the quiz show "Jeopardy!," human intelligence has been punching back—at least on blogs and opinion pages. Watson doesn't "know" anything, experts say. It doesn't laugh at jokes, cannot carry on a conversation, has no sense of self, and commits bloopers no human would consider. (Toronto, a U.S. city?) What's more, it's horribly inefficient, requiring a roomful of computers to match what we carry between our ears. And it probably would not have won without its inhuman speed on the buzzer.

This is all enough to make you feel reinvigorated to be human. But focusing on Watson's shortcomings misses the point. It risks distracting people from the transformation that Watson all but announced on its "Jeopardy!" debut: These question-answering machines will soon be working alongside us in offices and laboratories, and forcing us to make adjustments in what we learn and how we think. Watson is an early sighting of a highly disruptive force.

The key is to regard these computers not as human wannabes but rather as powerful tools, ones that can handle jobs currently held by people. The "intelligence" of the tools matters little. What counts is the information they deliver.

In our history of making tools, we have long adjusted to the disruptions they cause. Imagine an Italian town in the 17th century. Perhaps there's one man who has a special sense for the weather. Let's call him Luigi. Using his magnificent brain, he picks up on signals—changes in the wind, certain odors, perhaps the flight paths of birds or noises coming from the barn. And he spreads word through the town that rain will be coming in two days, or that a cold front might freeze the crops. Luigi is a valuable member of society.

Along comes a traveling vendor who carries a new instrument invented in 1643 by Evangelista Torricelli. It's a barometer, and it predicts the weather about as well as Luigi. It's certainly not as smart as him, if it can be called smart at all. It has no sense of self, is deaf to the animals in the barn, blind to the flight patterns of birds. Yet it comes up with valuable information.

In a world with barometers, Luigi and similar weather savants must find other work for their fabulous minds. Perhaps using the new tool, they can deepen their analysis of weather patterns, keep careful records and then draw conclusions about optimal farming techniques. They might become consultants. Maybe some of them drop out of the weather business altogether. The new tool creates both displacement and economic opportunity. It forces people to reconsider how they use their heads.

The same is true of Watson and the coming generation of question-answering machines. We can carry on interesting discussions about how "smart" they are or aren't, but that's academic. They make sense of complex questions in English and fetch answers, scoring each one for the machines' level of confidence in it. When asked if Watson can "think," David Ferrucci, IBM's chief scientist on the "Jeopardy!" team, responds: "Can a submarine swim?"
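The "fetch answers and score each for confidence" workflow Baker describes can be sketched in miniature. The toy Python below is not how Watson actually works; the corpus, the keyword-overlap scoring rule, and the confidence threshold are all invented for illustration. It only shows the shape of the idea: generate candidate answers, attach a confidence score to each, and answer only when the best score clears a threshold.

[code]
# Minimal sketch of confidence-scored question answering. The data and the
# naive keyword-overlap scorer are invented for illustration; real systems
# combine many independent evidence scorers.

from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    confidence: float  # 0.0 .. 1.0

def score_candidates(question, corpus):
    """Score each candidate answer by keyword overlap between question and evidence."""
    question_words = set(question.lower().split())
    scored = []
    for answer, evidence in corpus.items():
        overlap = len(question_words & set(evidence.lower().split()))
        scored.append(Candidate(answer, overlap / max(len(question_words), 1)))
    return sorted(scored, key=lambda c: c.confidence, reverse=True)

def answer(question, corpus, threshold=0.3):
    best = score_candidates(question, corpus)[0]
    if best.confidence >= threshold:
        return f"{best.answer} (confidence {best.confidence:.2f})"
    return "No answer above the confidence threshold."

if __name__ == "__main__":
    toy_corpus = {
        "Toronto": "Toronto is the largest city in Canada",
        "Chicago": "Chicago is a large city in the United States on Lake Michigan",
    }
    print(answer("Which large city is in the United States?", toy_corpus))
[/code]

Withholding an answer when confidence is low is the same design choice that reportedly led the real Watson to wager only a small amount on its famous "Toronto" Final Jeopardy response.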

As these computers make their way into law offices, pharmaceutical labs and hospitals, people who currently make a living by answering questions must adjust. They'll have to add value in ways that machines cannot. This raises questions not just for individuals but for entire societies. How do we educate students for a labor market in which machines answer a growing percentage of the questions? How do we create curricula for uniquely human skills, such as generating original ideas, cracking jokes, carrying on meaningful dialogue? How can such lessons be scored and standardized?

These are the challenges before us. They're similar, in a sense, to what we've been facing with globalization. Again we will find ourselves grappling with a new colleague and competitor. This time around, it's a machine. We should scrutinize that tool, focusing on the questions it fails to answer. Its struggles represent a road map for our own cognitive migration. We must go where computers like Watson cannot.

Mr. Baker is the author of "Final Jeopardy—Man vs. Machine and the Quest to Know Everything" (Houghton Mifflin Harcourt, 2011).



G M

Re: Are people getting dumber?
« Reply #19 on: February 27, 2012, 07:29:47 PM »


http://www.nytimes.com/roomfordebate/2012/02/26/are-people-getting-dumber/?nl=todaysheadlines&emc=thab1

Look at who we have as president. Look at those who think if we'd just tax the rich more, all the economic badness would go away. Stupid is growing, and California is ground zero for its spread.

Crafty_Dog

Electrical Brain Stimulation
« Reply #20 on: May 15, 2012, 10:31:05 AM »




http://theweek.com/article/index/226196/how-electrical-brain-stimulation-can-change-the-way-we-think
ESSAY
How electrical brain stimulation can change the way we think
After my brain was jolted, says Sally Adee, I had a near-spiritual experience
PUBLISHED MARCH 30, 2012, AT 10:01 AM

Researchers have found that "transcranial direct current stimulation" can more than double the rate at which people learn a wide range of tasks, such as object recognition, math skills, and marksmanship.
HAVE YOU EVER wanted to take a vacation from your own head? You could do it easily enough with liberal applications of alcohol or hallucinogens, but that's not the kind of vacation I'm talking about. What if you could take a very specific vacation only from the stuff that makes it painful to be you: the sneering inner monologue that insists you're not capable enough or smart enough or pretty enough, or whatever hideous narrative rides you. Now that would be a vacation. You'd still be you, but you'd be able to navigate the world without the emotional baggage that now drags on your every decision. Can you imagine what that would feel like?

Late last year, I got the chance to find out, in the course of investigating a story for New Scientist about how researchers are using neurofeedback and electrical brain stimulation to accelerate learning. What I found was that electricity might be the most powerful drug I've ever used in my life.

It used to be just plain old chemistry that had neuroscientists gnawing their fingernails about the ethics of brain enhancement. As Adderall, Ritalin, and other cognitive enhancing drugs gain widespread acceptance as tools to improve your everyday focus, even the stigma of obtaining them through less-than-legal channels appears to be disappearing. People will overlook a lot of moral gray areas in the quest to juice their brain power.

But until recently, you were out of luck if you wanted to do that without taking drugs that might be addictive, habit-forming, or associated with unfortunate behavioral side effects. Over the past few years, however, it's become increasingly clear that applying an electrical current to your head confers similar benefits.

U.S. military researchers have had great success using "transcranial direct current stimulation" (tDCS) — in which they hook you up to what's essentially a 9-volt battery and let the current flow through your brain. After a few years of lab testing, they've found that tDCS can more than double the rate at which people learn a wide range of tasks, such as object recognition, math skills, and marksmanship.

We don't yet have a commercially available "thinking cap," but we will soon. So the research community has begun to ask: What are the ethics of battery-operated cognitive enhancement? Recently, a group of Oxford neuroscientists released a cautionary statement about the ethics of brain boosting; then the U.K.'s Royal Society released a report that questioned the use of tDCS for military applications. Is brain boosting a fair addition to the cognitive enhancement arms race? Will it create a Morlock/Eloi–like social divide, where the rich can afford to be smarter and everyone else will be left behind? Will Tiger Moms force their lazy kids to strap on a zappity helmet during piano practice?

After trying it myself, I have different questions. To make you understand, I am going to tell you how it felt. The experience wasn't simply about the easy pleasure of undeserved expertise. For me, it was a near-spiritual experience. When a nice neuroscientist named Michael Weisend put the electrodes on me, what defined the experience was not feeling smarter or learning faster: The thing that made the earth drop out from under my feet was that for the first time in my life, everything in my head finally shut up.

The experiment I underwent was accelerated marksmanship training, using a training simulation that the military uses. I spent a few hours learning how to shoot a modified M4 close-range assault rifle, first without tDCS and then with. Without it I was terrible, and when you're terrible at something, all you can do is obsess about how terrible you are. And how much you want to stop doing the thing you are terrible at.

Then this happened:

THE 20 MINUTES I spent hitting targets while electricity coursed through my brain were far from transcendent. I only remember feeling like I'd just had an excellent cup of coffee, but without the caffeine jitters. I felt clear-headed and like myself, just sharper. Calmer. Without fear and without doubt. From there on, I just spent the time waiting for a problem to appear so that I could solve it.

It was only when they turned off the current that I grasped what had just happened. Relieved of the minefield of self-doubt that constitutes my basic personality, I was a hell of a shot. And I can't tell you how stunning it was to suddenly understand just how much of a drag that inner cacophony is on my ability to navigate life and basic tasks.

It's possibly the world's biggest cliché that we're our own worst enemies. In yoga, they tell you that you need to learn to get out of your own way. Practices like yoga are meant to help you exhume the person you are without all the geologic layers of narrative and cross talk that are constantly chattering in your brain. I think eventually they just become background noise. We stop hearing them consciously, but believe me, we listen to them just the same.

My brain without self-doubt was a revelation. There was suddenly this incredible silence in my head; I've experienced something close to it during two-hour Iyengar yoga classes, or at the end of a 10k, but the fragile peace in my head would be shattered almost the second I set foot outside the calm of the studio. I had certainly never experienced instant Zen in the frustrating middle of something I was terrible at.

WHAT HAD HAPPENED inside my skull? One theory is that the mild electrical shock may depolarize the neuronal membranes in the part of the brain associated with object recognition, making the cells more excitable and responsive to inputs. Like many other neuroscientists working with tDCS, Weisend thinks this accelerates the formation of new neural pathways during the time that someone practices a skill, making it easier to get into the "zone." The method he was using on me boosted the speed with which wannabe snipers could detect a threat by a factor of 2.3.

Another possibility is that the electrodes somehow reduce activity in the prefrontal cortex — the area of the brain used in critical thought, says psychologist Mihaly Csikszentmihalyi of Claremont Graduate University in California. And critical thought, some neuroscientists believe, is muted during periods of intense Zen-like concentration. It sounds counterintuitive, but silencing self-critical thoughts might allow more automatic processes to take hold, which would in turn produce that effortless feeling of flow.

With the electrodes on, my constant self-criticism virtually disappeared, I hit every one of the targets, and there were no unpleasant side effects afterwards. The bewitching silence of the tDCS lasted, gradually diminishing over a period of about three days. The inevitable return of self-doubt and inattention was disheartening, to say the least.

I HOPE YOU can sympathize with me when I tell you that the thing I wanted most acutely for the weeks following my experience was to go back and strap on those electrodes. I also started to have a lot of questions. Who was I apart from the angry bitter gnomes that populate my mind and drive me to failure because I'm too scared to try? And where did those voices come from? Some of them are personal history, like the caustically dismissive 7th grade science teacher who advised me to become a waitress. Some of them are societal, like the hateful lady-mag voices that bully me every time I look in a mirror. An invisible narrative informs all my waking decisions in ways I can't even keep track of.

What would a world look like in which we all wore little tDCS headbands that would keep us in a primed, confident state, free of all doubts and fears? I'd wear one at all times and have two in my backpack ready in case something happened to the first one.

I think the ethical questions we should be asking about tDCS are much more subtle than the ones we've been asking about cognitive enhancement. Because how you define "cognitive enhancement" frames the debate about its ethics.

If you told me tDCS would allow someone to study twice as fast for the bar exam, I might be a little leery because now I have visions of rich daddies paying for Junior's thinking cap. Neuroscientists like Roy Hamilton have termed this kind of application "cosmetic neuroscience," which implies a kind of "First World problem" — frivolity.

But now think of a different application — could school-age girls use the zappy cap while studying math to drown out the voices that tell them they can't do math because they're girls? How many studies have found a link between invasive stereotypes and poor test performance?

And then, finally, the main question: What role do doubt and fear play in our lives if their eradication actually causes so many improvements? Do we make more ethical decisions when we listen to our inner voices of self-doubt or when we're freed from them? If we all wore these caps, would the world be a better place?

And if tDCS headwear were to become widespread, would the same 20 minutes with a 2 milliamp current always deliver the same effects, or would you need to up your dose like you do with some other drugs?

Because, to steal a great point from an online commenter, pretty soon, a 9-volt battery may no longer be enough.


©2012 by Sally Adee, reprinted by permission of New Scientist. The full article can be found at NewScientist.com.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69135
    • View Profile
The Hazards of Confidence
« Reply #21 on: May 26, 2012, 12:41:41 PM »
October 19, 2011

Don’t Blink! The Hazards of Confidence
By DANIEL KAHNEMAN

Many decades ago I spent what seemed like a great deal of time under a scorching sun, watching groups of sweaty soldiers as they solved a problem. I was doing my national service in the Israeli Army at the time. I had completed an undergraduate degree in psychology, and after a year as an infantry officer, I was assigned to the army’s Psychology Branch, where one of my occasional duties was to help evaluate candidates for officer training. We used methods that were developed by the British Army in World War II.

One test, called the leaderless group challenge, was conducted on an obstacle field. Eight candidates, strangers to one another, with all insignia of rank removed and only numbered tags to identify them, were instructed to lift a long log from the ground and haul it to a wall about six feet high. There, they were told that the entire group had to get to the other side of the wall without the log touching either the ground or the wall, and without anyone touching the wall. If any of these things happened, they were to acknowledge it and start again.

A common solution was for several men to reach the other side by crawling along the log as the other men held it up at an angle, like a giant fishing rod. Then one man would climb onto another’s shoulders and tip the log to the far side. The last two men would then have to jump up at the log, now suspended from the other side by those who had made it over, shinny their way along its length and then leap down safely once they crossed the wall. Failure was common at this point, which required starting over.

As a colleague and I monitored the exercise, we made note of who took charge, who tried to lead but was rebuffed, how much each soldier contributed to the group effort. We saw who seemed to be stubborn, submissive, arrogant, patient, hot-tempered, persistent or a quitter. We sometimes saw competitive spite when someone whose idea had been rejected by the group no longer worked very hard. And we saw reactions to crisis: who berated a comrade whose mistake caused the whole group to fail, who stepped forward to lead when the exhausted team had to start over. Under the stress of the event, we felt, each man’s true nature revealed itself in sharp relief.

After watching the candidates go through several such tests, we had to summarize our impressions of the soldiers’ leadership abilities with a grade and determine who would be eligible for officer training. We spent some time discussing each case and reviewing our impressions. The task was not difficult, because we had already seen each of these soldiers’ leadership skills. Some of the men looked like strong leaders, others seemed like wimps or arrogant fools, others mediocre but not hopeless. Quite a few appeared to be so weak that we ruled them out as officer candidates. When our multiple observations of each candidate converged on a coherent picture, we were completely confident in our evaluations and believed that what we saw pointed directly to the future. The soldier who took over when the group was in trouble and led the team over the wall was a leader at that moment. The obvious best guess about how he would do in training, or in combat, was that he would be as effective as he had been at the wall. Any other prediction seemed inconsistent with what we saw.

Because our impressions of how well each soldier performed were generally coherent and clear, our formal predictions were just as definite. We rarely experienced doubt or conflicting impressions. We were quite willing to declare: “This one will never make it,” “That fellow is rather mediocre, but should do O.K.” or “He will be a star.” We felt no need to question our forecasts, moderate them or equivocate. If challenged, however, we were fully prepared to admit, “But of course anything could happen.”

We were willing to make that admission because, as it turned out, despite our certainty about the potential of individual candidates, our forecasts were largely useless. The evidence was overwhelming. Every few months we had a feedback session in which we could compare our evaluations of future cadets with the judgments of their commanders at the officer-training school. The story was always the same: our ability to predict performance at the school was negligible. Our forecasts were better than blind guesses, but not by much.

We were downcast for a while after receiving the discouraging news. But this was the army. Useful or not, there was a routine to be followed, and there were orders to be obeyed. Another batch of candidates would arrive the next day. We took them to the obstacle field, we faced them with the wall, they lifted the log and within a few minutes we saw their true natures revealed, as clearly as ever. The dismal truth about the quality of our predictions had no effect whatsoever on how we evaluated new candidates and very little effect on the confidence we had in our judgments and predictions.

I thought that what was happening to us was remarkable. The statistical evidence of our failure should have shaken our confidence in our judgments of particular candidates, but it did not. It should also have caused us to moderate our predictions, but it did not. We knew as a general fact that our predictions were little better than random guesses, but we continued to feel and act as if each particular prediction was valid. I was reminded of visual illusions, which remain compelling even when you know that what you see is false. I was so struck by the analogy that I coined a term for our experience: the illusion of validity.

I had discovered my first cognitive fallacy.

Decades later, I can see many of the central themes of my thinking about judgment in that old experience. One of these themes is that people who face a difficult question often answer an easier one instead, without realizing it. We were required to predict a soldier’s performance in officer training and in combat, but we did so by evaluating his behavior over one hour in an artificial situation. This was a perfect instance of a general rule that I call WYSIATI, “What you see is all there is.” We had made up a story from the little we knew but had no way to allow for what we did not know about the individual’s future, which was almost everything that would actually matter. When you know as little as we did, you should not make extreme predictions like “He will be a star.” The stars we saw on the obstacle field were most likely accidental flickers, in which a coincidence of random events — like who was near the wall — largely determined who became a leader. Other events — some of them also random — would determine later success in training and combat.

You may be surprised by our failure: it is natural to expect the same leadership ability to manifest itself in various situations. But the exaggerated expectation of consistency is a common error. We are prone to think that the world is more regular and predictable than it really is, because our memory automatically and continuously maintains a story about what is going on, and because the rules of memory tend to make that story as coherent as possible and to suppress alternatives. Fast thinking is not prone to doubt.

The confidence we experience as we make a judgment is not a reasoned evaluation of the probability that it is right. Confidence is a feeling, one determined mostly by the coherence of the story and by the ease with which it comes to mind, even when the evidence for the story is sparse and unreliable. The bias toward coherence favors overconfidence. An individual who expresses high confidence probably has a good story, which may or may not be true.

I coined the term “illusion of validity” because the confidence we had in judgments about individual soldiers was not affected by a statistical fact we knew to be true — that our predictions were unrelated to the truth. This is not an isolated observation. When a compelling impression of a particular event clashes with general knowledge, the impression commonly prevails. And this goes for you, too. The confidence you will experience in your future judgments will not be diminished by what you just read, even if you believe every word.

I first visited a Wall Street firm in 1984. I was there with my longtime collaborator Amos Tversky, who died in 1996, and our friend Richard Thaler, now a guru of behavioral economics. Our host, a senior investment manager, had invited us to discuss the role of judgment biases in investing. I knew so little about finance at the time that I had no idea what to ask him, but I remember one exchange. “When you sell a stock,” I asked him, “who buys it?” He answered with a wave in the vague direction of the window, indicating that he expected the buyer to be someone else very much like him. That was odd: because most buyers and sellers know that they have the same information as one another, what made one person buy and the other sell? Buyers think the price is too low and likely to rise; sellers think the price is high and likely to drop. The puzzle is why buyers and sellers alike think that the current price is wrong.

Most people in the investment business have read Burton Malkiel’s wonderful book “A Random Walk Down Wall Street.” Malkiel’s central idea is that a stock’s price incorporates all the available knowledge about the value of the company and the best predictions about the future of the stock. If some people believe that the price of a stock will be higher tomorrow, they will buy more of it today. This, in turn, will cause its price to rise. If all assets in a market are correctly priced, no one can expect either to gain or to lose by trading.

We now know, however, that the theory is not quite right. Many individual investors lose consistently by trading, an achievement that a dart-throwing chimp could not match. The first demonstration of this startling conclusion was put forward by Terry Odean, a former student of mine who is now a finance professor at the University of California, Berkeley.

Odean analyzed the trading records of 10,000 brokerage accounts of individual investors over a seven-year period, allowing him to identify all instances in which an investor sold one stock and soon afterward bought another stock. By these actions the investor revealed that he (most of the investors were men) had a definite idea about the future of two stocks: he expected the stock that he bought to do better than the one he sold.

To determine whether those appraisals were well founded, Odean compared the returns of the two stocks over the following year. The results were unequivocally bad. On average, the shares investors sold did better than those they bought, by a very substantial margin: 3.3 percentage points per year, in addition to the significant costs of executing the trades. Some individuals did much better, others did much worse, but the large majority of individual investors would have done better by taking a nap rather than by acting on their ideas. In a paper titled “Trading Is Hazardous to Your Wealth,” Odean and his colleague Brad Barber showed that, on average, the most active traders had the poorest results, while those who traded the least earned the highest returns. In another paper, “Boys Will Be Boys,” they reported that men act on their useless ideas significantly more often than women do, and that as a result women achieve better investment results than men.
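
As a purely hypothetical illustration of the comparison Odean ran, the core calculation looks something like the Python below. The column names and numbers are invented stand-ins, not his data; only the logic (compare the following-year return of each stock sold with that of the stock bought in its place) comes from the text.

import pandas as pd

# Hypothetical trade records: each row is a "switch" in which an investor sold
# one stock and soon bought another. The return columns hold each stock's
# return over the following year.
trades = pd.DataFrame({
    "sold_next_year_return":   [0.12, 0.05, -0.02, 0.09],
    "bought_next_year_return": [0.07, 0.01, -0.06, 0.05],
})

# Average gap between what investors sold and what they bought.
gap = (trades["sold_next_year_return"] - trades["bought_next_year_return"]).mean()
print(f"Sold stocks outperformed bought stocks by {gap:.1%} per year")

With Odean's real records, that gap came out to the 3.3 percentage points per year cited above.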

Of course, there is always someone on the other side of a transaction; in general, it’s a financial institution or professional investor, ready to take advantage of the mistakes that individual traders make. Further research by Barber and Odean has shed light on these mistakes. Individual investors like to lock in their gains; they sell “winners,” stocks whose prices have gone up, and they hang on to their losers. Unfortunately for them, in the short run recent winners tend to do better than recent losers, so individuals sell the wrong stocks. They also buy the wrong stocks. Individual investors predictably flock to stocks in companies that are in the news. Professional investors are more selective in responding to news. These findings provide some justification for the label of “smart money” that finance professionals apply to themselves.

Although professionals are able to extract a considerable amount of wealth from amateurs, few stock pickers, if any, have the skill needed to beat the market consistently, year after year. The diagnostic for the existence of any skill is the consistency of individual differences in achievement. The logic is simple: if individual differences in any one year are due entirely to luck, the ranking of investors and funds will vary erratically and the year-to-year correlation will be zero. Where there is skill, however, the rankings will be more stable. The persistence of individual differences is the measure by which we confirm the existence of skill among golfers, orthodontists or speedy toll collectors on the turnpike.

Mutual funds are run by highly experienced and hard-working professionals who buy and sell stocks to achieve the best possible results for their clients. Nevertheless, the evidence from more than 50 years of research is conclusive: for a large majority of fund managers, the selection of stocks is more like rolling dice than like playing poker. At least two out of every three mutual funds underperform the overall market in any given year.

More important, the year-to-year correlation among the outcomes of mutual funds is very small, barely different from zero. The funds that were successful in any given year were mostly lucky; they had a good roll of the dice. There is general agreement among researchers that this is true for nearly all stock pickers, whether they know it or not — and most do not. The subjective experience of traders is that they are making sensible, educated guesses in a situation of great uncertainty. In highly efficient markets, however, educated guesses are not more accurate than blind guesses.

Some years after my introduction to the world of finance, I had an unusual opportunity to examine the illusion of skill up close. I was invited to speak to a group of investment advisers in a firm that provided financial advice and other services to very wealthy clients. I asked for some data to prepare my presentation and was granted a small treasure: a spreadsheet summarizing the investment outcomes of some 25 anonymous wealth advisers, for eight consecutive years. The advisers’ scores for each year were the main determinant of their year-end bonuses. It was a simple matter to rank the advisers by their performance and to answer a question: Did the same advisers consistently achieve better returns for their clients year after year? Did some advisers consistently display more skill than others?

To find the answer, I computed the correlations between the rankings of advisers in different years, comparing Year 1 with Year 2, Year 1 with Year 3 and so on up through Year 7 with Year 8. That yielded 28 correlations, one for each pair of years. While I was prepared to find little year-to-year consistency, I was still surprised to find that the average of the 28 correlations was .01. In other words, zero. The stability that would indicate differences in skill was not to be found. The results resembled what you would expect from a dice-rolling contest, not a game of skill.
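
For readers who want to see the mechanics, here is a minimal sketch of that calculation in Python. The scores are random placeholders rather than the firm's confidential data; only the procedure (rank the advisers each year, then correlate every one of the 28 pairs of years and average the results) follows the text.

import itertools
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data: rows = 25 advisers, columns = 8 years of performance scores.
# Random numbers stand in for the real results.
rng = np.random.default_rng(0)
scores = rng.normal(size=(25, 8))

# Correlate the advisers' rankings for every pair of years (8 choose 2 = 28 pairs).
pair_correlations = []
for y1, y2 in itertools.combinations(range(8), 2):
    rho, _ = spearmanr(scores[:, y1], scores[:, y2])
    pair_correlations.append(rho)

print(len(pair_correlations))      # 28 pairs of years
print(np.mean(pair_correlations))  # hovers near zero when the data contain no skill

With skill-free data like this, the average of the 28 correlations sits near zero, which is exactly the pattern the real advisers produced.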

No one in the firm seemed to be aware of the nature of the game that its stock pickers were playing. The advisers themselves felt they were competent professionals performing a task that was difficult but not impossible, and their superiors agreed. On the evening before the seminar, Richard Thaler and I had dinner with some of the top executives of the firm, the people who decide on the size of bonuses. We asked them to guess the year-to-year correlation in the rankings of individual advisers. They thought they knew what was coming and smiled as they said, “not very high” or “performance certainly fluctuates.” It quickly became clear, however, that no one expected the average correlation to be zero.

What we told the directors of the firm was that, at least when it came to building portfolios, the firm was rewarding luck as if it were skill. This should have been shocking news to them, but it was not. There was no sign that they disbelieved us. How could they? After all, we had analyzed their own results, and they were certainly sophisticated enough to appreciate their implications, which we politely refrained from spelling out. We all went on calmly with our dinner, and I am quite sure that both our findings and their implications were quickly swept under the rug and that life in the firm went on just as before. The illusion of skill is not only an individual aberration; it is deeply ingrained in the culture of the industry. Facts that challenge such basic assumptions — and thereby threaten people’s livelihood and self-esteem — are simply not absorbed. The mind does not digest them. This is particularly true of statistical studies of performance, which provide general facts that people will ignore if they conflict with their personal experience.

The next morning, we reported the findings to the advisers, and their response was equally bland. Their personal experience of exercising careful professional judgment on complex problems was far more compelling to them than an obscure statistical result. When we were done, one executive I had dined with the previous evening drove me to the airport. He told me, with a trace of defensiveness, “I have done very well for the firm, and no one can take that away from me.” I smiled and said nothing. But I thought, privately: Well, I took it away from you this morning. If your success was due mostly to chance, how much credit are you entitled to take for it?

We often interact with professionals who exercise their judgment with evident confidence, sometimes priding themselves on the power of their intuition. In a world rife with illusions of validity and skill, can we trust them? How do we distinguish the justified confidence of experts from the sincere overconfidence of professionals who do not know they are out of their depth? We can believe an expert who admits uncertainty but cannot take expressions of high confidence at face value. As I first learned on the obstacle field, people come up with coherent stories and confident predictions even when they know little or nothing. Overconfidence arises because people are often blind to their own blindness.

True intuitive expertise is learned from prolonged experience with good feedback on mistakes. You are probably an expert in guessing your spouse’s mood from one word on the telephone; chess players find a strong move in a single glance at a complex position; and true legends of instant diagnoses are common among physicians. To know whether you can trust a particular intuitive judgment, there are two questions you should ask: Is the environment in which the judgment is made sufficiently regular to enable predictions from the available evidence? The answer is yes for diagnosticians, no for stock pickers. Do the professionals have an adequate opportunity to learn the cues and the regularities? The answer here depends on the professionals’ experience and on the quality and speed with which they discover their mistakes. Anesthesiologists have a better chance to develop intuitions than radiologists do. Many of the professionals we encounter easily pass both tests, and their off-the-cuff judgments deserve to be taken seriously. In general, however, you should not take assertive and confident people at their own evaluation unless you have independent reason to believe that they know what they are talking about. Unfortunately, this advice is difficult to follow: overconfident professionals sincerely believe they have expertise, act as experts and look like experts. You will have to struggle to remind yourself that they may be in the grip of an illusion.

Daniel Kahneman is emeritus professor of psychology and of public affairs at Princeton University and a winner of the 2002 Nobel Prize in Economics. This article is adapted from his book “Thinking, Fast and Slow,” out this month from Farrar, Straus & Giroux.
http://www.nytimes.com/2011/10/23/magazine/dont-blink-the-hazards-of-confidence.html?pagewanted=all

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69135
    • View Profile
Gorillas dismantle snares
« Reply #23 on: August 08, 2012, 09:12:43 PM »
http://www.redorbit.com/news/science/1112661209/young-gorillas-observed-dismantling-poacher-snares/
Young Gorillas Observed Dismantling Poacher Snares
July 23, 2012
 

Juvenile gorillas from the Kuryama group dismantle a snare in Rwanda's Volcanoes National Park Credit: Dian Fossey Gorilla Fund International


In what can only be described as an impassioned effort to save their own kind from the hand of poachers, two juvenile mountain gorillas have been observed searching out and dismantling manmade traps and snares in their Rwandan forest home, according to a group studying the majestic creatures.

Conservationists working for the Dian Fossey Gorilla Fund International were stunned when they saw Dukore and Rwema, two brave young mountain gorillas, destroying a trap, similar to ones that snared and killed a member of their family less than a week before. Bush-meat hunters set thousands of traps throughout the forests of Rwanda, hoping to catch antelope and other species, but sometimes they capture apes as well.

In an interview with Mark Prigg at The Daily Mail, Erika Archibald, a spokesperson for the Gorilla Fund, said that John Ndayambaje, a tracker for the group, was conducting his regular rounds when he spotted a snare. As he bent down to dismantle it, a silverback from the group rushed him and made a grunting noise that is considered a warning call. A few moments later the two youngsters Dukore and Rwema rushed up to the snare and began to dismantle it on their own.

Then, seconds after destroying that trap, Archibald continued, Ndayambaje watched the pair, joined by a third juvenile named Tetero, move to a second snare he had not noticed and dismantle it as well. He stood there in amazement.

“We have quite a long record of seeing silverbacks dismantle snares,” Archibald told Prigg. “But we had never seen it passed on to youngsters like that.”  And the youngsters moved “with such speed and purpose and such clarity … knowing,” she added.

“This is absolutely the first time that we’ve seen juveniles doing that … I don’t know of any other reports in the world of juveniles destroying snares,” Veronica Vecellio, gorilla program coordinator at the Dian Fossey Gorilla Fund’s Karisoke Research Center, told National Geographic.

Every day trackers from the Karisoke center scour the forest for snares, dismantling any they find in order to protect the endangered mountain gorillas, which the International Fund for the Conservation of Nature (IUCN) says face “a very high risk of extinction in the wild.”

Adults generally have enough strength to free themselves from the snares, but juveniles usually do not, and often die as a result of snare-related wounds. Such was the case of an ensnared infant, Ngwino, found too late by Karisoke workers last week. The infant’s shoulder was dislocated during an escape attempt, and gangrene had set in after the ropes cut deep into her leg.

A snare consists of a noose tied to a branch or a bamboo stalk. The rope is pulled downward, bending the branch, and a rock or bent stick is used to hold the noose to the ground, keeping the branch tight. Then vegetation is placed over the noose to camouflage it. When an animal budges the rock or stick, the branch swings upward and the noose closes around the prey, usually around a leg, and, depending on the animal's weight, hoists it into the air.

Vecellio said the speed with which everything happened leads her to believe this wasn’t the first time the juveniles had dismantled a trap.

“They were very confident,” she said. “They saw what they had to do, they did it, and then they left.”

Since gorillas in the Kuryama group have been snared before, Vecellio said it is likely that the juveniles know these snares are dangerous. “That’s why they destroyed them.”

“Chimpanzees are always quoted as being the tool users, but I think, when the situation provides itself, gorillas are quite ingenious” too, said veterinarian Mike Cranfield, executive director of the Mountain Gorilla Veterinary Project.

He speculated that the gorillas may have learned how to destroy the traps by watching the Karisoke trackers. “If we could get more of them doing it, it would be great,” he joked.

But Vecellio said it would go against Karisoke center policies and ethos to actively instruct the apes. “We try as much as we can to not interfere with the gorillas. We don’t want to affect their natural behavior.”

Pictures of the incident have gone viral and numerous fans on the Fund’s Facebook page have shared comments cheering for the young gorillas. Archibald said capturing the interaction was “so touching that I felt everybody with any brains would be touched.”



Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69135
    • View Profile
WSJ: Are we really getting smarter?
« Reply #24 on: September 22, 2012, 07:02:02 AM »


Are We Really Getting Smarter?
Americans' IQ scores have risen steadily over the past century. James R. Flynn examines why.
By JAMES R. FLYNN

IQ tests aren't perfect, but they can be useful. If a boy doing badly in class does really well on one, it is worth investigating whether he is being bullied at school or having problems at home. The tests also roughly predict who will succeed at college, though factors like motivation and self-control are at least as important.

 


Advanced nations like the U.S. have experienced massive IQ gains over time (a phenomenon that I first noted in a 1984 study and is now known as the "Flynn Effect"). From the early 1900s to today, Americans have gained three IQ points per decade on both the Stanford-Binet Intelligence Scales and the Wechsler Intelligence Scales. These tests have been around since the early 20th century in some form, though they have been updated over time. Another test, Raven's Progressive Matrices, was invented in 1938, but there are scores for people whose birth dates go back to 1872. It shows gains of five points per decade.

In 1910, scored against today's norms, our ancestors would have had an average IQ of 70 (or 50 if we tested with Raven's). By comparison, our mean IQ today is 130 to 150, depending on the test. Are we geniuses or were they just dense?
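
To make the arithmetic behind those figures explicit, here is a back-of-envelope sketch in Python. It assumes a steady per-decade gain and uses the article's 2012 publication year as the endpoint, so the numbers are approximate.

# Gains per decade reported in the article for the two families of tests.
wechsler_gain = 3              # Stanford-Binet / Wechsler
ravens_gain = 5                # Raven's Progressive Matrices
decades = (2012 - 1910) / 10   # roughly ten decades of measured gains

# A 1910 test-taker scored against today's norms (today's mean fixed at 100):
print(100 - decades * wechsler_gain)   # ~69, i.e. about 70
print(100 - decades * ravens_gain)     # 49, i.e. about 50

# Today's average test-taker scored against 1910 norms:
print(100 + decades * wechsler_gain)   # ~131
print(100 + decades * ravens_gain)     # ~151, hence "130 to 150, depending on the test"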

These alternatives sparked a wave of skepticism about IQ. How could we claim that the tests were valid when they implied such nonsense? Our ancestors weren't dumb compared with us, of course. They had the same practical intelligence and ability to deal with the everyday world that we do. Where we differ from them is more fundamental: Rising IQ scores show how the modern world, particularly education, has changed the human mind itself and set us apart from our ancestors. They lived in a much simpler world, and most had no formal schooling beyond the sixth grade.

The Raven's test uses images to convey logical relationships. The Wechsler has 10 subtests, some of which do much the same, while others measure traits that intelligent people are likely to pick up over time, such as a large vocabulary and the ability to classify objects.

Modern people do so well on these tests because we are new and peculiar. We are the first of our species to live in a world dominated by categories, hypotheticals, nonverbal symbols and visual images that paint alternative realities. We have evolved to deal with a world that would have been alien to previous generations.

Raven's Progressive Matrices are nonverbal, multiple-choice measures of general intelligence; in each test item, the subject is asked to identify the missing element that completes a pattern.
A century ago, people mostly used their minds to manipulate the concrete world for advantage. They wore what I call "utilitarian spectacles." Our minds now tend toward logical analysis of abstract symbols—what I call "scientific spectacles." Today we tend to classify things rather than to be obsessed with their differences. We take the hypothetical seriously and easily discern symbolic relationships.

The mind-set of the past can be seen in interviews between the great psychologist Alexander Luria and residents of rural Russia during the 1920s—people who, like ourselves in 1910, had little formal education.

Luria: What do a fish and crow have in common?

Reply: A fish it lives in water, a crow flies.

Luria: Could you use one word for them both?

Reply: If you called them "animals" that wouldn't be right. A fish isn't an animal, and a crow isn't either. A person can eat a fish but not a crow.

The prescientific person is fixated on differences between things that give them different uses. My father was born in 1885. If you asked him what dogs and rabbits had in common, he would have said, "You use dogs to hunt rabbits." Today a schoolboy would say, "They are both mammals." The latter is the right answer on an IQ test. Today we find it quite natural to classify the world as a prerequisite to understanding it.

Here is another example.

Luria: There are no camels in Germany; the city of B is in Germany; are there camels there or not?

Reply: I don't know, I have never seen German villages. If B is a large city, there should be camels there.

Luria: But what if there aren't any in all of Germany?

Reply: If B is a village, there is probably no room for camels.

The prescientific Russian wasn't about to treat something as important as the existence of camels hypothetically. Resistance to the hypothetical isn't just a state of mind unfriendly to IQ tests. Moral argument is more mature today than a century ago because we take the hypothetical seriously: We can imagine alternate scenarios and put ourselves in the shoes of others.

The following invented examples (not from an IQ test) show how our minds have evolved. All three present a series that implies a relationship; you must discern that relationship and complete the series based on multiple-choice answers:

1. [gun] / [gun] / [bullet] 2. [bow] / [bow] / [blank].

Pictures that represent concrete objects convey the relationship. In 1910, the average person could choose "arrow" as the answer.

1. [square] / [square] / [triangle]. 2. [circle] / [circle] / [blank].

In this question, the relationship is conveyed by shapes, not concrete objects. By 1960, many could choose semicircle as the answer: Just as the square is halved into a triangle, so the circle should be halved.

1. * / & / ? 2. M / B / [blank].

In this question, the relationship is simply that the symbols have nothing in common except that they are the same kind of symbol. That "relationship" transcends the literal appearance of the symbols themselves. By 2010, many could choose "any letter other than M or B" from the list as the answer.

This progression signals a growing ability to cope with formal education, not just in algebra but also in the humanities. Consider the exam questions that schools posed to 14-year-olds in 1910 and 1990. The earlier exams were all about socially valuable information: What were the capitals of the 45 states? Later tests were all about relationships: Why is the capital of many states not the largest city? Rural-dominated state legislatures hated big cities and chose Albany over New York, Harrisburg over Philadelphia, and so forth.

Our lives are utterly different from those led by most Americans before 1910. The average American went to school for less than six years and then worked long hours in factories, shops or agriculture. The only artificial images they saw were drawings or photographs. Aside from basic arithmetic, nonverbal symbols were restricted to musical notation (for an elite) and playing cards. Their minds were focused on ownership, the useful, the beneficial and the harmful.

Widespread secondary education has created a mass clientele for books, plays and the arts. Since 1950, there have been large gains on vocabulary and information subtests, at least for adults. More words mean that more concepts are conveyed. More information means that more connections are perceived. Better analysis of hypothetical situations means more innovation. As the modern mind developed, people performed better not only as technicians and scientists but also as administrators and executives.

A greater pool of those capable of understanding abstractions, more contact with people who enjoy playing with ideas, the enhancement of leisure—all of these developments have benefited society. And they have come about without upgrading the human brain genetically or physiologically. Our mental abilities have grown, simply enough, through a wider acquaintance with the world's possibilities.

—Mr. Flynn is the author of "Are We Getting Smarter? Rising IQ in the 21st Century" (Cambridge University Press).



ccp

  • Power User
  • ***
  • Posts: 18360
    • View Profile
altruism
« Reply #30 on: September 28, 2014, 05:09:41 PM »
Extreme altruism
Right on!
Self-sacrifice, it seems, is the biological opposite of psychopathy
Sep 20th 2014 | From the print edition

FLYERS at petrol stations do not normally ask for someone to donate a kidney to an unrelated stranger. That such a poster, in a garage in Indiana, actually did persuade a donor to come forward might seem extraordinary. But extraordinary people such as the respondent to this appeal (those who volunteer to deliver aid by truck in Syria at the moment might also qualify) are sufficiently common to be worth investigating. And in a paper published this week in the Proceedings of the National Academy of Sciences, Abigail Marsh of Georgetown University and her colleagues do just that. Their conclusion is that extreme altruists are at one end of a “caring continuum” which exists in human populations—a continuum that has psychopaths at the other end.

Biology has long struggled with the concept of altruism. There is now reasonable agreement that its purpose is partly to be nice to relatives (with whom one shares genes) and partly to permit the exchanging of favours. But how the brain goes about being altruistic is unknown. Dr Marsh therefore wondered if the brains of extreme altruists might have observable differences from other brains—and, in particular, whether such differences might be the obverse of those seen in psychopaths.

She and her team used two brain-scanning techniques, structural and functional magnetic-resonance imaging (MRI), to study the amygdalas of 39 volunteers, 19 of whom were altruistic kidney donors. (The amygdalas, of which brains have two, one in each hemisphere, are areas of tissue central to the processing of emotion and empathy.) Structural MRI showed that the right amygdalas of altruists were 8.1% larger, on average, than those of people in the control group, though everyone’s left amygdalas were about the same size. That is, indeed, the obverse of what pertains in psychopaths, whose right amygdalas, previous studies have shown, are smaller than those of controls.

Functional MRI yielded similar results. Participants, while lying in a scanner, were shown pictures of men and women wearing fearful, angry or neutral expressions on their faces. Each volunteer went through four consecutive runs of 80 such images, and the fearful images (but not the other sorts) produced much more activity in the right amygdalas of the altruists than they did in those of the control groups, while the left amygdalas showed no such response. That, again, is the obverse of what previous work has shown is true of psychopaths, though in neither case is it clear why only the right amygdala is affected.

Dr Marsh’s result is interesting as much for what it says about psychopathy as for what it says about extreme altruism. Some biologists regard psychopathy as adaptive. They argue that if a psychopath can bully non-psychopaths into giving him what he wants, he will be at a reproductive advantage as long as most of the population is not psychopathic. The genes underpinning psychopathy will thus persist, though they can never become ubiquitous because psychopathy works only when there are non-psychopaths to prey on.

In contrast, Dr Marsh’s work suggests that what is going on is more like the way human height varies. Being tall is not a specific adaptation (though lots of research suggests tall people do better, in many ways, than short people do). Rather, tall people (and also short people) are outliers caused by unusual combinations of the many genes that govern height. If Dr Marsh is correct, psychopaths and extreme altruists may be the result of similar, rare combinations of genes underpinning the more normal human propensity to be moderately altruistic.

From the print edition: Science and technology

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69135
    • View Profile
Worm mind, robot body
« Reply #31 on: December 15, 2014, 04:41:26 PM »

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69135
    • View Profile
The Artificial Intelligence Revolution
« Reply #32 on: February 23, 2015, 07:05:06 PM »

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69135
    • View Profile
Ashkenazi intelligence
« Reply #33 on: February 24, 2015, 12:07:16 PM »
Haven't read this yet, posting it here for my future convenience:

http://ieet.org/index.php/IEET/more/pellissier20130620 

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69135
    • View Profile
Artificial General Intelligence AGI about to up end the world as we know it
« Reply #36 on: March 12, 2016, 10:35:13 PM »
Game ON: the end of the old economic system is in sight
Posted: 12 Mar 2016 11:23 AM PST
Google is a pioneer in limited artificial general intelligence (that is, computers that can learn without being preprogrammed). One successful example is AlphaGo.  It just beat Go grandmaster Lee Sedol three times in a row.
 
 
What makes this win interesting is that AlphaGo didn't win through brute force.  Go is too complicated for that:
...the average 150-move game contains more possible board configurations — 10^170 — than there are atoms in the Universe, so it can’t be solved by algorithms that search exhaustively for the best move.
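
As a rough sanity check on that figure (a sketch, not from the article): each of the 361 points on a 19x19 board can be empty, black, or white, which gives a loose upper bound on the number of configurations, and even that bound dwarfs the roughly 10^80 atoms usually cited for the observable universe.

import math

board_points = 19 * 19           # 361 intersections on a Go board
upper_bound = 3 ** board_points  # empty / black / white at each point

print(math.log10(upper_bound))        # ~172, i.e. about 10^172 configurations
print(math.log10(upper_bound) - 80)   # ~92 orders of magnitude beyond ~10^80 atoms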
 
It also didn't win by extensive preprogramming by talented engineers, like IBM's Deep Blue did to win at Chess. 
 
Instead, AlphaGo won this victory by learning how to play the game from scratch, using the following process (a rough code sketch of the loop appears after this list):
   • No assumptions.  AlphaGo approached the game without any assumptions.  This is called a model-free approach.  It allows the system to program itself from scratch, building complex models that human programmers can't understand or match.
   • Big Data.  It then learned the game from a database of 30 million positions drawn from games previously played by human beings.  The ability to bootstrap a model from data removes almost all of the need for the engineering and programming talent currently required for big systems.  That's huge.
   • Big Sim (by the way, Big Sim will be as well known as Big Data in five years; heard it here first).  Finally, it applied and honed that learning by playing against itself on 50 computers, night and day, until it became good enough to beat a human grandmaster.
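
To make the shape of that process concrete, here is a minimal, self-contained sketch in Python. It uses a deliberately trivial take-away game and invented "recorded play" rather than Go, and it mirrors only the bootstrap-then-self-play loop described above; AlphaGo's actual system relies on deep neural networks and Monte Carlo tree search, none of which is modeled here.

import random
from collections import defaultdict

# Toy game: players alternately remove 1-3 stones from a pile of 21;
# whoever takes the last stone wins. Everything below is illustrative only.
MOVES = (1, 2, 3)

# Stage 1 ("Big Data"): seed move preferences from recorded play. These
# (pile_size, move) pairs are hypothetical stand-ins for a database of human games.
recorded_play = [(21, 1), (17, 1), (13, 1), (9, 1), (5, 1), (20, 3), (16, 3)]
prefs = defaultdict(lambda: {m: 1.0 for m in MOVES})   # uniform prior
for pile, move in recorded_play:
    prefs[pile][move] += 1.0

def choose(pile):
    legal = [m for m in MOVES if m <= pile]
    weights = [prefs[pile][m] for m in legal]
    return random.choices(legal, weights)[0]

# Stage 2 ("Big Sim"): self-play, reinforcing the moves made by each game's winner.
for _ in range(20000):
    pile, history, player = 21, {0: [], 1: []}, 0
    while pile > 0:
        move = choose(pile)
        history[player].append((pile, move))
        pile -= move
        winner = player          # if the pile is now empty, this player just won
        player = 1 - player
    for pile_seen, move in history[winner]:
        prefs[pile_seen][move] += 0.5

# Show the currently preferred move at each pile size after self-play.
print({p: max(prefs[p], key=prefs[p].get) for p in range(1, 22)})

The point is structural: nothing about good play is programmed in beyond the rules. Preferences are bootstrapped from recorded play and then sharpened by the system playing against itself, the same two-step shape the post describes.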
The surprise of this victory isn't that it occurred.  Most expected it would, eventually... 
 
Instead, the surprise is how fast it happened.  How fast AlphaGo was able to bootstrap itself to a mastery of the game.  It was fast. Unreasonably fast.
 
However, this victory goes way beyond the game of Go.  It is important because AlphaGo uses a generic technique for learning.  A technique that can be used to master a HUGE range of activities, quickly.  Activities that people get paid for today.
 
This implies the following:
   • This technology is going to cut through the global economy like a hot knife through butter.  It learns fast and largely on its own.  It's widely applicable.  It doesn't only master what it has seen, it can innovate.  For example: some of the unheard-of moves made by AlphaGo were considered "beautiful" by the Grandmaster it beat.
   • Limited AGI (deep learning in particular) will have the ability to do nearly any job currently being done by human beings -- from lawyers to judges, nurses to doctors, driving to construction -- potentially at a grandmaster's level of capability.  This makes it a buzzsaw.
   • Very few people (and I mean very few) will be able to stay ahead of the limited AGI buzzsaw.  It learns so quickly that the fate of people stranded in former factory towns gutted by "free trade" is likely to become the fate of the highest-paid technorati.  They simply don't have the capacity to learn fast enough or be creative enough to stay ahead of it.
Have fun,
 
John Robb
 
PS:  Isn't it ironic (or not) that at the very moment in history when we demonstrate a limited AGI (potentially, a tsunami of technological change) the western industrial bureaucratic political system starts to implode due to an inability to deal with the globalization (economic, finance and communications) enabled by the last wave of technological change?

PPS:  This has huge implications for warfare.  I'll write more about those soon.  Laying a foundation for understanding this change first.

DougMacG

  • Power User
  • ***
  • Posts: 18130
    • View Profile
Inherited Intelligence, Charles Murray, Bell Curve
« Reply #37 on: March 23, 2016, 10:02:07 AM »
Also pertains to race and education. 
Charles Murray was co-author of The Bell Curve, a very long scientific book that became a landmine because of one small part of it reporting differences in measured intelligence between races; therefore, the critics concluded, the author must be a racist.  His co-author died at about the time the book was published, so Murray has owned the work alone in the two decades since.

Intelligence is 40%-80% inherited, a wide range that is nowhere near zero or 100%.

People tend to marry near their own intelligence level, making the differences grow rather than equalize over time.  He predicted this would have societal effects, and those predictions have most certainly come true.

Being called a racist for publishing scientific data is nothing new, but Charles Murray has received more than his share of it.  What he could have or should have done is cover up the real results to fit what people like to hear, like the climate scientists do.  He didn't.

Most recently his work received a public rebuke from the President of Virginia Tech.

His response to that is a bit long but quite a worthwhile read that will save you the time of reading his 3-4 inch thick hardcover book if you haven't already read this important work.

https://www.aei.org/publication/an-open-letter-to-the-virginia-tech-community/

Charles Murray
March 17, 2016 9:00 am

An open letter to the Virginia Tech community

Last week, the president of Virginia Tech, Tim Sands, published an “open letter to the Virginia Tech community” defending lectures delivered by deplorable people like me (I’m speaking on the themes of Coming Apart on March 25). Bravo for President Sands’s defense of intellectual freedom. But I confess that I was not entirely satisfied with his characterization of my work. So I’m writing an open letter of my own.

Dear Virginia Tech community,

Since President Sands has just published an open letter making a serious allegation against me, it seems appropriate to respond. The allegation: “Dr. Murray is well known for his controversial and largely discredited work linking measures of intelligence to heredity, and specifically to race and ethnicity — a flawed socioeconomic theory that has been used by some to justify fascism, racism and eugenics.”

Let me make an allegation of my own. President Sands is unfamiliar either with the actual content of The Bell Curve — the book I wrote with Richard J. Herrnstein to which he alludes — or with the state of knowledge in psychometrics.

The Bell Curve and Charles Murray
I should begin by pointing out that the topic of The Bell Curve was not race, but, as the book’s subtitle says, “Intelligence and Class Structure in American Life.” Our thesis was that over the last half of the 20th century, American society has become cognitively stratified. At the beginning of the penultimate chapter, Herrnstein and I summarized our message:

Predicting the course of society is chancy, but certain tendencies seem strong enough to worry about:
• An increasingly isolated cognitive elite.
• A merging of the cognitive elite with the affluent.
• A deteriorating quality of life for people at the bottom end of the cognitive distribution.
Unchecked, these trends will lead the U.S. toward something resembling a caste society, with the underclass mired ever more firmly at the bottom and the cognitive elite ever more firmly anchored at the top, restructuring the rules of society so that it becomes harder and harder for them to lose. [p. 509].
It is obvious that these conclusions have not been discredited in the twenty-two years since they were written. They may be more accurately described as prescient.

Now to the substance of President Sands’s allegation.

The heritability of intelligence

Richard Herrnstein and I wrote that cognitive ability as measured by IQ tests is heritable, somewhere in the range of 40% to 80% [pp. 105–110], and that heritability tends to rise as people get older. This was not a scientifically controversial statement when we wrote it; that President Sands thinks it has been discredited as of 2016 is amazing.

You needn’t take my word for it. In the wake of the uproar over The Bell Curve, the American Psychological Association (APA) assembled a Task Force on Intelligence consisting of eleven of the most distinguished psychometricians in the United States. Their report, titled “Intelligence: Knowns and Unknowns,” was published in the February 1996 issue of the APA’s peer-reviewed journal, American Psychologist. Regarding the magnitude of heritability (represented by h²), here is the Task Force’s relevant paragraph. For purposes of readability, I have omitted the citations embedded in the original paragraph:

If one simply combines all available correlations in a single analysis, the heritability (h²) works out to about .50 and the between-family variance (c²) to about .25. These overall figures are misleading, however, because most of the relevant studies have been done with children. We now know that the heritability of IQ changes with age: h² goes up and c² goes down from infancy to adulthood. In childhood h² and c² for IQ are of the order of .45 and .35; by late adolescence h² is around .75 and c² is quite low (zero in some studies) [p. 85].
The position we took on heritability was squarely within the consensus state of knowledge. Since The Bell Curve was published, the range of estimates has narrowed somewhat, tending toward modestly higher estimates of heritability.
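
For context on where figures like h² and c² come from, here is a minimal sketch of the classical twin-study approximation known as Falconer's formulas. This is an illustration only, not a calculation from Murray's letter or the APA report, and the correlations are invented placeholders.

# Hypothetical IQ correlations for twins reared together.
r_mz = 0.86   # identical (monozygotic) twins
r_dz = 0.60   # fraternal (dizygotic) twins

h2 = 2 * (r_mz - r_dz)   # heritability: additive genetic share of the variance
c2 = r_mz - h2           # shared-environment share
e2 = 1 - r_mz            # everything else, including measurement error

print(h2, c2, e2)        # ~0.52, ~0.34, ~0.14 with these placeholder values

Modern estimates rely on larger models and more types of relatives, but the basic idea, partitioning observed variance into genetic, shared-environment, and residual components, is the same.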

Intelligence and race

There’s no doubt that discussing intelligence and race was asking for trouble in 1994, as it still is in 2016. But that’s for political reasons, not scientific ones.

There’s no doubt that discussing intelligence and race was asking for trouble in 1994, as it still is in 2016. But that’s for political reasons, not scientific ones. Once again, the state of knowledge about the basics is not particularly controversial. The mean scores for all kinds of mental tests vary by ethnicity. No one familiar with the data disputes that most elemental statement. Regarding the most sensitive difference, between Blacks and Whites, Herrnstein and I followed the usual estimate of one standard deviation (15 IQ points), but pointed out that the magnitude varied depending on the test, sample, and where and how it was administered. What did the APA Task Force conclude? “Although studies using different tests and samples yield a range of results, the Black mean is typically about one standard deviation (about 15 points) below that of Whites. The difference is largest on those tests (verbal or nonverbal) that best represent the general intelligence factor g” [p. 93].

Is the Black/White differential diminishing? In The Bell Curve, we discussed at length the evidence that the Black/White differential has narrowed [pp. 289–295], concluding that “The answer is yes with (as usual) some qualifications.” The Task Force’s treatment of the question paralleled ours, concluding with “[l]arger and more definitive studies are needed before this trend can be regarded as established” [p. 93].

Can the Black/White differential be explained by test bias? In a long discussion [pp. 280–286], Herrnstein and I presented the massive evidence that the predictive validity of mental tests is similar for Blacks and Whites and that cultural bias in the test items or their administration do not explain the Black/White differential. The Task Force’s conclusions regarding predictive validity: “Considered as predictors of future performance, the tests do not seem to be biased against African Americans” [p. 93]. Regarding cultural bias and testing conditions:  “Controlled studies [of these potential sources of bias] have shown, however, that none of them contributes substantially to the Black/White differential under discussion here” [p. 94].

Can the Black/White differential be explained by socioeconomic status? We pointed out that the question has two answers: Statistically controlling for socioeconomic status (SES) narrows the gap. But the gap does not narrow as SES goes up — i.e., measured in standard deviations, the differential between Blacks and Whites with high SES is not narrower than the differential between those with low SES [pp. 286–289]. Here’s the APA Task Force on this topic:

Several considerations suggest that [SES] cannot be the whole explanation. For one thing, the Black/White differential in test scores is not eliminated when groups or individuals are matched for SES. Moreover, the data reviewed in Section 4 suggest that—if we exclude extreme conditions—nutrition and other biological factors that may vary with SES account for relatively little of the variance in such scores [p. 94].

And so on. The notion that Herrnstein and I made claims about ethnic differences in IQ that have been scientifically rejected is simply wrong. We deliberately remained well within the mainstream of what was confidently known when we wrote. None of those descriptions have changed much in the subsequent twenty-two years, except to be reinforced as more has been learned. I have no idea what countervailing evidence President Sands could have in mind.

At this point, some readers may be saying to themselves, “But wasn’t The Bell Curve the book that tried to prove blacks were genetically inferior to whites?” I gather that was President Sands’ impression as well. It has no basis in fact. Knowing that people are preoccupied with genes and race (it was always the first topic that came up when we told people we were writing a book about IQ), Herrnstein and I offered a seventeen-page discussion of genes, race, and IQ [pp. 295–311]. The first five pages were devoted to explaining the context of the issue — why, for example, the heritability of IQ among humans does not necessarily mean that differences between groups are also heritable. Four pages were devoted to the technical literature arguing that genes were implicated in the Black/White differential. Eight pages were devoted to arguments that the causes were environmental. Then we wrote:

If the reader is now convinced that either the genetic or environmental explanation has won out to the exclusion of the other, we have not done a sufficiently good job of presenting one side or the other. It seems highly likely to us that both genes and the environment have something to do with racial differences. What might the mix be? We are resolutely agnostic on that issue; as far as we can determine, the evidence does not yet justify an estimate. [p. 311].
That’s it—the sum total of every wild-eyed claim that The Bell Curve makes about genes and race. There’s nothing else. Herrnstein and I were guilty of refusing to say that the evidence justified a conclusion that the differential had to be entirely environmental. On this issue, I have a minor quibble with the APA Task Force, which wrote “There is not much direct evidence on [a genetic component], but what little there is fails to support the genetic hypothesis” [p. 95]. Actually there was no direct evidence at all as of the mid-1990s, but the Task Force chose not to mention a considerable body of indirect evidence that did in fact support the genetic hypothesis. No matter. The Task Force did not reject the possibility of a genetic component. As of 2016, geneticists are within a few years of knowing the answer for sure, and I am content to wait for their findings.

But I cannot leave the issue of genes without mentioning how strongly Herrnstein and I rejected the importance of whether genes are involved. This passage from The Bell Curve reveals how very, very different the book is from the characterization of it that has become so widespread:

In sum: If tomorrow you knew beyond a shadow of a doubt that all the cognitive differences between races were 100 percent genetic in origin, nothing of any significance should change. The knowledge would give you no reason to treat individuals differently than if ethnic differences were 100 percent environmental. By the same token, knowing that the differences are 100 percent environmental in origin would not suggest a single program or policy that is not already being tried. It would justify no optimism about the time it will take to narrow the existing gaps. It would not even justify confidence that genetically based differences will not be upon us within a few generations. The impulse to think that environmental sources of difference are less threatening than genetic ones is natural but illusory.
In any case, you are not going to learn tomorrow that all the cognitive differences between races are 100 percent genetic in origin, because the scientific state of knowledge, unfinished as it is, already gives ample evidence that environment is part of the story. But the evidence eventually may become unequivocal that genes are also part of the story. We are worried that the elite wisdom on this issue, for years almost hysterically in denial about that possibility, will snap too far in the other direction. It is possible to face all the facts on ethnic and race differences on intelligence and not run screaming from the room. That is the essential message [pp. 314-315].
I have been reluctant to spend so much space discussing The Bell Curve’s treatment of race and intelligence because it was such an ancillary topic in the book. Focusing on it in this letter has probably made it sound as if it was as important as President Sands’s open letter implied.

But I had to do it. For two decades, I have had to put up with misrepresentations of The Bell Curve. It is annoying. After so long, when so many of the book’s main arguments have been so dramatically vindicated by events, and when our presentations of the meaning and role of IQ have been so steadily reinforced by subsequent research in the social sciences, not to mention developments in neuroscience and genetics, President Sands’s casual accusation that our work has been “largely discredited” was especially exasperating. The president of a distinguished university should take more care.

It is in that context that I came to the end of President Sands’s indictment, accusing me of promulgating “a flawed socioeconomic theory that has been used by some to justify fascism, racism and eugenics.” At that point, President Sands went beyond the kind of statement that merely reflects his unfamiliarity with The Bell Curve and/or psychometrics. He engaged in intellectual McCarthyism.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69135
    • View Profile
Qualia
« Reply #39 on: April 13, 2017, 11:53:51 AM »
http://neurohacker.com/qualia/


My son is intrigued by this.  Any comments?

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
Re: Qualia
« Reply #40 on: April 20, 2017, 07:33:10 PM »
http://neurohacker.com/qualia/


My son is intrigued by this.  Any comments?

My money is on this being a ripoff.

ccp

  • Power User
  • ***
  • Posts: 18360
    • View Profile
CD, FWIW I agree with GM
« Reply #41 on: April 22, 2017, 11:03:53 AM »
https://www.theatlantic.com/health/archive/2013/07/the-vitamin-myth-why-we-think-we-need-supplements/277947/

Crafty,
most if not all of the supplement sales carry a similar pattern of promotion. You get someone with a science background who recites biochemical pathways showing that a particular substance is involved in some sort of function that is needed for health. From vitamin C to B12 to cofactors with magnesium, and hundreds and probably thousands more.

They impress the non-scientist with "cofactors," long chemical names, and cited studies that show some relationship to our health. Then they may state that taking large doses of the cofactor or other chemical increases the benefit to our health over just normal doses. Or they will vary the presentation with claims that the nutrient or chemical has to be taken in a certain way with other substances, and then, and only then, would we all reap some increased benefit to our "prostate health, our cognitive health, our digestive health, more energy," etc.

Then they find mostly obscure studies by usually second-rate or no-name researchers who are spending grant money, trying to make some sort of name for themselves, or, I even suspect, at times making up data for bribes, and then publish the data and their "research" in one of the money-making journals that are usually second-rate or are not well monitored or peer reviewed (even that process is subject to outright fraud).

So now they cite the impressive-sounding biochemistry in order to sound like they understand something that the rest of us do not, and claim they "discovered" this chemical (or chemicals) that these usually insignificant, if not fraudulent, studies suggest has some sort of benefit.

The chemicals are often obscure, from some exotic jungle or faraway ocean or island, or come with some claim that they are the only ones who can provide it in the proper purity or concentration or mix or other elixir that no one else can duplicate.

If any real scientist or doctor disputes their claim, they come back with a vengeance, arguing that the doctor or scientist is just threatened by this "cure" that would put the doctor or scientist out of business.

You don't have to take my word for it, but the vast majority of these things, if not all, are scams. They all have similar themes, with variations that play over and over again to people who are looking to stay healthy, stay young, get an edge in life, have more sexual prowess, remember more, be smarter.

There are billions to be made.

I hope I don't sound like some condescending doctor who thinks he knows it all.  I don't.  And I know I don't. 
But even on Shark Tank, when some entrepreneur came on trying to get the sharks to buy into some sort of supplement, they said all the supplements are just a "con."

FWIW I agree with it.




Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69135
    • View Profile
Re: Intelligence and Psychology
« Reply #42 on: April 22, 2017, 02:19:56 PM »
Thank you!


Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69135
    • View Profile
Grit
« Reply #46 on: October 09, 2017, 08:31:50 AM »

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69135
    • View Profile
Mark Cuban on AI
« Reply #47 on: November 06, 2017, 09:02:58 AM »
https://www.marketwatch.com/story/mark-cuban-tells-kyle-bass-ai-will-change-everything-weighs-in-on-bitcoin-2017-11-04?mod=cx_picks&cx_navSource=cx_picks&cx_tag=other&cx_artPos=7#cxrecs_s

There is no way to beat the machines, so you’d better bone up on what makes them tick.

That was the advice of billionaire investor Mark Cuban, who was interviewed by Kyle Bass, founder and principal of Hayman Capital Management, in late October for Real Vision. The interview published Nov. 3.

“I think artificial intelligence is going to change everything, everything, 180 degrees,” said Cuban, who added that the changes brought about by AI would “dwarf” the advances that have been seen in technology over the last 30 years or more, even the internet.

The owner of the NBA’s Dallas Mavericks and a regular on the TV show “Shark Tank,” said AI is going to displace a lot of jobs, something that will play out fast, over the next 20 to 30 years. He said real estate is one industry that’s likely to get hit hard.

Read: Kyle Bass says this will be the first sign of a bigger market meltdown

“So, the concept of you calling in to make an appointment to have somebody pick up your car to get your oil changed, right — someone will still drive to get your car, but there’s going to be no people in transacting any of it,” he said.

Cuban says he’s trying to learn all he can right now about machine learning, neural networks, deep learning, writing code and programming languages such as Python. Machine learning says, “OK, we can take a lot more variables than you can ever think of,” he said.
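
Cuban’s point about “more variables than you can ever think of” can be made concrete with a few lines of Python. This is only an illustrative sketch on synthetic data (it assumes numpy and scikit-learn are installed; none of the names or numbers come from the interview):

```python
# Minimal illustration of "more variables than you can ever think of":
# a plain regularized linear model fit to synthetic data with 500 input variables.
# The data are random and purely illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 2000, 500           # far more features than a person would hand-pick
X = rng.normal(size=(n_samples, n_features))
true_weights = rng.normal(size=n_features)  # the unknown "real" relationship
y = X @ true_weights + rng.normal(scale=5.0, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)
print(f"R^2 on held-out data with {n_features} variables: {model.score(X_test, y_test):.2f}")
```

A human analyst would struggle to hand-pick 500 predictors; a regularized linear model handles them routinely, which is the gist of Cuban’s remark.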

And AI is seeing big demand when it comes to jobs, he said. “At the high end, we can’t pay enough to get — so when I talk about, within the artificial intelligence realm, there’s a company in China paying million-dollar bonuses to get the best graduates,” said Cuban.

The U.S. is falling badly behind when it comes to AI, with Montreal now the “center of the universe for computer vision. It’s not U.S.-based schools that are dominating any longer in those areas,” he said.

The AI companies

As for companies standing to benefit from AI, Cuban said he “thinks the FANG stocks are going to crush them,” noting that his biggest public holding is Amazon.com Inc. (AMZN).

“They’re the world’s greatest startups with liquidity. If you look at them as just a public company where you want to see what the P/E ratio is and what the discounted cash value-- you’re never going to get it, right? You’re never going to see it. And if you say Jeff Bezos (chief executive officer of Amazon), Reed Hastings (chief executive officer of Netflix Inc., NFLX) — those are my 2 biggest holdings,” he said.

Read: 10 wildly successful people and the surprising jobs that kick-started their careers

Cuban said he’s less sold on Apple Inc. (AAPL), though he said it’s trying to make progress on AI, along with Alphabet Inc. (GOOGL) and Facebook Inc. (FB). “They’re just nonstop startups. They’re in a war. And you can see the market value accumulating to them because of that,” he said.

But still, they aren’t all owning AI yet, and there are lots of opportunities for smaller companies, he added.

On digital currencies and ICO

While Bass commented that he has been just a spectator when it comes to blockchain — a decentralized ledger used to record and verify transactions — Cuban said he’s a big fan. But when it comes to bitcoin (BTCUSD), ethereum and other cryptocurrencies, he said it would be a struggle to see them become real currencies because only a limited number of transactions can be done.

Read: Two ETF sponsors file for funds related to blockchain, bitcoin’s foundational technology

“So, it’s going to be very difficult for it to be a currency when the time and the expense of doing a transaction is 100 times what you can do over a Visa or Mastercard, right?” asked Cuban, adding that really the only value of bitcoin and ethereum is that they are just digital assets that are collectible.

Read: Bitcoin may be staging the biggest challenge yet to gold and silver

“And in this particular case, it’s a brilliant collectible that’s probably more like art than baseball cards, stamps, or coins, right, because there’s a finite amount that are going to be made, right? There are 21.9 million bitcoins that are going to be made,” he said.
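
For reference, the commonly cited cap from the protocol’s issuance schedule is about 21 million coins: the block reward starts at 50 BTC and halves every 210,000 blocks, and the sum of that series is the supply limit. A quick sketch of the arithmetic (a minimal illustration assuming the standard halving schedule, and ignoring lost coins):

```python
# Quick check of the "finite amount" point: summing bitcoin's block rewards,
# which start at 50 BTC and halve every 210,000 blocks, gives the supply cap.
# Assumes the standard issuance schedule; rewards below one satoshi (1e-8 BTC)
# are treated as zero.

def bitcoin_supply_cap(initial_reward=50.0, blocks_per_halving=210_000):
    total, reward = 0.0, initial_reward
    while reward >= 1e-8:                      # the protocol cannot pay out below one satoshi
        total += reward * blocks_per_halving
        reward /= 2.0
    return total

print(f"approximate supply cap: {bitcoin_supply_cap():,.0f} BTC")  # ~21,000,000
```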

Cuban said initial coin offerings — fundraising for new cryptocurrency ventures — “really are an opportunity,” and he has been involved in UniCoin, which does ETrade, and Unikrm, which does legal sports betting for esports and other sports outside the United States.

Read: What is an ICO?

But he and Bass both commented about how the industry needs regulating, with Bass noting that ICOs have raised $3 billion this year, $2 billion of it going into September. While many are “actually going to do well,” he said, “so many are just completely stupid and frauds.”

“It’s the dumb ones that are going to get shut down,” agreed Cuban.

One problem: “There’s nobody at the top that has any understanding of it,” he added, referring to the Securities and Exchange Commission.

Cuban ended the interview with some advice on where to invest now. He said for those investors not too knowledgeable about markets, the best bet is a cheap S&P 500 (SPX) fund, but that putting 5% in bitcoin or ethereum isn’t a bad idea on the theory that it’s like investing in artwork.

Listen to the whole interview on Real Vision here

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69135
    • View Profile
Intelligence across the generations
« Reply #49 on: March 15, 2018, 10:54:38 PM »