Fire Hydrant of Freedom


Title: Intelligence and Psychology, Artificial Intelligence
Post by: Crafty_Dog on July 26, 2009, 04:43:46 AM
Scientists worry machines may outsmart man
By JOHN MARKOFF
Published: July 25, 2009
A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously.

Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.

Their concern is that further advances could create profound social disruptions and even have dangerous consequences.

As examples, the scientists pointed to a number of technologies as diverse as experimental medical systems that interact with patients to simulate empathy, and computer worms and viruses that defy extermination and could thus be said to have reached a “cockroach” stage of machine intelligence.

While the computer scientists agreed that we are a long way from Hal, the computer that took over the spaceship in “2001: A Space Odyssey,” they said there was legitimate concern that technological progress would transform the work force by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviors.

The researchers — leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California — generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet. But they agreed that robots that can kill autonomously are either already here or will be soon.

They focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed. What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smart phones?

The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home. Just last month, a service robot developed by Willow Garage in Silicon Valley proved it could navigate the real world.

A report from the conference, which took place in private on Feb. 25, is to be issued later this year. Some attendees discussed the meeting for the first time with other scientists this month and in interviews.

The conference was organized by the Association for the Advancement of Artificial Intelligence, and in choosing Asilomar for the discussions, the group purposefully evoked a landmark event in the history of science. In 1975, the world’s leading biologists also met at Asilomar to discuss the new ability to reshape life by swapping genetic material among organisms. Concerned about possible biohazards and ethical questions, scientists had halted certain experiments. The conference led to guidelines for recombinant DNA research, enabling experimentation to continue.

The meeting on the future of artificial intelligence was organized by Eric Horvitz, a Microsoft researcher who is now president of the association.

Dr. Horvitz said he believed computer scientists must respond to the notions of superintelligent machines and artificial intelligence systems run amok.

The idea of an “intelligence explosion” in which smart machines would design even more intelligent machines was proposed by the mathematician I. J. Good in 1965. Later, in lectures and science fiction novels, the computer scientist Vernor Vinge popularized the notion of a moment when humans will create smarter-than-human machines, causing such rapid change that the “human era will be ended.” He called this shift the Singularity.

This vision, embraced in movies and literature, is seen as plausible and unnerving by some scientists like William Joy, co-founder of Sun Microsystems. Other technologists, notably Raymond Kurzweil, have extolled the coming of ultrasmart machines, saying they will offer huge advances in life extension and wealth creation.

“Something new has taken place in the past five to eight years,” Dr. Horvitz said. “Technologists are replacing religion, and their ideas are resonating in some ways with the same idea of the Rapture.”

The Kurzweil version of technological utopia has captured imaginations in Silicon Valley. This summer an organization called the Singularity University began offering courses to prepare a “cadre” to shape the advances and help society cope with the ramifications.

“My sense was that sooner or later we would have to make some sort of statement or assessment, given the rising voice of the technorati and people very concerned about the rise of intelligent machines,” Dr. Horvitz said.

The A.A.A.I. report will try to assess the possibility of “the loss of human control of computer-based intelligences.” It will also grapple, Dr. Horvitz said, with socioeconomic, legal and ethical issues, as well as probable changes in human-computer relationships. How would it be, for example, to relate to a machine that is as intelligent as your spouse?

Dr. Horvitz said the panel was looking for ways to guide research so that technology improved society rather than moved it toward a technological catastrophe. Some research might, for instance, be conducted in a high-security laboratory.

The meeting on artificial intelligence could be pivotal to the future of the field. Paul Berg, who was the organizer of the 1975 Asilomar meeting and received a Nobel Prize for chemistry in 1980, said it was important for scientific communities to engage the public before alarm and opposition becomes unshakable.

“If you wait too long and the sides become entrenched like with G.M.O.,” he said, referring to genetically modified foods, “then it is very difficult. It’s too complex, and people talk right past each other.”

Tom Mitchell, a professor of artificial intelligence and machine learning at Carnegie Mellon University, said the February meeting had changed his thinking. “I went in very optimistic about the future of A.I. and thinking that Bill Joy and Ray Kurzweil were far off in their predictions,” he said. But, he added, “The meeting made me want to be more outspoken about these issues and in particular be outspoken about the vast amounts of data collected about our personal lives.”

Despite his concerns, Dr. Horvitz said he was hopeful that artificial intelligence research would benefit humans, and perhaps even compensate for human failings. He recently demonstrated a voice-based system that he designed to ask patients about their symptoms and to respond with empathy. When a mother said her child was having diarrhea, the face on the screen said, “Oh no, sorry to hear that.”

A physician told him afterward that it was wonderful that the system responded to human emotion. “That’s a great idea,” Dr. Horvitz said he was told. “I have no time for that.”

Title: Creativity as the Necessary Ingredient
Post by: Body-by-Guinness on August 20, 2009, 09:39:36 AM
What Makes A Genius?

By Andrea Kuszewski
Created Aug 20 2009 - 2:26am
What is the difference between "intelligence" and "genius"?  Creativity, of course!

There was an article recently in Scientific American that discussed creativity and the childhood signs that are precursors to creative achievement in adulthood. The authors cite work done by Michigan State University researchers Robert and Michele Root-Bernstein, a physiologist and a theater instructor working in collaboration, who presented their findings at an annual meeting of the APA this past March. Since I research creativity as well as intelligence, I found the points brought up in the article quite intriguing, yet not surprising.

One of the best observations stated in the article regarding achievement was this:
"... most highly creative people are polymaths- they enjoy and excel at a range of challenging activities. For instance, in a survey of scientists at all levels of achievement, the [researchers] found that only about one sixth report engaging in a secondary activity of an artistic or creative nature, such as painting or writing non-scientific prose. In contrast, nearly all Nobel Prize winners in science have at least one other creative activity that they pursue seriously. Creative breadth, the [researchers] argue, is an important but understudied component of genius."

Everyone is fascinated by famous geniuses like Albert Einstein. They speculate as to what made him so unique and brilliant, but no one has been able to identify exactly what "it" is. If you mention "intelligence research", the average person assumes you are speaking of that top 1 or 2%, the IQs over 145, the little kids you see on TV passing out during Spelling Bees because they are freaking out from the pressure of having to spell antidisestablishmentarianism on a stage before hundreds of onlookers.

But the reality is that most intelligence researchers don't focus on the top 1 or 2%; they look at the general population, whose average score is 100, and generally focus their attention on the lower to middle portion of the distribution.

There may be a multitude of reasons why most researchers focus their study on the lower end of the distribution; one I can see is that the correlations between the individual abilities measured on IQ tests and the actual overall ability of the person taking the test are strongest in that portion of the distribution: IQ scores of 110 and below.

The point I just made I have made before (which you will recognize if you read any of my pieces on intelligence), so nothing new there. However, what I found especially promising about the work done by the Root-Bernsteins is that instead of merely trying to analyze IQ scores, they actually looked at the attributes of successful, intelligent, creative people and figured out what it was they had going for them that other highly intelligent people did not: essentially, what the difference was between "intelligent" and "genius".

(the paper abstracts from the symposium describing their methods can be read here)

Now, some hard-core statistician-types may balk at their methods, screaming, "Case studies are not valid measures of intelligence!" and to a certain degree, they have a point. Yes, they initially looked at case studies of successful individuals, but then they surveyed scientists across multiple fields and found that the highest achievers in their domain (as indicated by earning the Nobel Prize) were skilled in multiple domains, at least one of these considered to be "creative", such as music, art, or non-scientific writing.

We would probably consider most scientists to be intelligent. But are they all geniuses? Do geniuses have the highest IQ scores? Richard Feynman is undeniably considered to be a genius. While his IQ score was *only* around 120-125, he was also an artist and a gifted communicator. Was he less intelligent than someone with an IQ score of 150?

What we are doing here is challenging the very definition of "intelligence". What is it really? An IQ score? Computational ability? Being able to talk your way out of a speeding ticket? Knowing how to handle crisis effectively? Arguing a convincing case before a jury? Well, maybe all of the above.

Many moons ago, Dr Robert Sternberg, now the Dean of Arts and Sciences at Tufts University in Boston, brought this very argument to the psychology community. And, to be honest, it was not exactly welcomed with open arms. He believed that intelligence comprises three facets, only one of which is measured by a typical IQ test (including the SAT and the GRE): analytical ability. The second component is creativity, and the third is practical ability, the capacity to use your analytical skills and your creativity to solve novel problems effectively. He called this the Triarchic Theory of Intelligence.

Fast-forwarding to the present, Dr Rex Jung, from the Mind Institute and the University of New Mexico in Albuquerque, published a paper earlier this year showing biochemical support for the Threshold Theory of Creativity (a necessary but not sufficient level of intelligence is needed for successful creative achievement). In a nutshell, he found that intelligence (as most people measure it today) is not enough to set a person apart and raise them to the level of genius. Creativity is the essential component that not all intelligent people possess but that geniuses require. Not all creative people are geniuses (thus the Threshold Theory), but in order to reach genius status, creativity is a necessary attribute.

Someone could have an IQ of 170, yet get lost inside of a paper bag, and not have the ability to hold a conversation with anyone other than a dog. That is not my definition of genius. We want to know what made geniuses like Einstein and Feynman so far ahead of their intelligent scientist peers, and the answer to that is creativity.

I am hoping that as more studies come out demonstrating the importance of multi-disciplinary thinking and collaboration across domains for reaching the highest levels of achievement, the science community will eventually embrace creativity research fully and see its validity in the study of successful intelligence. As a society, we already recognize the importance of creativity in innovation and in the arts, so let's take it a step further.

Give creativity the "street cred" it deserves as the defining feature that separates mere intelligence from utter genius.

Source URL: http://www.scientificblogging.com/rogue_neuron/what_makes_genius
Title: Bird Intelligence
Post by: Crafty_Dog on December 07, 2010, 04:17:59 PM
http://www.youtube.com/watch?v=efcIsve5wu8&feature=related
Title: Re: Intelligence of crows
Post by: Freki on December 08, 2010, 07:54:31 AM
Crows are amazing

[youtube]http://www.youtube.com/watch?v=NhmZBMuZ6vE[/youtube]
Title: Re: Intelligence
Post by: Crafty_Dog on December 08, 2010, 03:58:06 PM
I liked that Freki.
Title: Re: Intelligence
Post by: G M on December 08, 2010, 04:24:51 PM
I did too. It reminded me of backpacking in an isolated part of the southwest and having curious ravens surveilling me. They'd circle and study. They'd land behind trees and then stealthily hop on the ground to get a closer look. There was a definite sense of some sentient thought from them, and I'm not one for sentimental anthropomorphism.
Title: Re: Intelligence
Post by: Crafty_Dog on December 08, 2010, 07:56:33 PM
Konrad Lorenz wrote quite often of "jackdaws".  This was translated from German.  Does anyone know if this is another word for crows?  or?
Title: Re: Intelligence
Post by: G M on December 08, 2010, 08:15:41 PM
Definition of JACKDAW
1: a common black and gray bird (Corvus monedula) of Eurasia and northern Africa that is related to but smaller than the carrion crow
2: grackle (sense 1)
Title: Test your power of observation
Post by: Crafty_Dog on January 17, 2011, 09:49:13 AM
http://www.oldjoeblack.0nyx.com/thinktst.htm
Title: Re: Intelligence
Post by: Vicbowling on January 18, 2011, 04:34:32 PM
Very interesting article, but I found myself less disturbed by the Terminator-esque prediction of the future and a little concerned with the fact that the doctor was completely fine with the A.I. projecting "human" emotion so he WOULDN'T have to... anyone else find a flaw in that?!


Title: Private Intel firm
Post by: bigdog on January 24, 2011, 02:47:54 AM
http://www.stltoday.com/news/national/article_59308dcd-3092-5280-92fb-898f569504e4.html

Ousted CIA agent runs his own private operation
With U.S. funding cut, he relies on donations to fund his 'operatives' in Pakistan and Afghanistan.

Title: Re: Intelligence
Post by: Crafty_Dog on January 24, 2011, 04:49:25 AM
BD:

That is a different kind of intelligence  :lol:  May I ask you to please post that on the "Intel Matters" thread on the P&R forum?

Thank you,
Title: Re: Intelligence
Post by: bigdog on January 24, 2011, 07:46:58 AM
Whether it "matters" or not, it appears mine was lacking!  Sorry about that, Guro!
Title: Re: Intelligence
Post by: Crafty_Dog on January 25, 2011, 05:42:59 AM
No worries BD; in this context the term "intelligence" was ambiguous. :-)
Title: Et tu, Watson; Kurzweil's singularity
Post by: Crafty_Dog on February 12, 2011, 05:15:48 AM
Computer beats best humans at Jeopardy!

http://wattsupwiththat.com/2011/02/10/worth-watching-watson/
===========
On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I've Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists — they included a comedian and a former Miss America — had to guess what it was.

On the show (you can find the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.

Kurzweil then demonstrated the computer, which he built himself—a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil's age than by anything he'd actually done. They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she'd been President Lyndon Johnson's first-grade teacher.

But Kurzweil would spend much of the rest of his career working out what his demonstration meant. Creating a work of art is one of those activities we reserve for humans and humans only. It's an act of self-expression; you're not supposed to be able to do it if you don't have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.

That was Kurzweil's real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we're approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity — our bodies, our minds, our civilization — will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.

Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they're getting faster is increasing.

True? True.

So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.

If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there's no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks to play Farmville.

Probably. It's impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you'd be as smart as they would be. But there are a lot of theories about it. Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we'll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.

The difficult thing to keep sight of when you're talking about the Singularity is that even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation.

People are spending a lot of money trying to understand it. The three-year-old Singularity University, which offers inter-disciplinary courses of study for graduate students and executives, is hosted by NASA. Google was a founding sponsor; its CEO and co-founder Larry Page spoke there last year. People are attracted to the Singularity for the shock value, like an intellectual freak show, but they stay because there's more to it than they expected. And of course, in the event that it turns out to be real, it will be the most important thing to happen to human beings since the invention of language.

The Singularity isn't a wholly new idea, just newish. In 1965 the British mathematician I.J. Good described something he called an "intelligence explosion":

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

The word singularity is borrowed from astrophysics: it refers to a point in space-time — for example, inside a black hole — at which the rules of ordinary physics do not apply. In the 1980s the science-fiction novelist Vernor Vinge attached it to Good's intelligence-explosion scenario. At a NASA symposium in 1993, Vinge announced that "within 30 years, we will have the technological means to create super-human intelligence. Shortly after, the human era will be ended."

By that time Kurzweil was thinking about the Singularity too. He'd been busy since his appearance on I've Got a Secret. He'd made several fortunes as an engineer and inventor; he founded and then sold his first software company while he was still at MIT. He went on to build the first print-to-speech reading machine for the blind — Stevie Wonder was customer No. 1—and made innovations in a range of technical fields, including music synthesizers and speech recognition. He holds 39 patents and 19 honorary doctorates. In 1999 President Bill Clinton awarded him the National Medal of Technology.

But Kurzweil was also pursuing a parallel career as a futurist: he has been publishing his thoughts about the future of human and machine-kind for 20 years, most recently in The Singularity Is Near, which was a best seller when it came out in 2005. A documentary by the same name, starring Kurzweil, Tony Robbins and Alan Dershowitz, among others, was released in January. (Kurzweil is actually the subject of two current documentaries. The other one, less authorized but more informative, is called The Transcendent Man.) Bill Gates has called him "the best person I know at predicting the future of artificial intelligence."

In real life, the transcendent man is an unimposing figure who could pass for Woody Allen's even nerdier younger brother. Kurzweil grew up in Queens, N.Y., and you can still hear a trace of it in his voice. Now 62, he speaks with the soft, almost hypnotic calm of someone who gives 60 public lectures a year. As the Singularity's most visible champion, he has heard all the questions and faced down the incredulity many, many times before. He's good-natured about it. His manner is almost apologetic: I wish I could bring you less exciting news of the future, but I've looked at the numbers, and this is what they say, so what else can I tell you?

Kurzweil's interest in humanity's cyborganic destiny began about 1980 largely as a practical matter. He needed ways to measure and track the pace of technological progress. Even great inventions can fail if they arrive before their time, and he wanted to make sure that when he released his, the timing was right. "Even at that time, technology was moving quickly enough that the world was going to be different by the time you finished a project," he says. "So it's like skeet shooting—you can't shoot at the target." He knew about Moore's law, of course, which states that the number of transistors you can put on a microchip doubles about every two years. It's a surprisingly reliable rule of thumb. Kurzweil tried plotting a slightly different curve: the change over time in the amount of computing power, measured in MIPS (millions of instructions per second), that you can buy for $1,000.

As it turned out, Kurzweil's numbers looked a lot like Moore's. They doubled every couple of years. Drawn as graphs, they both made exponential curves, with their value increasing by multiples of two instead of by regular increments in a straight line. The curves held eerily steady, even when Kurzweil extended his backward through the decades of pretransistor computing technologies like relays and vacuum tubes, all the way back to 1900.

Kurzweil then ran the numbers on a whole bunch of other key technological indexes — the falling cost of manufacturing transistors, the rising clock speed of microprocessors, the plummeting price of dynamic RAM. He looked even further afield at trends in biotech and beyond—the falling cost of sequencing DNA and of wireless data service and the rising numbers of Internet hosts and nanotechnology patents. He kept finding the same thing: exponentially accelerating progress. "It's really amazing how smooth these trajectories are," he says. "Through thick and thin, war and peace, boom times and recessions." Kurzweil calls it the law of accelerating returns: technological progress happens exponentially, not linearly.

Then he extended the curves into the future, and the growth they predicted was so phenomenal, it created cognitive resistance in his mind. Exponential curves start slowly, then rocket skyward toward infinity. According to Kurzweil, we're not evolved to think in terms of exponential growth. "It's not intuitive. Our built-in predictors are linear. When we're trying to avoid an animal, we pick the linear prediction of where it's going to be in 20 seconds and what to do about it. That is actually hardwired in our brains."
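
To make the gap between linear intuition and exponential growth concrete, here is a minimal Python sketch. It is illustrative only, not Kurzweil's actual dataset: it assumes a price-performance figure that doubles every two years (the Moore's-law-style rule of thumb described above) and compares it with a straight-line projection fitted to the first year's growth.

[code]
# Illustrative only: compares linear vs. exponential extrapolation of
# computing price-performance. The starting value and the two-year
# doubling period are assumptions for this sketch, not measured data.

DOUBLING_YEARS = 2.0        # assumed doubling period (Moore's-law-style)
START_MIPS_PER_1000 = 1.0   # arbitrary starting value at year 0

def exponential(years):
    """Price-performance if it doubles every DOUBLING_YEARS."""
    return START_MIPS_PER_1000 * 2 ** (years / DOUBLING_YEARS)

def linear(years):
    """Naive straight-line projection using the growth seen in year 1."""
    slope = exponential(1) - exponential(0)
    return START_MIPS_PER_1000 + slope * years

for years in (2, 10, 20, 40):
    print(f"{years:>2} yrs:  exponential ~{exponential(years):>12,.0f}   "
          f"linear ~{linear(years):>6,.1f}")

# After 40 years the exponential curve sits at 2**20 (about a million)
# times the starting value, while the linear guess has barely moved.
[/code]

Twenty doublings in 40 years is a factor of roughly a million, which is about the ratio the piece returns to at the end when comparing today's cell phones with the MIT computers of 40 years ago.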

Here's what the exponential curves told him. We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence. Kurzweil puts the date of the Singularity—never say he's not conservative—at 2045. In that year, he estimates, given the vast increases in computing power and the vast reductions in the cost of same, the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today. 

Title: Kurzweil 2
Post by: Crafty_Dog on February 12, 2011, 05:25:45 AM


The Singularity isn't just an idea. It attracts people, and those people feel a bond with one another. Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians.

Not all of them are Kurzweilians, not by a long chalk. There's room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.

In addition to the Singularity University, which Kurzweil co-founded, there's also a Singularity Institute for Artificial Intelligence, based in San Francisco. It counts among its advisers Peter Thiel, a former CEO of PayPal and an early investor in Facebook. The institute holds an annual conference called the Singularity Summit. (Kurzweil co-founded that too.) Because of the highly interdisciplinary nature of Singularity theory, it attracts a diverse crowd. Artificial intelligence is the main event, but the sessions also cover the galloping progress of, among other fields, genetics and nanotechnology. 

At the 2010 summit, which took place in August in San Francisco, there were not just computer scientists but also psychologists, neuroscientists, nanotechnologists, molecular biologists, a specialist in wearable computers, a professor of emergency medicine, an expert on cognition in gray parrots and the professional magician and debunker James "the Amazing" Randi. The atmosphere was a curious blend of Davos and UFO convention. Proponents of seasteading—the practice, so far mostly theoretical, of establishing politically autonomous floating communities in international waters—handed out pamphlets. An android chatted with visitors in one corner.

After artificial intelligence, the most talked-about topic at the 2010 summit was life extension. Biological boundaries that most people think of as permanent and inevitable Singularitarians see as merely intractable but solvable problems. Death is one of them. Old age is an illness like any other, and what do you do with illnesses? You cure them. Like a lot of Singularitarian ideas, it sounds funny at first, but the closer you get to it, the less funny it seems. It's not just wishful thinking; there's actual science going on here.

For example, it's well known that one cause of the physical degeneration associated with aging involves telomeres, which are segments of DNA found at the ends of chromosomes. Every time a cell divides, its telomeres get shorter, and once a cell runs out of telomeres, it can't reproduce anymore and dies. But there's an enzyme called telomerase that reverses this process; it's one of the reasons cancer cells live so long. So why not treat regular non-cancerous cells with telomerase? In November, researchers at Harvard Medical School announced in Nature that they had done just that. They administered telomerase to a group of mice suffering from age-related degeneration. The damage went away. The mice didn't just get better; they got younger.

Aubrey de Grey is one of the world's best-known life-extension researchers and a Singularity Summit veteran. A British biologist with a doctorate from Cambridge and a famously formidable beard, de Grey runs a foundation called SENS, or Strategies for Engineered Negligible Senescence. He views aging as a process of accumulating damage, which he has divided into seven categories, each of which he hopes to one day address using regenerative medicine. "People have begun to realize that the view of aging being something immutable—rather like the heat death of the universe—is simply ridiculous," he says. "It's just childish. The human body is a machine that has a bunch of functions, and it accumulates various types of damage as a side effect of the normal function of the machine. Therefore in principle that damage can be repaired periodically. This is why we have vintage cars. It's really just a matter of paying attention. The whole of medicine consists of messing about with what looks pretty inevitable until you figure out how to make it not inevitable."

Kurzweil takes life extension seriously too. His father, with whom he was very close, died of heart disease at 58. Kurzweil inherited his father's genetic predisposition; he also developed Type 2 diabetes when he was 35. Working with Terry Grossman, a doctor who specializes in longevity medicine, Kurzweil has published two books on his own approach to life extension, which involves taking up to 200 pills and supplements a day. He says his diabetes is essentially cured, and although he's 62 years old from a chronological perspective, he estimates that his biological age is about 20 years younger.

But his goal differs slightly from de Grey's. For Kurzweil, it's not so much about staying healthy as long as possible; it's about staying alive until the Singularity. It's an attempted handoff. Once hyper-intelligent artificial intelligences arise, armed with advanced nanotechnology, they'll really be able to wrestle with the vastly complex, systemic problems associated with aging in humans. Alternatively, by then we'll be able to transfer our minds to sturdier vessels such as computers and robots. He and many other Singularitarians take seriously the proposition that many people who are alive today will wind up being functionally immortal.

It's an idea that's radical and ancient at the same time. In "Sailing to Byzantium," W.B. Yeats describes mankind's fleshly predicament as a soul fastened to a dying animal. Why not unfasten it and fasten it to an immortal robot instead? But Kurzweil finds that life extension produces even more resistance in his audiences than his exponential growth curves. "There are people who can accept computers being more intelligent than people," he says. "But the idea of significant changes to human longevity—that seems to be particularly controversial. People invested a lot of personal effort into certain philosophies dealing with the issue of life and death. I mean, that's the major reason we have religion."

Of course, a lot of people think the Singularity is nonsense — a fantasy, wishful thinking, a Silicon Valley version of the Evangelical story of the Rapture, spun by a man who earns his living making outrageous claims and backing them up with pseudoscience. Most of the serious critics focus on the question of whether a computer can truly become intelligent.

The entire field of artificial intelligence, or AI, is devoted to this question. But AI doesn't currently produce the kind of intelligence we associate with humans or even with talking computers in movies—HAL or C3PO or Data. Actual AIs tend to be able to master only one highly specific domain, like interpreting search queries or playing chess. They operate within an extremely specific frame of reference. They don't make conversation at parties. They're intelligent, but only if you define intelligence in a vanishingly narrow way. The kind of intelligence Kurzweil is talking about, which is called strong AI or artificial general intelligence, doesn't exist yet.

Why not? Obviously we're still waiting on all that exponentially growing computing power to get here. But it's also possible that there are things going on in our brains that can't be duplicated electronically no matter how many MIPS you throw at them. The neurochemical architecture that generates the ephemeral chaos we know as human consciousness may just be too complex and analog to replicate in digital silicon. The biologist Dennis Bray was one of the few voices of dissent at last summer's Singularity Summit. "Although biological components act in ways that are comparable to those in electronic circuits," he argued, in a talk titled "What Cells Can Do That Robots Can't," "they are set apart by the huge number of different states they can adopt. Multiple biochemical processes create chemical modifications of protein molecules, further diversified by association with distinct structures at defined locations of a cell. The resulting combinatorial explosion of states endows living systems with an almost infinite capacity to store information regarding past and present conditions and a unique capacity to prepare for future events." That makes the ones and zeros that computers trade in look pretty crude.

Underlying the practical challenges are a host of philosophical ones. Suppose we did create a computer that talked and acted in a way that was indistinguishable from a human being—in other words, a computer that could pass the Turing test. (Very loosely speaking, such a computer would be able to pass as human in a blind test.) Would that mean that the computer was sentient, the way a human being is? Or would it just be an extremely sophisticated but essentially mechanical automaton without the mysterious spark of consciousness—a machine with no ghost in it? And how would we know?

Even if you grant that the Singularity is plausible, you're still staring at a thicket of unanswerable questions. If I can scan my consciousness into a computer, am I still me? What are the geopolitics and the socioeconomics of the Singularity? Who decides who gets to be immortal? Who draws the line between sentient and nonsentient? And as we approach immortality, omniscience and omnipotence, will our lives still have meaning? By beating death, will we have lost our essential humanity?

Kurzweil admits that there's a fundamental level of risk associated with the Singularity that's impossible to refine away, simply because we don't know what a highly advanced artificial intelligence, finding itself a newly created inhabitant of the planet Earth, would choose to do. It might not feel like competing with us for resources. One of the goals of the Singularity Institute is to make sure not just that artificial intelligence develops but also that the AI is friendly. You don't have to be a super-intelligent cyborg to understand that introducing a superior life-form into your own biosphere is a basic Darwinian error.

If the Singularity is coming, these questions are going to get answers whether we like it or not, and Kurzweil thinks that trying to put off the Singularity by banning technologies is not only impossible but also unethical and probably dangerous. "It would require a totalitarian system to implement such a ban," he says. "It wouldn't work. It would just drive these technologies underground, where the responsible scientists who we're counting on to create the defenses would not have easy access to the tools."

Kurzweil is an almost inhumanly patient and thorough debater. He relishes it. He's tireless in hunting down his critics so that he can respond to them, point by point, carefully and in detail.

Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled or at least matched in power and flexibility by software running on a computer. He refuses to fall on his knees before the mystery of the human brain. "Generally speaking," he says, "the core of a disagreement I'll have with a critic is, they'll say, Oh, Kurzweil is underestimating the complexity of reverse-engineering of the human brain or the complexity of biology. But I don't believe I'm underestimating the challenge. I think they're underestimating the power of exponential growth."

This position doesn't make Kurzweil an outlier, at least among Singularitarians. Plenty of people make more-extreme predictions. Since 2005 the neuroscientist Henry Markram has been running an ambitious initiative at the Brain Mind Institute of the Ecole Polytechnique in Lausanne, Switzerland. It's called the Blue Brain project, and it's an attempt to create a neuron-by-neuron simulation of a mammalian brain, using IBM's Blue Gene super-computer. So far, Markram's team has managed to simulate one neocortical column from a rat's brain, which contains about 10,000 neurons. Markram has said that he hopes to have a complete virtual human brain up and running in 10 years. (Even Kurzweil sniffs at this. If it worked, he points out, you'd then have to educate the brain, and who knows how long that would take?)
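
For readers wondering what "simulating a neuron" involves computationally, here is a deliberately tiny sketch of a single leaky integrate-and-fire neuron, a textbook toy model. It is only meant to give the flavor of the idea: every constant below is an arbitrary choice for this sketch, and a project like Blue Brain uses far more detailed biophysical models of each cell, simulated by the thousands on a supercomputer.

[code]
# A toy, single leaky integrate-and-fire neuron (Euler integration).
# All constants are arbitrary choices for the sketch; real projects like
# Blue Brain use far more detailed biophysical models of each cell.

DT = 0.1          # time step, ms
TAU = 10.0        # membrane time constant, ms
V_REST = -65.0    # resting potential, mV
V_THRESH = -50.0  # spike threshold, mV
V_RESET = -70.0   # reset potential after a spike, mV

def simulate(input_current, steps=1000):
    """Integrate the membrane potential and record spike times (ms)."""
    v = V_REST
    spikes = []
    for step in range(steps):
        # Leak toward rest plus injected current.
        dv = (-(v - V_REST) + input_current) / TAU
        v += dv * DT
        if v >= V_THRESH:
            spikes.append(step * DT)
            v = V_RESET
    return spikes

spike_times = simulate(input_current=20.0)
print(f"{len(spike_times)} spikes in 100 ms; first at {spike_times[0]:.1f} ms")
[/code]

Scaling that kind of state update from one toy unit to billions of detailed, interconnected cells is the computational mountain Markram's timeline presupposes.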

By definition, the future beyond the Singularity is not knowable by our linear, chemical, animal brains, but Kurzweil is teeming with theories about it. He positively flogs himself to think bigger and bigger; you can see him kicking against the confines of his aging organic hardware. "When people look at the implications of ongoing exponential growth, it gets harder and harder to accept," he says. "So you get people who really accept, yes, things are progressing exponentially, but they fall off the horse at some point because the implications are too fantastic. I've tried to push myself to really look."

In Kurzweil's future, biotechnology and nanotechnology give us the power to manipulate our bodies and the world around us at will, at the molecular level. Progress hyperaccelerates, and every hour brings a century's worth of scientific breakthroughs. We ditch Darwin and take charge of our own evolution. The human genome becomes just so much code to be bug-tested and optimized and, if necessary, rewritten. Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all. Kurzweil hopes to bring his dead father back to life.

We can scan our consciousnesses into computers and enter a virtual existence or swap our bodies for immortal robots and light out for the edges of space as intergalactic godlings. Within a matter of centuries, human intelligence will have re-engineered and saturated all the matter in the universe. This is, Kurzweil believes, our destiny as a species.

Or it isn't. When the big questions get answered, a lot of the action will happen where no one can see it, deep inside the black silicon brains of the computers, which will either bloom bit by bit into conscious minds or just continue in ever more brilliant and powerful iterations of nonsentience.

But as for the minor questions, they're already being decided all around us and in plain sight. The more you read about the Singularity, the more you start to see it peeking out at you, coyly, from unexpected directions. Five years ago we didn't have 600 million humans carrying out their social lives over a single electronic network. Now we have Facebook. Five years ago you didn't see people double-checking what they were saying and where they were going, even as they were saying it and going there, using handheld network-enabled digital prosthetics. Now we have iPhones. Is it an unimaginable step to take the iPhones out of our hands and put them into our skulls?

Already 30,000 patients with Parkinson's disease have neural implants. Google is experimenting with computers that can drive cars. There are more than 2,000 robots fighting in Afghanistan alongside the human troops. This month a game show will once again figure in the history of artificial intelligence, but this time the computer will be the guest: an IBM super-computer nicknamed Watson will compete on Jeopardy! Watson runs on 90 servers and takes up an entire room, and in a practice match in January it finished ahead of two former champions, Ken Jennings and Brad Rutter. It got every question it answered right, but much more important, it didn't need help understanding the questions (or, strictly speaking, the answers), which were phrased in plain English. Watson isn't strong AI, but if strong AI happens, it will arrive gradually, bit by bit, and this will have been one of the bits.

A hundred years from now, Kurzweil and de Grey and the others could be the 22nd century's answer to the Founding Fathers — except unlike the Founding Fathers, they'll still be alive to get credit — or their ideas could look as hilariously retro and dated as Disney's Tomorrowland. Nothing gets old as fast as the future.

But even if they're dead wrong about the future, they're right about the present. They're taking the long view and looking at the big picture. You may reject every specific article of the Singularitarian charter, but you should admire Kurzweil for taking the future seriously. Singularitarianism is grounded in the idea that change is real and that humanity is in charge of its own fate and that history might not be as simple as one damn thing after another. Kurzweil likes to point out that your average cell phone is about a millionth the size of, a millionth the price of and a thousand times more powerful than the computer he had at MIT 40 years ago. Flip that forward 40 years and what does the world look like? If you really want to figure that out, you have to think very, very far outside the box. Or maybe you have to think further inside it than anyone ever has before.
Title: Memory training
Post by: Crafty_Dog on February 20, 2011, 11:18:18 AM

http://www.nytimes.com/interactive/2011/02/20/magazine/mind-secrets.html?nl=todaysheadlines&emc=tha210
Title: WSJ: Watson
Post by: Crafty_Dog on March 14, 2011, 10:58:22 AM


By STEPHEN BAKER
In the weeks since IBM's computer, Watson, thrashed two flesh-and-blood champions in the quiz show "Jeopardy!," human intelligence has been punching back—at least on blogs and opinion pages. Watson doesn't "know" anything, experts say. It doesn't laugh at jokes, cannot carry on a conversation, has no sense of self, and commits bloopers no human would consider. (Toronto, a U.S. city?) What's more, it's horribly inefficient, requiring a roomful of computers to match what we carry between our ears. And it probably would not have won without its inhuman speed on the buzzer.

This is all enough to make you feel reinvigorated to be human. But focusing on Watson's shortcomings misses the point. It risks distracting people from the transformation that Watson all but announced on its "Jeopardy!" debut: These question-answering machines will soon be working alongside us in offices and laboratories, and forcing us to make adjustments in what we learn and how we think. Watson is an early sighting of a highly disruptive force.

The key is to regard these computers not as human wannabes but rather as powerful tools, ones that can handle jobs currently held by people. The "intelligence" of the tools matters little. What counts is the information they deliver.

In our history of making tools, we have long adjusted to the disruptions they cause. Imagine an Italian town in the 17th century. Perhaps there's one man who has a special sense for the weather. Let's call him Luigi. Using his magnificent brain, he picks up on signals—changes in the wind, certain odors, perhaps the flight paths of birds or noises coming from the barn. And he spreads word through the town that rain will be coming in two days, or that a cold front might freeze the crops. Luigi is a valuable member of society.

Along comes a traveling vendor who carries a new instrument invented in 1643 by Evangelista Torricelli. It's a barometer, and it predicts the weather about as well as Luigi. It's certainly not as smart as him, if it can be called smart at all. It has no sense of self, is deaf to the animals in the barn, blind to the flight patterns of birds. Yet it comes up with valuable information.

In a world with barometers, Luigi and similar weather savants must find other work for their fabulous minds. Perhaps using the new tool, they can deepen their analysis of weather patterns, keep careful records and then draw conclusions about optimal farming techniques. They might become consultants. Maybe some of them drop out of the weather business altogether. The new tool creates both displacement and economic opportunity. It forces people to reconsider how they use their heads.

The same is true of Watson and the coming generation of question-answering machines. We can carry on interesting discussions about how "smart" they are or aren't, but that's academic. They make sense of complex questions in English and fetch answers, scoring each one for the machines' level of confidence in it. When asked if Watson can "think," David Ferrucci, IBM's chief scientist on the "Jeopardy!" team, responds: "Can a submarine swim?"
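
Baker's description of how these systems behave, fetching candidate answers and scoring each one for confidence, can be sketched in a few lines. The candidate scores and the threshold below are invented for illustration; this shows only the general shape of confidence-gated answering (answer only when the best score clears a bar), not IBM's actual implementation.

[code]
# Illustrative sketch of confidence-gated answering: score candidates,
# return the best only if it clears a threshold, otherwise abstain
# (don't "buzz in"). The scores and threshold here are made up; this is
# not how Watson is actually implemented.

from typing import Dict, Optional

def answer(question: str,
           candidates: Dict[str, float],
           threshold: float = 0.5) -> Optional[str]:
    """Return the highest-confidence candidate, or None to abstain."""
    if not candidates:
        return None
    best, confidence = max(candidates.items(), key=lambda kv: kv[1])
    return best if confidence >= threshold else None

# Hypothetical candidates with confidence scores already attached.
scores = {"Chicago": 0.71, "Toronto": 0.14, "Detroit": 0.09}
print(answer("Its largest airport is named for a WWII hero...", scores))
# -> Chicago; if every score fell below the threshold, the function
#    would return None, i.e., decline to answer.
[/code]

That threshold is the part that mattered on air: a system built this way declines to answer, or to buzz in, when none of its candidates scores well enough.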

As these computers make their way into law offices, pharmaceutical labs and hospitals, people who currently make a living by answering questions must adjust. They'll have to add value in ways that machines cannot. This raises questions not just for individuals but for entire societies. How do we educate students for a labor market in which machines answer a growing percentage of the questions? How do we create curricula for uniquely human skills, such as generating original ideas, cracking jokes, carrying on meaningful dialogue? How can such lessons be scored and standardized?

These are the challenges before us. They're similar, in a sense, to what we've been facing with globalization. Again we will find ourselves grappling with a new colleague and competitor. This time around, it's a machine. We should scrutinize that tool, focusing on the questions it fails to answer. Its struggles represent a road map for our own cognitive migration. We must go where computers like Watson cannot.

Mr. Baker is the author of "Final Jeopardy—Man vs. Machine and the Quest to Know Everything" (Houghton Mifflin Harcourt, 2011).

Title: Are people getting dumber?
Post by: Crafty_Dog on February 27, 2012, 07:21:59 PM


http://www.nytimes.com/roomfordebate/2012/02/26/are-people-getting-dumber/?nl=todaysheadlines&emc=thab1
Title: Re: Are people getting dumber?
Post by: G M on February 27, 2012, 07:29:47 PM

Look at who we have as president. Look at those who think if we'd just tax the rich more, all the economic badness would go away. Stupid is growing, and California is ground zero for its spread.
Title: Electrical Brain Stimulation
Post by: Crafty_Dog on May 15, 2012, 10:31:05 AM

http://theweek.com/article/index/226196/how-electrical-brain-stimulation-can-change-the-way-we-think
ESSAY
How electrical brain stimulation can change the way we think
After my brain was jolted, says Sally Adee, I had a near-spiritual experience
PUBLISHED MARCH 30, 2012, AT 10:01 AM

Researchers have found that "transcranial direct current stimulation" can more than double the rate at which people learn a wide range of tasks, such as object recognition, math skills, and marksmanship.
HAVE YOU EVER wanted to take a vacation from your own head? You could do it easily enough with liberal applications of alcohol or hallucinogens, but that's not the kind of vacation I'm talking about. What if you could take a very specific vacation only from the stuff that makes it painful to be you: the sneering inner monologue that insists you're not capable enough or smart enough or pretty enough, or whatever hideous narrative rides you. Now that would be a vacation. You'd still be you, but you'd be able to navigate the world without the emotional baggage that now drags on your every decision. Can you imagine what that would feel like?

Late last year, I got the chance to find out, in the course of investigating a story for New Scientist about how researchers are using neurofeedback and electrical brain stimulation to accelerate learning. What I found was that electricity might be the most powerful drug I've ever used in my life.

It used to be just plain old chemistry that had neuroscientists gnawing their fingernails about the ethics of brain enhancement. As Adderall, Ritalin, and other cognitive enhancing drugs gain widespread acceptance as tools to improve your everyday focus, even the stigma of obtaining them through less-than-legal channels appears to be disappearing. People will overlook a lot of moral gray areas in the quest to juice their brain power.

But until recently, you were out of luck if you wanted to do that without taking drugs that might be addictive, habit-forming, or associated with unfortunate behavioral side effects. Over the past few years, however, it's become increasingly clear that applying an electrical current to your head confers similar benefits.

U.S. military researchers have had great success using "transcranial direct current stimulation" (tDCS) — in which they hook you up to what's essentially a 9-volt battery and let the current flow through your brain. After a few years of lab testing, they've found that tDCS can more than double the rate at which people learn a wide range of tasks, such as object recognition, math skills, and marksmanship.

We don't yet have a commercially available "thinking cap," but we will soon. So the research community has begun to ask: What are the ethics of battery-operated cognitive enhancement? Recently, a group of Oxford neuroscientists released a cautionary statement about the ethics of brain boosting; then the U.K.'s Royal Society released a report that questioned the use of tDCS for military applications. Is brain boosting a fair addition to the cognitive enhancement arms race? Will it create a Morlock/Eloi–like social divide, where the rich can afford to be smarter and everyone else will be left behind? Will Tiger Moms force their lazy kids to strap on a zappity helmet during piano practice?

After trying it myself, I have different questions. To make you understand, I am going to tell you how it felt. The experience wasn't simply about the easy pleasure of undeserved expertise. For me, it was a near-spiritual experience. When a nice neuroscientist named Michael Weisend put the electrodes on me, what defined the experience was not feeling smarter or learning faster: The thing that made the earth drop out from under my feet was that for the first time in my life, everything in my head finally shut up.

The experiment I underwent was accelerated marksmanship training, using a training simulation that the military uses. I spent a few hours learning how to shoot a modified M4 close-range assault rifle, first without tDCS and then with. Without it I was terrible, and when you're terrible at something, all you can do is obsess about how terrible you are. And how much you want to stop doing the thing you are terrible at.

Then this happened:

THE 20 MINUTES I spent hitting targets while electricity coursed through my brain were far from transcendent. I only remember feeling like I'd just had an excellent cup of coffee, but without the caffeine jitters. I felt clear-headed and like myself, just sharper. Calmer. Without fear and without doubt. From there on, I just spent the time waiting for a problem to appear so that I could solve it.

It was only when they turned off the current that I grasped what had just happened. Relieved of the minefield of self-doubt that constitutes my basic personality, I was a hell of a shot. And I can't tell you how stunning it was to suddenly understand just how much of a drag that inner cacophony is on my ability to navigate life and basic tasks.

It's possibly the world's biggest cliché that we're our own worst enemies. In yoga, they tell you that you need to learn to get out of your own way. Practices like yoga are meant to help you exhume the person you are without all the geologic layers of narrative and cross talk that are constantly chattering in your brain. I think eventually they just become background noise. We stop hearing them consciously, but believe me, we listen to them just the same.

My brain without self-doubt was a revelation. There was suddenly this incredible silence in my head; I've experienced something close to it during two-hour Iyengar yoga classes, or at the end of a 10k, but the fragile peace in my head would be shattered almost the second I set foot outside the calm of the studio. I had certainly never experienced instant Zen in the frustrating middle of something I was terrible at.

WHAT HAD HAPPENED inside my skull? One theory is that the mild electrical shock may depolarize the neuronal membranes in the part of the brain associated with object recognition, making the cells more excitable and responsive to inputs. Like many other neuroscientists working with tDCS, Weisend thinks this accelerates the formation of new neural pathways during the time that someone practices a skill, making it easier to get into the "zone." The method he was using on me boosted the speed with which wannabe snipers could detect a threat by a factor of 2.3.

Another possibility is that the electrodes somehow reduce activity in the prefrontal cortex — the area of the brain used in critical thought, says psychologist Mihaly Csikszentmihalyi of Claremont Graduate University in California. And critical thought, some neuroscientists believe, is muted during periods of intense Zen-like concentration. It sounds counterintuitive, but silencing self-critical thoughts might allow more automatic processes to take hold, which would in turn produce that effortless feeling of flow.

With the electrodes on, my constant self-criticism virtually disappeared, I hit every one of the targets, and there were no unpleasant side effects afterwards. The bewitching silence of the tDCS lasted, gradually diminishing over a period of about three days. The inevitable return of self-doubt and inattention was disheartening, to say the least.

I HOPE YOU can sympathize with me when I tell you that the thing I wanted most acutely for the weeks following my experience was to go back and strap on those electrodes. I also started to have a lot of questions. Who was I apart from the angry bitter gnomes that populate my mind and drive me to failure because I'm too scared to try? And where did those voices come from? Some of them are personal history, like the caustically dismissive 7th grade science teacher who advised me to become a waitress. Some of them are societal, like the hateful lady-mag voices that bully me every time I look in a mirror. An invisible narrative informs all my waking decisions in ways I can't even keep track of.

What would a world look like in which we all wore little tDCS headbands that would keep us in a primed, confident state, free of all doubts and fears? I'd wear one at all times and have two in my backpack ready in case something happened to the first one.

I think the ethical questions we should be asking about tDCS are much more subtle than the ones we've been asking about cognitive enhancement. Because how you define "cognitive enhancement" frames the debate about its ethics.

If you told me tDCS would allow someone to study twice as fast for the bar exam, I might be a little leery because now I have visions of rich daddies paying for Junior's thinking cap. Neuroscientists like Roy Hamilton have termed this kind of application "cosmetic neuroscience," which implies a kind of "First World problem" — frivolity.

But now think of a different application — could school-age girls use the zappy cap while studying math to drown out the voices that tell them they can't do math because they're girls? How many studies have found a link between invasive stereotypes and poor test performance?

And then, finally, the main question: What role do doubt and fear play in our lives if their eradication actually causes so many improvements? Do we make more ethical decisions when we listen to our inner voices of self-doubt or when we're freed from them? If we all wore these caps, would the world be a better place?

And if tDCS headwear were to become widespread, would the same 20 minutes with a 2 milliamp current always deliver the same effects, or would you need to up your dose like you do with some other drugs?

Because, to steal a great point from an online commenter, pretty soon, a 9-volt battery may no longer be enough.


©2012 by Sally Adee, reprinted by permission of New Scientist. The full article can be found at NewScientist.com.
Title: The Hazards of Confidence
Post by: Crafty_Dog on May 26, 2012, 12:41:41 PM
October 19, 2011

Don’t Blink! The Hazards of Confidence
By DANIEL KAHNEMAN

Many decades ago I spent what seemed like a great deal of time under a scorching sun, watching groups of sweaty soldiers as they solved a problem. I was doing my national service in the Israeli Army at the time. I had completed an undergraduate degree in psychology, and after a year as an infantry officer, I was assigned to the army’s Psychology Branch, where one of my occasional duties was to help evaluate candidates for officer training. We used methods that were developed by the British Army in World War II.

One test, called the leaderless group challenge, was conducted on an obstacle field. Eight candidates, strangers to one another, with all insignia of rank removed and only numbered tags to identify them, were instructed to lift a long log from the ground and haul it to a wall about six feet high. There, they were told that the entire group had to get to the other side of the wall without the log touching either the ground or the wall, and without anyone touching the wall. If any of these things happened, they were to acknowledge it and start again.

A common solution was for several men to reach the other side by crawling along the log as the other men held it up at an angle, like a giant fishing rod. Then one man would climb onto another’s shoulder and tip the log to the far side. The last two men would then have to jump up at the log, now suspended from the other side by those who had made it over, shinny their way along its length and then leap down safely once they crossed the wall. Failure was common at this point, which required starting over.

As a colleague and I monitored the exercise, we made note of who took charge, who tried to lead but was rebuffed, how much each soldier contributed to the group effort. We saw who seemed to be stubborn, submissive, arrogant, patient, hot-tempered, persistent or a quitter. We sometimes saw competitive spite when someone whose idea had been rejected by the group no longer worked very hard. And we saw reactions to crisis: who berated a comrade whose mistake caused the whole group to fail, who stepped forward to lead when the exhausted team had to start over. Under the stress of the event, we felt, each man’s true nature revealed itself in sharp relief.

After watching the candidates go through several such tests, we had to summarize our impressions of the soldiers’ leadership abilities with a grade and determine who would be eligible for officer training. We spent some time discussing each case and reviewing our impressions. The task was not difficult, because we had already seen each of these soldiers’ leadership skills. Some of the men looked like strong leaders, others seemed like wimps or arrogant fools, others mediocre but not hopeless. Quite a few appeared to be so weak that we ruled them out as officer candidates. When our multiple observations of each candidate converged on a coherent picture, we were completely confident in our evaluations and believed that what we saw pointed directly to the future. The soldier who took over when the group was in trouble and led the team over the wall was a leader at that moment. The obvious best guess about how he would do in training, or in combat, was that he would be as effective as he had been at the wall. Any other prediction seemed inconsistent with what we saw.

Because our impressions of how well each soldier performed were generally coherent and clear, our formal predictions were just as definite. We rarely experienced doubt or conflicting impressions. We were quite willing to declare: “This one will never make it,” “That fellow is rather mediocre, but should do O.K.” or “He will be a star.” We felt no need to question our forecasts, moderate them or equivocate. If challenged, however, we were fully prepared to admit, “But of course anything could happen.”

We were willing to make that admission because, as it turned out, despite our certainty about the potential of individual candidates, our forecasts were largely useless. The evidence was overwhelming. Every few months we had a feedback session in which we could compare our evaluations of future cadets with the judgments of their commanders at the officer-training school. The story was always the same: our ability to predict performance at the school was negligible. Our forecasts were better than blind guesses, but not by much.

We were downcast for a while after receiving the discouraging news. But this was the army. Useful or not, there was a routine to be followed, and there were orders to be obeyed. Another batch of candidates would arrive the next day. We took them to the obstacle field, we faced them with the wall, they lifted the log and within a few minutes we saw their true natures revealed, as clearly as ever. The dismal truth about the quality of our predictions had no effect whatsoever on how we evaluated new candidates and very little effect on the confidence we had in our judgments and predictions.

I thought that what was happening to us was remarkable. The statistical evidence of our failure should have shaken our confidence in our judgments of particular candidates, but it did not. It should also have caused us to moderate our predictions, but it did not. We knew as a general fact that our predictions were little better than random guesses, but we continued to feel and act as if each particular prediction was valid. I was reminded of visual illusions, which remain compelling even when you know that what you see is false. I was so struck by the analogy that I coined a term for our experience: the illusion of validity.

I had discovered my first cognitive fallacy.

Decades later, I can see many of the central themes of my thinking about judgment in that old experience. One of these themes is that people who face a difficult question often answer an easier one instead, without realizing it. We were required to predict a soldier’s performance in officer training and in combat, but we did so by evaluating his behavior over one hour in an artificial situation. This was a perfect instance of a general rule that I call WYSIATI, “What you see is all there is.” We had made up a story from the little we knew but had no way to allow for what we did not know about the individual’s future, which was almost everything that would actually matter. When you know as little as we did, you should not make extreme predictions like “He will be a star.” The stars we saw on the obstacle field were most likely accidental flickers, in which a coincidence of random events — like who was near the wall — largely determined who became a leader. Other events — some of them also random — would determine later success in training and combat.

You may be surprised by our failure: it is natural to expect the same leadership ability to manifest itself in various situations. But the exaggerated expectation of consistency is a common error. We are prone to think that the world is more regular and predictable than it really is, because our memory automatically and continuously maintains a story about what is going on, and because the rules of memory tend to make that story as coherent as possible and to suppress alternatives. Fast thinking is not prone to doubt.

The confidence we experience as we make a judgment is not a reasoned evaluation of the probability that it is right. Confidence is a feeling, one determined mostly by the coherence of the story and by the ease with which it comes to mind, even when the evidence for the story is sparse and unreliable. The bias toward coherence favors overconfidence. An individual who expresses high confidence probably has a good story, which may or may not be true.

I coined the term “illusion of validity” because the confidence we had in judgments about individual soldiers was not affected by a statistical fact we knew to be true — that our predictions were unrelated to the truth. This is not an isolated observation. When a compelling impression of a particular event clashes with general knowledge, the impression commonly prevails. And this goes for you, too. The confidence you will experience in your future judgments will not be diminished by what you just read, even if you believe every word.

I first visited a Wall Street firm in 1984. I was there with my longtime collaborator Amos Tversky, who died in 1996, and our friend Richard Thaler, now a guru of behavioral economics. Our host, a senior investment manager, had invited us to discuss the role of judgment biases in investing. I knew so little about finance at the time that I had no idea what to ask him, but I remember one exchange. “When you sell a stock,” I asked him, “who buys it?” He answered with a wave in the vague direction of the window, indicating that he expected the buyer to be someone else very much like him. That was odd: because most buyers and sellers know that they have the same information as one another, what made one person buy and the other sell? Buyers think the price is too low and likely to rise; sellers think the price is high and likely to drop. The puzzle is why buyers and sellers alike think that the current price is wrong.

Most people in the investment business have read Burton Malkiel’s wonderful book “A Random Walk Down Wall Street.” Malkiel’s central idea is that a stock’s price incorporates all the available knowledge about the value of the company and the best predictions about the future of the stock. If some people believe that the price of a stock will be higher tomorrow, they will buy more of it today. This, in turn, will cause its price to rise. If all assets in a market are correctly priced, no one can expect either to gain or to lose by trading.

We now know, however, that the theory is not quite right. Many individual investors lose consistently by trading, an achievement that a dart-throwing chimp could not match. The first demonstration of this startling conclusion was put forward by Terry Odean, a former student of mine who is now a finance professor at the University of California, Berkeley.

Odean analyzed the trading records of 10,000 brokerage accounts of individual investors over a seven-year period, allowing him to identify all instances in which an investor sold one stock and soon afterward bought another stock. By these actions the investor revealed that he (most of the investors were men) had a definite idea about the future of two stocks: he expected the stock that he bought to do better than the one he sold.

To determine whether those appraisals were well founded, Odean compared the returns of the two stocks over the following year. The results were unequivocally bad. On average, the shares investors sold did better than those they bought, by a very substantial margin: 3.3 percentage points per year, in addition to the significant costs of executing the trades. Some individuals did much better, others did much worse, but the large majority of individual investors would have done better by taking a nap rather than by acting on their ideas. In a paper titled “Trading Is Hazardous to Your Wealth,” Odean and his colleague Brad Barber showed that, on average, the most active traders had the poorest results, while those who traded the least earned the highest returns. In another paper, “Boys Will Be Boys,” they reported that men act on their useless ideas significantly more often than women do, and that as a result women achieve better investment results than men.

Of course, there is always someone on the other side of a transaction; in general, it’s a financial institution or professional investor, ready to take advantage of the mistakes that individual traders make. Further research by Barber and Odean has shed light on these mistakes. Individual investors like to lock in their gains; they sell “winners,” stocks whose prices have gone up, and they hang on to their losers. Unfortunately for them, in the short run going forward recent winners tend to do better than recent losers, so individuals sell the wrong stocks. They also buy the wrong stocks. Individual investors predictably flock to stocks in companies that are in the news. Professional investors are more selective in responding to news. These findings provide some justification for the label of “smart money” that finance professionals apply to themselves.

Although professionals are able to extract a considerable amount of wealth from amateurs, few stock pickers, if any, have the skill needed to beat the market consistently, year after year. The diagnostic for the existence of any skill is the consistency of individual differences in achievement. The logic is simple: if individual differences in any one year are due entirely to luck, the ranking of investors and funds will vary erratically and the year-to-year correlation will be zero. Where there is skill, however, the rankings will be more stable. The persistence of individual differences is the measure by which we confirm the existence of skill among golfers, orthodontists or speedy toll collectors on the turnpike.

Mutual funds are run by highly experienced and hard-working professionals who buy and sell stocks to achieve the best possible results for their clients. Nevertheless, the evidence from more than 50 years of research is conclusive: for a large majority of fund managers, the selection of stocks is more like rolling dice than like playing poker. At least two out of every three mutual funds underperform the overall market in any given year.

More important, the year-to-year correlation among the outcomes of mutual funds is very small, barely different from zero. The funds that were successful in any given year were mostly lucky; they had a good roll of the dice. There is general agreement among researchers that this is true for nearly all stock pickers, whether they know it or not — and most do not. The subjective experience of traders is that they are making sensible, educated guesses in a situation of great uncertainty. In highly efficient markets, however, educated guesses are not more accurate than blind guesses.

Some years after my introduction to the world of finance, I had an unusual opportunity to examine the illusion of skill up close. I was invited to speak to a group of investment advisers in a firm that provided financial advice and other services to very wealthy clients. I asked for some data to prepare my presentation and was granted a small treasure: a spreadsheet summarizing the investment outcomes of some 25 anonymous wealth advisers, for eight consecutive years. The advisers’ scores for each year were the main determinant of their year-end bonuses. It was a simple matter to rank the advisers by their performance and to answer a question: Did the same advisers consistently achieve better returns for their clients year after year? Did some advisers consistently display more skill than others?

To find the answer, I computed the correlations between the rankings of advisers in different years, comparing Year 1 with Year 2, Year 1 with Year 3 and so on up through Year 7 with Year 8. That yielded 28 correlations, one for each pair of years. While I was prepared to find little year-to-year consistency, I was still surprised to find that the average of the 28 correlations was .01. In other words, zero. The stability that would indicate differences in skill was not to be found. The results resembled what you would expect from a dice-rolling contest, not a game of skill.
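
To make the logic of that check concrete, here is a minimal sketch in Python (mine, not Kahneman's actual data or analysis) that simulates 25 advisers over eight years under the pure-luck assumption and then computes the same 28 pairwise year-to-year ranking correlations. Their average hovers near zero, which is exactly the dice-rolling signature he describes.

import numpy as np

rng = np.random.default_rng(0)
n_advisers, n_years = 25, 8
outcomes = rng.normal(size=(n_advisers, n_years))      # luck-only yearly results, no skill component

def ranks(scores):
    """Rank the advisers within one year (1 = best)."""
    return np.argsort(np.argsort(-scores)) + 1

yearly_ranks = np.column_stack([ranks(outcomes[:, y]) for y in range(n_years)])

# All 28 pairwise correlations: Year 1 vs Year 2, Year 1 vs Year 3, ..., Year 7 vs Year 8.
pairs = [(i, j) for i in range(n_years) for j in range(i + 1, n_years)]
corrs = [np.corrcoef(yearly_ranks[:, i], yearly_ranks[:, j])[0, 1] for i, j in pairs]

print(len(corrs))                       # 28
print(round(float(np.mean(corrs)), 3))  # close to zero: luck alone produces no stable ranking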

No one in the firm seemed to be aware of the nature of the game that its stock pickers were playing. The advisers themselves felt they were competent professionals performing a task that was difficult but not impossible, and their superiors agreed. On the evening before the seminar, Richard Thaler and I had dinner with some of the top executives of the firm, the people who decide on the size of bonuses. We asked them to guess the year-to-year correlation in the rankings of individual advisers. They thought they knew what was coming and smiled as they said, “not very high” or “performance certainly fluctuates.” It quickly became clear, however, that no one expected the average correlation to be zero.

What we told the directors of the firm was that, at least when it came to building portfolios, the firm was rewarding luck as if it were skill. This should have been shocking news to them, but it was not. There was no sign that they disbelieved us. How could they? After all, we had analyzed their own results, and they were certainly sophisticated enough to appreciate their implications, which we politely refrained from spelling out. We all went on calmly with our dinner, and I am quite sure that both our findings and their implications were quickly swept under the rug and that life in the firm went on just as before. The illusion of skill is not only an individual aberration; it is deeply ingrained in the culture of the industry. Facts that challenge such basic assumptions — and thereby threaten people’s livelihood and self-esteem — are simply not absorbed. The mind does not digest them. This is particularly true of statistical studies of performance, which provide general facts that people will ignore if they conflict with their personal experience.

The next morning, we reported the findings to the advisers, and their response was equally bland. Their personal experience of exercising careful professional judgment on complex problems was far more compelling to them than an obscure statistical result. When we were done, one executive I dined with the previous evening drove me to the airport. He told me, with a trace of defensiveness, “I have done very well for the firm, and no one can take that away from me.” I smiled and said nothing. But I thought, privately: Well, I took it away from you this morning. If your success was due mostly to chance, how much credit are you entitled to take for it?

We often interact with professionals who exercise their judgment with evident confidence, sometimes priding themselves on the power of their intuition. In a world rife with illusions of validity and skill, can we trust them? How do we distinguish the justified confidence of experts from the sincere overconfidence of professionals who do not know they are out of their depth? We can believe an expert who admits uncertainty but cannot take expressions of high confidence at face value. As I first learned on the obstacle field, people come up with coherent stories and confident predictions even when they know little or nothing. Overconfidence arises because people are often blind to their own blindness.

True intuitive expertise is learned from prolonged experience with good feedback on mistakes. You are probably an expert in guessing your spouse’s mood from one word on the telephone; chess players find a strong move in a single glance at a complex position; and true legends of instant diagnoses are common among physicians. To know whether you can trust a particular intuitive judgment, there are two questions you should ask: Is the environment in which the judgment is made sufficiently regular to enable predictions from the available evidence? The answer is yes for diagnosticians, no for stock pickers. Do the professionals have an adequate opportunity to learn the cues and the regularities? The answer here depends on the professionals’ experience and on the quality and speed with which they discover their mistakes. Anesthesiologists have a better chance to develop intuitions than radiologists do. Many of the professionals we encounter easily pass both tests, and their off-the-cuff judgments deserve to be taken seriously. In general, however, you should not take assertive and confident people at their own evaluation unless you have independent reason to believe that they know what they are talking about. Unfortunately, this advice is difficult to follow: overconfident professionals sincerely believe they have expertise, act as experts and look like experts. You will have to struggle to remind yourself that they may be in the grip of an illusion.

Daniel Kahneman is emeritus professor of psychology and of public affairs at Princeton University and a winner of the 2002 Nobel Prize in Economics. This article is adapted from his book “Thinking, Fast and Slow,” out this month from Farrar, Straus & Giroux.
http://www.nytimes.com/2011/10/23/magazine/dont-blink-the-hazards-of-confidence.html?pagewanted=all
Title: Golden Balls variation of Prisoner's Dilemma game theory
Post by: Crafty_Dog on June 30, 2012, 01:00:55 PM


http://www.businessinsider.com/golden-balls-game-theory-2012-4
Title: Gorillas dismantle snares
Post by: Crafty_Dog on August 08, 2012, 09:12:43 PM
http://www.redorbit.com/news/science/1112661209/young-gorillas-observed-dismantling-poacher-snares/
Young Gorillas Observed Dismantling Poacher Snares
July 23, 2012
 

Juvenile gorillas from the Kuryama group dismantle a snare in Rwanda's Volcanoes National Park Credit: Dian Fossey Gorilla Fund International


In what can only be described as an impassioned effort to save their own kind from the hand of poachers, two juvenile mountain gorillas have been observed searching out and dismantling manmade traps and snares in their Rwandan forest home, according to a group studying the majestic creatures.

Conservationists working for the Dian Fossey Gorilla Fund International were stunned when they saw Dukore and Rwema, two brave young mountain gorillas, destroying a trap, similar to ones that snared and killed a member of their family less than a week before. Bush-meat hunters set thousands of traps throughout the forests of Rwanda, hoping to catch antelope and other species, but sometimes they capture apes as well.

In an interview with Mark Prigg at The Daily Mail, Erika Archibald, a spokesperson for the Gorilla Fund, said that John Ndayambaje, a tracker for the group, was conducting his regular rounds when he spotted a snare. As he bent down to dismantle it, a silverback from the group rushed him and made a grunting noise that is considered a warning call. A few moments later the two youngsters Dukore and Rwema rushed up to the snare and began to dismantle it on their own.

Then, seconds after destroying the one trap, Archibald continued, Ndayambaje witnessed the pair, along with a third juvenile named Tetero, move to another and dismantle that one as well, one that he had not noticed beforehand. He stood there in amazement.

“We have quite a long record of seeing silverbacks dismantle snares,” Archibald told Prigg. “But we had never seen it passed on to youngsters like that.”  And the youngsters moved “with such speed and purpose and such clarity … knowing,” she added. “This is absolutely the first time that we’ve seen juveniles doing that … I don’t know of any other reports in the world of juveniles destroying snares,” Veronica Vecellio, gorilla program coordinator at the Dian Fossey Gorilla Fund’s Karisoke Research Center, told National Geographic.

Every day trackers from the Karisoke center scour the forest for snares, dismantling any they find in order to protect the endangered mountain gorillas, which the International Fund for the Conservation of Nature (IUCN) says face “a very high risk of extinction in the wild.”

Adults generally have enough strength to free themselves from the snares, but juveniles usually do not, and often die as a result of snare-related wounds. Such was the case of an ensnared infant, Ngwino, found too late by Karisoke workers last week. The infant’s shoulder was dislocated during an escape attempt, and gangrene had set in after the ropes cut deep into her leg.

A snare consists of a noose tied to a branch or a bamboo stalk. The rope is pulled downward, bending the branch, and a rock or bent stick is used to hold the noose to the ground, keeping the branch tight. Then vegetation is placed over the noose to camouflage it. When an animal budges the rock or stick, the branch swings upward and the noose closes around the prey, usually the leg, and, depending on the weight of the animal, is hoisted up into the air.

Vecellio said the speed with which everything happened leads her to believe this wasn’t the first time the juveniles had dismantled a trap.

“They were very confident,” she said. “They saw what they had to do, they did it, and then they left.”

Since gorillas in the Kuryama group have been snared before, Vecellio said it is likely that the juveniles know these snares are dangerous. “That’s why they destroyed them.”

“Chimpanzees are always quoted as being the tool users, but I think, when the situation provides itself, gorillas are quite ingenious” too, said veterinarian Mike Cranfield, executive director of the Mountain Gorilla Veterinary Project.

He speculated that the gorillas may have learned how to destroy the traps by watching the Karisoke trackers. “If we could get more of them doing it, it would be great,” he joked.

But Vecellio said it would go against Karisoke center policies and ethos to actively instruct the apes. “We try as much as we can to not interfere with the gorillas. We don’t want to affect their natural behavior.”

Pictures of the incident have gone viral and numerous fans on the Fund’s Facebook page have shared comments cheering for the young silverbacks. Archibald said capturing the interaction was “so touching that I felt everybody with any brains would be touched.”


Title: WSJ: Are we really getting smarter?
Post by: Crafty_Dog on September 22, 2012, 07:02:02 AM


Are We Really Getting Smarter?
Americans' IQ scores have risen steadily over the past century. James R. Flynn examines why.
By JAMES R. FLYNN

IQ tests aren't perfect, but they can be useful. If a boy doing badly in class does really well on one, it is worth investigating whether he is being bullied at school or having problems at home. The tests also roughly predict who will succeed at college, though factors like motivation and self-control are at least as important.

 


Advanced nations like the U.S. have experienced massive IQ gains over time (a phenomenon that I first noted in a 1984 study and is now known as the "Flynn Effect"). From the early 1900s to today, Americans have gained three IQ points per decade on both the Stanford-Binet Intelligence Scales and the Wechsler Intelligence Scales. These tests have been around since the early 20th century in some form, though they have been updated over time. Another test, Raven's Progressive Matrices, was invented in 1938, but there are scores for people whose birth dates go back to 1872. It shows gains of five points per decade.

In 1910, scored against today's norms, our ancestors would have had an average IQ of 70 (or 50 if we tested with Raven's). Conversely, scored against the norms of 1910, our mean IQ today would be 130 to 150, depending on the test. Are we geniuses or were they just dense?
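
The arithmetic behind those two figures is just the per-decade gains accumulated over roughly a century; a back-of-the-envelope sketch in Python (my illustration, not Flynn's own calculation):

def mean_vs_todays_norms(years_ago, gain_per_decade, todays_mean=100.0):
    """Mean IQ an earlier cohort would score against today's norms, assuming a steady gain."""
    return todays_mean - gain_per_decade * years_ago / 10.0

print(mean_vs_todays_norms(100, 3))   # Stanford-Binet/Wechsler gains (~3 pts/decade) -> 70.0
print(mean_vs_todays_norms(100, 5))   # Raven's gains (~5 pts/decade)                 -> 50.0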

These alternatives sparked a wave of skepticism about IQ. How could we claim that the tests were valid when they implied such nonsense? Our ancestors weren't dumb compared with us, of course. They had the same practical intelligence and ability to deal with the everyday world that we do. Where we differ from them is more fundamental: Rising IQ scores show how the modern world, particularly education, has changed the human mind itself and set us apart from our ancestors. They lived in a much simpler world, and most had no formal schooling beyond the sixth grade.

The Raven's test uses images to convey logical relationships. The Wechsler has 10 subtests, some of which do much the same, while others measure traits that intelligent people are likely to pick up over time, such as a large vocabulary and the ability to classify objects.

Modern people do so well on these tests because we are new and peculiar. We are the first of our species to live in a world dominated by categories, hypotheticals, nonverbal symbols and visual images that paint alternative realities. We have evolved to deal with a world that would have been alien to previous generations.

(Raven's Progressive Matrices are nonverbal, multiple-choice measures of general intelligence; in each test item, the subject is asked to identify the missing element that completes a pattern.)
A century ago, people mostly used their minds to manipulate the concrete world for advantage. They wore what I call "utilitarian spectacles." Our minds now tend toward logical analysis of abstract symbols—what I call "scientific spectacles." Today we tend to classify things rather than to be obsessed with their differences. We take the hypothetical seriously and easily discern symbolic relationships.

The mind-set of the past can be seen in interviews between the great psychologist Alexander Luria and residents of rural Russia during the 1920s—people who, like ourselves in 1910, had little formal education.

Luria: What do a fish and crow have in common?

Reply: A fish it lives in water, a crow flies.

Luria: Could you use one word for them both?

Reply: If you called them "animals" that wouldn't be right. A fish isn't an animal, and a crow isn't either. A person can eat a fish but not a crow.

The prescientific person is fixated on differences between things that give them different uses. My father was born in 1885. If you asked him what dogs and rabbits had in common, he would have said, "You use dogs to hunt rabbits." Today a schoolboy would say, "They are both mammals." The latter is the right answer on an IQ test. Today we find it quite natural to classify the world as a prerequisite to understanding it.

Here is another example.

Luria: There are no camels in Germany; the city of B is in Germany; are there camels there or not?

Reply: I don't know, I have never seen German villages. If B is a large city, there should be camels there.

Luria: But what if there aren't any in all of Germany?

Reply: If B is a village, there is probably no room for camels.

The prescientific Russian wasn't about to treat something as important as the existence of camels hypothetically. Resistance to the hypothetical isn't just a state of mind unfriendly to IQ tests. Moral argument is more mature today than a century ago because we take the hypothetical seriously: We can imagine alternate scenarios and put ourselves in the shoes of others.

The following invented examples (not from an IQ test) show how our minds have evolved. All three present a series that implies a relationship; you must discern that relationship and complete the series based on multiple-choice answers:

1. [gun] / [gun] / [bullet] 2. [bow] / [bow] / [blank].

Pictures that represent concrete objects convey the relationship. In 1910, the average person could choose "arrow" as the answer.

1. [square] / [square] / [triangle]. 2. [circle] / [circle] / [blank].

In this question, the relationship is conveyed by shapes, not concrete objects. By 1960, many could choose semicircle as the answer: Just as the square is halved into a triangle, so the circle should be halved.

1. * / & / ? 2. M / B / [blank].

In this question, the relationship is simply that the symbols have nothing in common except that they are the same kind of symbol. That "relationship" transcends the literal appearance of the symbols themselves. By 2010, many could choose "any letter other than M or B" from the list as the answer.

This progression signals a growing ability to cope with formal education, not just in algebra but also in the humanities. Consider the exam questions that schools posed to 14-year-olds in 1910 and 1990. The earlier exams were all about socially valuable information: What were the capitals of the 45 states? Later tests were all about relationships: Why is the capital of many states not the largest city? Rural-dominated state legislatures hated big cities and chose Albany over New York, Harrisburg over Philadelphia, and so forth.

Our lives are utterly different from those led by most Americans before 1910. The average American went to school for less than six years and then worked long hours in factories, shops or agriculture. The only artificial images they saw were drawings or photographs. Aside from basic arithmetic, nonverbal symbols were restricted to musical notation (for an elite) and playing cards. Their minds were focused on ownership, the useful, the beneficial and the harmful.

Widespread secondary education has created a mass clientele for books, plays and the arts. Since 1950, there have been large gains on vocabulary and information subtests, at least for adults. More words mean that more concepts are conveyed. More information means that more connections are perceived. Better analysis of hypothetical situations means more innovation. As the modern mind developed, people performed better not only as technicians and scientists but also as administrators and executives.

A greater pool of those capable of understanding abstractions, more contact with people who enjoy playing with ideas, the enhancement of leisure—all of these developments have benefited society. And they have come about without upgrading the human brain genetically or physiologically. Our mental abilities have grown, simply enough, through a wider acquaintance with the world's possibilities.

—Mr. Flynn is the author of "Are We Getting Smarter? Rising IQ in the 21st Century" (Cambridge University Press).
Title: Five unique ways intelligent people screw up
Post by: Crafty_Dog on October 10, 2012, 12:02:44 PM


http://pjmedia.com/lifestyle/2012/09/29/the-5-unique-ways-intelligent-people-screw-up-their-lives/?singlepage=true
Title: Empathy and Analytical Mind mutually exclusive
Post by: Crafty_Dog on November 03, 2012, 08:06:13 PM
http://www.redorbit.com/news/science/1112722935/brain-empathy-analytical-thinking-103112/
Title: Jewish DNA
Post by: Crafty_Dog on June 20, 2013, 05:28:05 AM


http://fun.mivzakon.co.il/video/General/8740/%D7%9E%D7%97%D7%A7%D7%A8.html
Title: Smart Crow
Post by: Crafty_Dog on February 09, 2014, 03:23:26 AM
http://www.huffingtonpost.com/2014/02/06/crow-smartest-bird_n_4738171.html
Title: Elephant Artist
Post by: Crafty_Dog on August 07, 2014, 10:16:14 AM
http://www.igooglemo.com/2014/06/amazing-young-elephant-paints-elephant_15.html
Title: altruism
Post by: ccp on September 28, 2014, 05:09:41 PM
Extreme altruism
Right on!
Self-sacrifice, it seems, is the biological opposite of psychopathy
Sep 20th 2014 | From the print edition

FLYERS at petrol stations do not normally ask for someone to donate a kidney to an unrelated stranger. That such a poster, in a garage in Indiana, actually did persuade a donor to come forward might seem extraordinary. But extraordinary people such as the respondent to this appeal (those who volunteer to deliver aid by truck in Syria at the moment might also qualify) are sufficiently common to be worth investigating. And in a paper published this week in the Proceedings of the National Academy of Sciences, Abigail Marsh of Georgetown University and her colleagues do just that. Their conclusion is that extreme altruists are at one end of a “caring continuum” which exists in human populations—a continuum that has psychopaths at the other end.

Biology has long struggled with the concept of altruism. There is now reasonable agreement that its purpose is partly to be nice to relatives (with whom one shares genes) and partly to permit the exchanging of favours. But how the brain goes about being altruistic is unknown. Dr Marsh therefore wondered if the brains of extreme altruists might have observable differences from other brains—and, in particular, whether such differences might be the obverse of those seen in psychopaths.

She and her team used two brain-scanning techniques, structural and functional magnetic-resonance imaging (MRI), to study the amygdalas of 39 volunteers, 19 of whom were altruistic kidney donors. (The amygdalas, of which brains have two, one in each hemisphere, are areas of tissue central to the processing of emotion and empathy.) Structural MRI showed that the right amygdalas of altruists were 8.1% larger, on average, than those of people in the control group, though everyone’s left amygdalas were about the same size. That is, indeed, the obverse of what pertains in psychopaths, whose right amygdalas, previous studies have shown, are smaller than those of controls.

Functional MRI yielded similar results. Participants, while lying in a scanner, were shown pictures of men and women wearing fearful, angry or neutral expressions on their faces. Each volunteer went through four consecutive runs of 80 such images, and the fearful images (but not the other sorts) produced much more activity in the right amygdalas of the altruists than they did in those of the control groups, while the left amygdalas showed no such response. That, again, is the obverse of what previous work has shown is true of psychopaths, though in neither case is it clear why only the right amygdala is affected.

Dr Marsh’s result is interesting as much for what it says about psychopathy as for what it says about extreme altruism. Some biologists regard psychopathy as adaptive. They argue that if a psychopath can bully non-psychopaths into giving him what he wants, he will be at a reproductive advantage as long as most of the population is not psychopathic. The genes underpinning psychopathy will thus persist, though they can never become ubiquitous because psychopathy works only when there are non-psychopaths to prey on.

In contrast, Dr Marsh’s work suggests that what is going on is more like the way human height varies. Being tall is not a specific adaptation (though lots of research suggests tall people do better, in many ways, than short people do). Rather, tall people (and also short people) are outliers caused by unusual combinations of the many genes that govern height. If Dr Marsh is correct, psychopaths and extreme altruists may be the result of similar, rare combinations of genes underpinning the more normal human propensity to be moderately altruistic.

From the print edition: Science and technology
Title: Worm mind, robot body
Post by: Crafty_Dog on December 15, 2014, 04:41:26 PM
http://www.iflscience.com/technology/worms-mind-robot-body 
Title: The Artificial Intelligence Revolution
Post by: Crafty_Dog on February 23, 2015, 07:05:06 PM


http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

The URL for Part Two can be found in Part One.
Title: Ashkenazi intelligence
Post by: Crafty_Dog on February 24, 2015, 12:07:16 PM
Haven't read this yet, posting it here for my future convenience:

http://ieet.org/index.php/IEET/more/pellissier20130620 
Title: Autistic Boy with IQ of 170
Post by: Crafty_Dog on May 27, 2015, 05:44:32 PM
http://wakeup-world.com/2013/06/04/autistic-boy-discovers-gift-after-removal-from-state-run-therapy/
Title: Is it ethical to study the genetic component of IQ?
Post by: Crafty_Dog on October 03, 2015, 01:53:45 PM
http://www.bioedge.org/bioethics/is-it-ethical-to-investigate-the-genetic-component-of-iq/11594
Title: Artificial General Intelligence AGI about to up end the world as we know it
Post by: Crafty_Dog on March 12, 2016, 10:35:13 PM
Game ON: the end of the old economic system is in sight
Posted: 12 Mar 2016 11:23 AM PST
Google is a pioneer in limited artificial general intelligence (aka computers that can learn w/o preprogramming them). One successful example is AlphaGo.  It just beat Go grandmaster Lee Sedol three times in a row.
 
 
What makes this win interesting is that AlphaGo didn't win through brute force.  Go is too complicated for that:
...the average 150-move game contains more possible board configurations — 10^170 — than there are atoms in the Universe, so it can’t be solved by algorithms that search exhaustively for the best move.
 
It also didn't win by extensive preprogramming by talented engineers, like IBM's Deep Blue did to win at Chess. 
 
Instead, AlphaGo won this victory by learning how to play the game from scratch using this process:
   No assumptions.  AlphaGo approached the game without any assumptions.  This is called a model-free approach.  This allows it to program itself from scratch, by building complex models human programmers can't understand/match.
   Big Data.  It then learned the game by interacting with a database filled with 30 million games previously played by human beings.  The ability to bootstrap a model from data removes almost all of the need for engineering and programming talent currently needed for big systems.  That's huge.
   Big Sim (by the way, Big Sim will be as well known as Big Data in five years <-- heard it here first). Finally, it applied and honed that learning by playing itself on 50 computers night and day until it became good enough to play a human grandmaster.
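
To make that recipe concrete at toy scale, here is a minimal sketch in Python (my own illustration, not DeepMind's method or code). A tabular agent learns the take-away game Nim purely by playing itself and nudging the value of every move toward the outcome of the game it appeared in. Nothing about Nim strategy is programmed in, so it captures only the "no assumptions" and self-play ingredients; it leaves out the deep networks and the database of human games that AlphaGo also used.

import random
from collections import defaultdict

Q = defaultdict(float)          # learned value of (pile_size, stones_taken); starts at zero

def choose(pile, eps=0.1):
    moves = [m for m in (1, 2, 3) if m <= pile]
    if random.random() < eps:
        return random.choice(moves)                    # occasional exploration
    return max(moves, key=lambda m: Q[(pile, m)])      # otherwise play the best known move

def self_play_episode(pile=21, alpha=0.1):
    history = {0: [], 1: []}                           # moves made by each side
    player = 0
    while pile > 0:
        move = choose(pile)
        history[player].append((pile, move))
        pile -= move
        if pile == 0:
            winner = player                            # taking the last stone wins
        player = 1 - player
    for p in (0, 1):
        reward = 1.0 if p == winner else -1.0
        for state_action in history[p]:                # nudge every move toward the game's outcome
            Q[state_action] += alpha * (reward - Q[state_action])

for _ in range(100_000):
    self_play_episode()

# From a pile of 6 the winning move is to take 2 (leaving a multiple of 4);
# after enough self-play, that move should carry the highest learned value.
print({m: round(Q[(6, m)], 2) for m in (1, 2, 3)})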
The surprise of this victory isn't that it occurred.  Most expected it would, eventually... 
 
Instead, the surprise is how fast it happened.  How fast AlphaGo was able to bootstrap itself to a mastery of the game.  It was fast. Unreasonably fast.
 
However, this victory goes way beyond the game of Go.  It is important because AlphaGo uses a generic technique for learning.  A technique that can be used to master a HUGE range of activities, quickly.  Activities that people get paid for today.
 
This implies the following:
   This technology is going to cut through the global economy like a hot knife through butter.  It learns fast and largely on its own.  It's widely applicable.  It doesn't only master what it has seen, it can innovate.  For example: some of the unheard of moves made by AlphaGo were considered "beautiful" by the Grandmaster it beat. 
   Limited AGI (deep learning in particular) will have the ability to do nearly any job currently being done by human beings -- from lawyers to judges, nurses to doctors, driving to construction -- potentially at a grandmaster's level of capability.  This makes it a buzzsaw.
   Very few people (and I mean very few) will be able to stay ahead of the limited AGI buzzsaw.   It learns so quickly, the fate of people stranded in former factory towns gutted by "free trade" is likely to be the fate of the highest paid technorati.  They simply don't have the capacity to learn fast enough or be creative enough to stay ahead of it.
Have fun,
 
John Robb
 
PS:  Isn't it ironic (or not) that at the very moment in history when we demonstrate a limited AGI (potentially, a tsunami of technological change) the western industrial bureaucratic political system starts to implode due to an inability to deal with the globalization (economic, finance and communications) enabled by the last wave of technological change?

PPS:  This has huge implications for warfare.  I'll write more about those soon.  Laying a foundation for understanding this change first.
Title: Inherited Intelligence, Charles Murray, Bell Curve
Post by: DougMacG on March 23, 2016, 10:02:07 AM
Also pertains to race and education. 
Charles Murray was co-author of The Bell Curve, a very long scientific book that became a landmine because of one small section reporting differences in measured intelligence between races; to critics, that made the author a racist...  His co-author, Richard Herrnstein, died around the time the book was published, so Murray has owned the work alone in the two decades since.

Intelligence is 40%-80% inherited, a wide range that is nowhere near zero or 100%.

People tend to marry near their own intelligence level, which makes the differences grow rather than equalize over time.  He predicted this would have societal effects, and those predictions have most certainly come true.

Being called a racist for publishing scientific data is nothing new, but Charles Murray has received more than his share of it.  What he could have or should have done is cover up the real results to fit what people like to hear, like the climate scientists do.  He didn't.

Most recently his work received a public rebuke from the President of Virginia Tech.

His response to that is a bit long but quite a worthwhile read; if you haven't already read this important work, it will save you the time of reading his 3-4 inch thick hardcover book.

https://www.aei.org/publication/an-open-letter-to-the-virginia-tech-community/

Charles Murray
March 17, 2016 9:00 am

An open letter to the Virginia Tech community

Last week, the president of Virginia Tech, Tim Sands, published an “open letter to the Virginia Tech community” defending lectures delivered by deplorable people like me (I’m speaking on the themes of Coming Apart on March 25). Bravo for President Sands’s defense of intellectual freedom. But I confess that I was not entirely satisfied with his characterization of my work. So I’m writing an open letter of my own.

Dear Virginia Tech community,

Since President Sands has just published an open letter making a serious allegation against me, it seems appropriate to respond. The allegation: “Dr. Murray is well known for his controversial and largely discredited work linking measures of intelligence to heredity, and specifically to race and ethnicity — a flawed socioeconomic theory that has been used by some to justify fascism, racism and eugenics.”

Let me make an allegation of my own. President Sands is unfamiliar either with the actual content of The Bell Curve — the book I wrote with Richard J. Herrnstein to which he alludes — or with the state of knowledge in psychometrics.

The Bell Curve and Charles Murray
I should begin by pointing out that the topic of the The Bell Curve was not race, but, as the book’s subtitle says, “Intelligence and Class Structure in American Life.” Our thesis was that over the last half of the 20th century, American society has become cognitively stratified. At the beginning of the penultimate chapter, Herrnstein and I summarized our message:

Predicting the course of society is chancy, but certain tendencies seem strong enough to worry about:
An increasingly isolated cognitive elite.
A merging of the cognitive elite with the affluent.
A deteriorating quality of life for people at the bottom end of the cognitive distribution.
Unchecked, these trends will lead the U.S. toward something resembling a caste society, with the underclass mired ever more firmly at the bottom and the cognitive elite ever more firmly anchored at the top, restructuring the rules of society so that it becomes harder and harder for them to lose. [p. 509].
It is obvious that these conclusions have not been discredited in the twenty-two years since they were written. They may be more accurately described as prescient.

Now to the substance of President Sands’s allegation.

The heritability of intelligence

Richard Herrnstein and I wrote that cognitive ability as measured by IQ tests is heritable, somewhere in the range of 40% to 80% [pp. 105–110], and that heritability tends to rise as people get older. This was not a scientifically controversial statement when we wrote it; that President Sands thinks it has been discredited as of 2016 is amazing.

You needn’t take my word for it. In the wake of the uproar over The Bell Curve, the American Psychological Association (APA) assembled a Task Force on Intelligence consisting of eleven of the most distinguished psychometricians in the United States. Their report, titled “Intelligence: Knowns and Unknowns,” was published in the February 1996 issue of the APA’s peer-reviewed journal, American Psychologist. Regarding the magnitude of heritability (represented by h2), here is the Task Force’s relevant paragraph. For purposes of readability, I have omitted the citations embedded in the original paragraph:

If one simply combines all available correlations in a single analysis, the heritability (h2) works out to about .50 and the between-family variance (c2) to about .25. These overall figures are misleading, however, because most of the relevant studies have been done with children. We now know that the heritability of IQ changes with age: h2 goes up and c2 goes down from infancy to adulthood. In childhood h2 and c2 for IQ are of the order of .45 and .35; by late adolescence h2 is around .75 and c2 is quite low (zero in some studies) [p. 85].
The position we took on heritability was squarely within the consensus state of knowledge. Since The Bell Curve was published, the range of estimates has narrowed somewhat, tending toward modestly higher estimates of heritability.
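
As background on where estimates like h2 = .45 or .75 come from, here is a minimal sketch in Python (my illustration, not the APA Task Force's method or anything from the letter) of the classic Falconer twin-study calculation, which infers the genetic and shared-environment shares of variance from the gap between identical-twin and fraternal-twin correlations. The correlation values below are hypothetical round numbers chosen only to land near the late-adolescence figures quoted above.

# Hypothetical twin correlations (illustrative values, not data from the letter or the Task Force)
r_mz = 0.80   # IQ correlation for identical (monozygotic) twins reared together
r_dz = 0.42   # IQ correlation for fraternal (dizygotic) twins reared together

h2 = 2 * (r_mz - r_dz)   # Falconer's estimate of the additive genetic share of variance
c2 = r_mz - h2           # shared (between-family) environment
e2 = 1.0 - r_mz          # non-shared environment plus measurement error

print(round(h2, 2), round(c2, 2), round(e2, 2))   # 0.76 0.04 0.2, near the adult figures quoted above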

Intelligence and race

There’s no doubt that discussing intelligence and race was asking for trouble in 1994, as it still is in 2016. But that’s for political reasons, not scientific ones. Once again, the state of knowledge about the basics is not particularly controversial. The mean scores for all kinds of mental tests vary by ethnicity. No one familiar with the data disputes that most elemental statement. Regarding the most sensitive difference, between Blacks and Whites, Herrnstein and I followed the usual estimate of one standard deviation (15 IQ points), but pointed out that the magnitude varied depending on the test, sample, and where and how it was administered. What did the APA Task Force conclude? “Although studies using different tests and samples yield a range of results, the Black mean is typically about one standard deviation (about 15 points) below that of Whites. The difference is largest on those tests (verbal or nonverbal) that best represent the general intelligence factor g” [p. 93].

Is the Black/White differential diminishing? In The Bell Curve, we discussed at length the evidence that the Black/White differential has narrowed [pp. 289–295], concluding that “The answer is yes with (as usual) some qualifications.” The Task Force’s treatment of the question paralleled ours, concluding with “[l]arger and more definitive studies are needed before this trend can be regarded as established” [p. 93].

Can the Black/White differential be explained by test bias? In a long discussion [pp. 280–286], Herrnstein and I presented the massive evidence that the predictive validity of mental tests is similar for Blacks and Whites and that cultural bias in the test items or their administration do not explain the Black/White differential. The Task Force’s conclusions regarding predictive validity: “Considered as predictors of future performance, the tests do not seem to be biased against African Americans” [p. 93]. Regarding cultural bias and testing conditions:  “Controlled studies [of these potential sources of bias] have shown, however, that none of them contributes substantially to the Black/White differential under discussion here” [p. 94].

Can the Black/White differential be explained by socioeconomic status? We pointed out that the question has two answers: Statistically controlling for socioeconomic status (SES) narrows the gap. But the gap does not narrow as SES goes up — i.e., measured in standard deviations, the differential between Blacks and Whites with high SES is not narrower than the differential between those with low SES [pp. 286–289]. Here’s the APA Task Force on this topic:

Several considerations suggest that [SES] cannot be the whole explanation. For one thing, the Black/White differential in test scores is not eliminated when groups or individuals are matched for SES. Moreover, the data reviewed in Section 4 suggest that—if we exclude extreme conditions—nutrition and other biological factors that may vary with SES account for relatively little of the variance in such scores [p. 94].

And so on. The notion that Herrnstein and I made claims about ethnic differences in IQ that have been scientifically rejected is simply wrong. We deliberately remained well within the mainstream of what was confidently known when we wrote. None of those descriptions have changed much in the subsequent twenty-two years, except to be reinforced as more has been learned. I have no idea what countervailing evidence President Sands could have in mind.

At this point, some readers may be saying to themselves, “But wasn’t The Bell Curve the book that tried to prove blacks were genetically inferior to whites?” I gather that was President Sands’ impression as well. It has no basis in fact. Knowing that people are preoccupied with genes and race (it was always the first topic that came up when we told people we were writing a book about IQ), Herrnstein and I offered a seventeen-page discussion of genes, race, and IQ [pp. 295–311]. The first five pages were devoted to explaining the context of the issue — why, for example, the heritability of IQ among humans does not necessarily mean that differences between groups are also heritable. Four pages were devoted to the technical literature arguing that genes were implicated in the Black/White differential. Eight pages were devoted to arguments that the causes were environmental. Then we wrote:

If the reader is now convinced that either the genetic or environmental explanation has won out to the exclusion of the other, we have not done a sufficiently good job of presenting one side or the other. It seems highly likely to us that both genes and the environment have something to do with racial differences. What might the mix be? We are resolutely agnostic on that issue; as far as we can determine, the evidence does not yet justify an estimate. [p. 311].
That’s it—the sum total of every wild-eyed claim that The Bell Curve makes about genes and race. There’s nothing else. Herrnstein and I were guilty of refusing to say that the evidence justified a conclusion that the differential had to be entirely environmental. On this issue, I have a minor quibble with the APA Task Force, which wrote “There is not much direct evidence on [a genetic component], but what little there is fails to support the genetic hypothesis” [p. 95]. Actually there was no direct evidence at all as of the mid-1990s, but the Task Force chose not to mention a considerable body of indirect evidence that did in fact support the genetic hypothesis. No matter. The Task Force did not reject the possibility of a genetic component. As of 2016, geneticists are within a few years of knowing the answer for sure, and I am content to wait for their findings.

But I cannot leave the issue of genes without mentioning how strongly Herrnstein and I rejected the importance of whether genes are involved. This passage from The Bell Curve reveals how very, very different the book is from the characterization of it that has become so widespread:

In sum: If tomorrow you knew beyond a shadow of a doubt that all the cognitive differences between races were 100 percent genetic in origin, nothing of any significance should change. The knowledge would give you no reason to treat individuals differently than if ethnic differences were 100 percent environmental. By the same token, knowing that the differences are 100 percent environmental in origin would not suggest a single program or policy that is not already being tried. It would justify no optimism about the time it will take to narrow the existing gaps. It would not even justify confidence that genetically based differences will not be upon us within a few generations. The impulse to think that environmental sources of difference are less threatening than genetic ones is natural but illusory.
In any case, you are not going to learn tomorrow that all the cognitive differences between races are 100 percent genetic in origin, because the scientific state of knowledge, unfinished as it is, already gives ample evidence that environment is part of the story. But the evidence eventually may become unequivocal that genes are also part of the story. We are worried that the elite wisdom on this issue, for years almost hysterically in denial about that possibility, will snap too far in the other direction. It is possible to face all the facts on ethnic and race differences on intelligence and not run screaming from the room. That is the essential message [pp. 314-315].
I have been reluctant to spend so much space discussing The Bell Curve’s treatment of race and intelligence because it was such an ancillary topic in the book. Focusing on it in this letter has probably made it sound as if it was as important as President Sands’s open letter implied.

But I had to do it. For two decades, I have had to put up with misrepresentations of The Bell Curve. It is annoying. After so long, when so many of the book’s main arguments have been so dramatically vindicated by events, and when our presentations of the meaning and role of IQ have been so steadily reinforced by subsequent research in the social sciences, not to mention developments in neuroscience and genetics, President Sands’s casual accusation that our work has been “largely discredited” was especially exasperating. The president of a distinguished university should take more care.

It is in that context that I came to the end of President Sands’s indictment, accusing me of promulgating “a flawed socioeconomic theory that has been used by some to justify fascism, racism and eugenics.” At that point, President Sands went beyond the kind of statement that merely reflects his unfamiliarity with The Bell Curve and/or psychometrics. He engaged in intellectual McCarthyism.
Title: Elon Musk vs. Artificial Intelligence
Post by: Crafty_Dog on March 29, 2017, 08:33:55 AM
http://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x
Title: Qualia
Post by: Crafty_Dog on April 13, 2017, 11:53:51 AM
http://neurohacker.com/qualia/


My son is intrigued by this.  Any comments?
Title: Re: Qualia
Post by: G M on April 20, 2017, 07:33:10 PM
http://neurohacker.com/qualia/


My son is intrigued by this.  Any comments?

My money is on this being a ripoff.
Title: CD, FWIW I agree with GM
Post by: ccp on April 22, 2017, 11:03:53 AM
https://www.theatlantic.com/health/archive/2013/07/the-vitamin-myth-why-we-think-we-need-supplements/277947/

Crafty,
Most if not all supplement sales follow a similar pattern of promotion.  You get someone with a science background who recites biochemical pathways showing that a particular substance is involved in some function needed for health, from vitamin C to B12 to cofactors with magnesium, and hundreds if not thousands more.

They impress the non-scientist with "cofactors," long chemical names, and cited studies that show some relationship to our health.  Then they may state that taking large doses of the cofactor or other chemical increases the benefit to our health over normal doses.  Or they will vary the presentation with claims that the nutrient or chemical has to be taken in a certain way with other substances, and then, and only then, would we all reap some increased benefit to our "prostate health, our cognitive health, our digestive health, more energy, etc."

Then they find mostly obscure studies by usually second-rate or no-name researchers who are spending grant money, trying to make some sort of name for themselves, or, I even suspect, at times making up data for bribes, and then publish the data and their "research" in one of the money-making journals that are usually second rate or not well monitored or peer reviewed (even that process is subject to outright fraud).

So now they cite the impressive-sounding biochemistry in order to sound like they understand something the rest of us do not, and claim they "discovered" this chemical (or chemicals) that these usually insignificant if not fraudulent studies suggest has some sort of benefit.

The chemicals are often obscure, from some exotic jungle or faraway ocean or island, or come with the claim that only they can provide the proper purity, concentration, mix, or other elixir that no one else can duplicate.

If any real scientist or doctor disputes their claim, they come back with a vengeance, arguing that the doctor or scientist is just threatened by this "cure" that would put him out of business.

You don't have to take my word for it, but the vast majority of these things, if not all, are scams.  They all have similar themes, with variations that play over and over again to people who are looking to stay healthy, stay young, get an edge in life, have more sexual prowess, remember more, be smarter.

There are billions to be made.

I hope I don't sound like some condescending doctor who thinks he knows it all.  I don't.  And I know I don't. 
But even on Shark Tank, when an entrepreneur came on trying to get the sharks to buy into some sort of supplement, the sharks said all supplements are just a "con."

FWIW I agree with it.



Title: Re: Intelligence and Psychology
Post by: Crafty_Dog on April 22, 2017, 02:19:56 PM
Thank you!
Title: Jordan Peterson on Intelligence
Post by: Crafty_Dog on May 25, 2017, 12:09:03 PM
https://www.youtube.com/watch?v=P8opBj1LjSU
Title: AI develops its own language
Post by: Crafty_Dog on June 16, 2017, 04:06:11 PM


https://www.theatlantic.com/technology/archive/2017/06/artificial-intelligence-develops-its-own-non-human-language/530436/
Title: Re: AI develops its own language
Post by: G M on June 17, 2017, 10:10:27 AM


https://www.theatlantic.com/technology/archive/2017/06/artificial-intelligence-develops-its-own-non-human-language/530436/

https://www.youtube.com/watch?v=ih_l0vBISOE
Title: Grit
Post by: Crafty_Dog on October 09, 2017, 08:31:50 AM
http://www.illumeably.com/2017/07/27/success-vs-failure/
Title: Mark Cuban on AI
Post by: Crafty_Dog on November 06, 2017, 09:02:58 AM
https://www.marketwatch.com/story/mark-cuban-tells-kyle-bass-ai-will-change-everything-weighs-in-on-bitcoin-2017-11-04?mod=cx_picks&cx_navSource=cx_picks&cx_tag=other&cx_artPos=7#cxrecs_s

There is no way to beat the machines, so you’d better bone up on what makes them tick.

That was the advice of billionaire investor Mark Cuban, who was interviewed by Kyle Bass, founder and principal of Hayman Capital Management, in late October for Real Vision. The interview published Nov. 3.

“I think artificial intelligence is going to change everything, everything, 180 degrees,” said Cuban, who added that changes seen by AI would “dwarf” the advances that have been seen in technology over the last 30 years or more, even the internet.

The owner of the NBA’s Dallas Mavericks and a regular on the TV show “Shark Tank” said AI is going to displace a lot of jobs, something that will play out fast, over the next 20 to 30 years. He said real estate is one industry that’s likely to get hit hard.

Read: Kyle Bass says this will be the first sign of a bigger market meltdown

“So, the concept of you calling in to make an appointment to have somebody pick up your car to get your oil changed, right — someone will still drive to get your car, but there’s going to be no people in transacting any of it,” he said.

Cuban says he’s trying to learn all he can right now about machine learning, neural networks, deep learning, writing code and programming languages such as Python. Machine learning says, “OK, we can take a lot more variables than you can ever think of,” he said.

And AI is seeing big demand when it comes to jobs, he said. “At the high end, we can’t pay enough to get — so when I talk about, within the artificial intelligence realm, there’s a company in China paying million-dollar bonuses to get the best graduates,” said Cuban.

The U.S. is falling badly behind when it comes to AI, with Montreal now the “center of the universe for computer vision. It’s not U.S.-based schools that are dominating any longer in those areas,” he said.

The AI companies

As for companies standing to benefit from AI, Cuban said he “thinks the FANG stocks are going to crush them,” noting that his biggest public holding is Amazon.com Inc. (AMZN).

“They’re the world’s greatest startups with liquidity. If you look at them as just a public company where you want to see what the P/E ratio is and what the discounted cash value-- you’re never going to get it, right? You’re never going to see it. And if you say Jeff Bezos (chief executive officer of Amazon), Reed Hastings (chief executive officer of Netflix Inc.) — those are my 2 biggest holdings,” he said.

Read: 10 wildly successful people and the surprising jobs that kick-started their careers

Cuban said he’s less sold on Apple Inc. (AAPL), though he said it’s trying to make progress on AI, along with Alphabet Inc. (GOOGL) and Facebook Inc. (FB). “They’re just nonstop startups. They’re in a war. And you can see the market value accumulating to them because of that,” he said.

But still, they aren’t all owning AI yet, and there are lots of opportunities for smaller companies, he added.

On digital currencies and ICO

While Bass commented that he has been just a spectator when it comes to blockchain — a decentralized ledger used to record and verify transactions — Cuban said he’s a big fan. But when it comes to bitcoin, ethereum and other cryptocurrencies, he said it would be a struggle to see them become real currencies because only a limited number of transactions can be done.

Read: Two ETF sponsors file for funds related to blockchain, bitcoin’s foundational technology

“So, it’s going to be very difficult for it to be a currency when the time and the expense of doing a transaction is 100 times what you can do over a Visa or Mastercard, right?” asked Cuban, adding that really the only value of bitcoin and ethereum is that they are just digital assets that are collectible.

Read: Bitcoin may be staging the biggest challenge yet to gold and silver

“And in this particular case, it’s a brilliant collectible that’s probably more like art than baseball cards, stamps, or coins, right, because there’s a finite amount that are going to be made, right? There are 21.9 million bitcoins that are going to be made,” he said.

Cuban said initial coin offerings — fundraising for new cryptocurrency ventures — “really are an opportunity,” and he has been involved in UniCoin, which does ETrade, and Unikrm, which does legal sports betting for esports and other sports outside the United States.

Read: What is an ICO?

But he and Bass both commented about how the industry needs regulating, with Bass noting that ICOs have raised $3 billion this year, and $2 billion going into September. While many are “actually going to do well,” so many “are just completely stupid and frauds,” he said.

“It’s the dumb ones that are going to get shut down,” agreed Cuban.

One problem: “There’s nobody at the top that has any understanding of it,” he added, referring to the Securities and Exchange Commission.

Cuban ended the interview with some advice on where to invest now. He said for those investors not too knowledgeable about markets, the best bet is a cheap S&P 500 fund, but that putting 5% in bitcoin or ethereum isn’t a bad idea on the theory that it’s like investing in artwork.

Listen to the whole interview on Real Vision here
Title: The Psychology of Human Misjudgement
Post by: Crafty_Dog on March 13, 2018, 06:03:50 AM
https://www.youtube.com/watch?v=pqzcCfUglws&feature=youtu.be
Title: Intelligence across the generations
Post by: Crafty_Dog on March 15, 2018, 10:54:38 PM
https://ourworldindata.org/intelligence
Title: The Intelligence of Crows
Post by: Crafty_Dog on April 15, 2018, 09:46:06 PM
https://www.ted.com/talks/joshua_klein_on_the_intelligence_of_crows#t-178765
Title: Chinese Eugenics
Post by: Crafty_Dog on May 08, 2018, 02:59:20 PM
https://www.vice.com/en_us/article/5gw8vn/chinas-taking-over-the-world-with-a-massive-genetic-engineering-program
Title: Nautilus: Is AI inscrutable?
Post by: Crafty_Dog on June 17, 2018, 07:13:02 AM

http://nautil.us//issue/40/learning/is-artificial-intelligence-permanently-inscrutable?utm_source=Nautilus&utm_campaign=270e193d5c-EMAIL_CAMPAIGN_2018_06_15_08_18&utm_medium=email&utm_term=0_dc96ec7a9d-270e193d5c-61805061
Title: Evidence that Viruses May Cause Alzheimer's Disease
Post by: bigdog on July 15, 2018, 07:20:51 AM
https://gizmodo.com/yet-more-evidence-that-viruses-may-cause-alzheimers-dis-1827511539
Title: Brain Games for older folks
Post by: Crafty_Dog on July 26, 2018, 01:25:51 PM


https://www.nj.com/healthfit/index.ssf/2018/07/brain_training_breakthrough_offers_new_hope_in_bat.html
Title: The authoritarian Chinese vision for AI
Post by: Crafty_Dog on August 10, 2018, 05:38:18 PM
https://www.nationalreview.com/2018/08/china-artificial-intelligence-race/?utm_source=Sailthru&utm_medium=email&utm_campaign=NR%20Daily%20Monday%20through%20Friday%202018-08-10&utm_term=NR5PM%20Actives
Title: China exporting illiberal AI
Post by: Crafty_Dog on August 15, 2018, 01:53:24 PM
https://www.mercatornet.com/features/view/exporting-enslavement-chinas-illiberal-artificial-intelligence/21607
Title: Human Brain builds structures in 11 dimensions; more
Post by: Crafty_Dog on August 19, 2018, 07:46:48 AM

https://bigthink.com/paul-ratner/our-brains-think-in-11-dimensions-discover-scientists?utm_campaign=Echobox&utm_medium=Social&utm_source=Twitter#Echobox=1534419641

Also see

https://aeon.co/videos/our-divided-brains-are-far-more-complex-and-remarkable-than-a-left-right-split

http://runwonder.com/life/science-explains-what-happens.html

Title: Stratfor: AI and great power competition
Post by: Crafty_Dog on October 18, 2018, 09:52:05 AM
Highlights

    Aging demographics and an emerging great power competition pitting China against the United States form the backdrop to a high-stakes race in artificial intelligence development.
    The United States, for now, has a lead overall in AI development, but China is moving aggressively to try and overtake its American rivals by 2030.
    While deep integration across tech supply chains and markets has occurred in the past couple of decades, rising economic nationalism and a growing battle over international standards will balkanize the global tech sector.
    AI advancements will boost productivity and economic growth, but creative destruction in the workforce will drive political angst in much of the world, putting China's digital authoritarianism model as well as liberal democracies to the test.

For better or worse, the advancement and diffusion of artificial intelligence technology will come to define this century. Whether that statement should fill your soul with terror or delight remains a matter of intense debate. Techno-idealists and doomsdayers will paint their respective utopian and dystopian visions of machine-kind, making the leap from what we know now as "narrow AI" to "general AI" to surpass human cognition within our lifetime. On the opposite end of the spectrum, yawning skeptics will point to Siri's slow intellect and the human instinct of Capt. Chesley "Sully" Sullenberger – the pilot of the US Airways flight that successfully landed on the Hudson River in 2009 – to wave off AI chatter as a heap of hype not worth losing sleep over.

The fact is that the development of AI – a catch-all term that encompasses neural networks and machine learning and deep learning technologies – has the potential to fundamentally transform civilian and military life in the coming decades. Regardless of whether you're a businessperson pondering your next investment, an entrepreneur eyeing an emerging opportunity, a policymaker grappling with regulation or simply a citizen operating in an increasingly tech-driven society, AI is a global force that demands your attention.

The Big Picture

As Stratfor wrote in its 2018 Third-Quarter Forecast, the world is muddling through a blurry transition from the post-Cold War world to an emerging era of great power competition. The race to dominate AI development will be a defining feature of U.S.-China rivalry.

An Unstoppable Force

Willingly or not, even the deepest skeptics are feeding the AI force nearly every minute of every day. Every Google (or Baidu) search, Twitter (or Weibo) post, Facebook (or Tencent) ad and Amazon (or Alibaba) purchase is another click creating mountains of data – some 2.2 billion gigabytes globally every day – that companies are using to train their algorithms to anticipate and mimic human behavior. This creates a virtuous (or vicious, depending on your perspective) cycle: the more users engage with everyday technology platforms, the more data is collected; the more data that's collected, the more the product improves; the more competitive the product, the more users and billions of dollars in investment it will attract; a growing number of users means more data can be collected, and the loop continues.
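
As a rough sketch of the loop just described, here is a toy Python simulation. Every name and constant in it (simulate_flywheel, the user count, the growth rates, the data-to-quality conversion) is hypothetical, chosen only to show how the cycle compounds, not to model any real platform.

    # Toy simulation of the data flywheel: users generate data, data improves the
    # model, and a better product attracts more users. All numbers are made up.
    def simulate_flywheel(users=1_000_000, quality=0.50, years=5):
        for year in range(1, years + 1):
            data = users * 365 * 10                      # assume ~10 data points per user per day
            quality = min(0.99, quality + data / 2e10)   # more data -> marginally better model
            users = int(users * (1 + 0.3 * quality))     # better product -> faster user growth
            print(f"year {year}: users={users:,} quality={quality:.2f}")

    simulate_flywheel()

Even with these arbitrary numbers, growth accelerates each year until quality saturates, which is the "virtuous (or vicious) cycle" the article is pointing at.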

And unlike previous AI busts, the development of this technology is occurring amid rapidly advancing computing power, where the use of graphical processing units (GPUs) and development of custom computer chips is giving AI developers increasingly potent hardware to drive up efficiency and drive down cost in training their algorithms. To help fuel advancements in AI hardware and software, AI investment is also growing at a rapid pace.

The Geopolitical Backdrop to the Global AI Race

AI is both a driver and a consequence of structural forces reshaping the global order. Aging demographics – an unprecedented and largely irreversible global phenomenon – is a catalyst for AI development. As populations age and shrink, financial burdens on the state mount and labor productivity slows, sapping economic growth over time. Advanced industrial economies already struggling to cope with the ill effects of aging demographics with governments that are politically squeamish toward immigration will relentlessly look to machine learning technologies to increase productivity and economic growth in the face of growing labor constraints.

The global race for AI supremacy will feature prominently in a budding great power competition between the United States and China. China was shocked in 2016 when Google DeepMind's AlphaGo beat the world champion of Go, an ancient Chinese strategy game (Chinese AI state planners dubbed the event their "Sputnik moment"), and has been deeply shaken by U.S. President Donald Trump's trade wars and the West's growing imperative to keep sensitive technology out of Chinese competitors' hands. Just in the past couple of years alone, China's state focus on AI development has skyrocketed to ensure its technological drive won't suffer a short circuit due to its competition with the United States.

How the U.S. and China Stack Up in AI Development

Do or Die for Beijing

The United States, for now, has the lead in AI development when it comes to hardware, research and development, and a dynamic commercial AI sector. China, by the sheer size of its population, has a much larger data pool, but is critically lagging behind the United States in semiconductor development. Beijing, however, is not lacking in motivation in its bid to overtake the United States as the premier global AI leader by 2030. And while that timeline may appear aggressive, China's ambitious development in AI in the coming years will be unfettered by the growing ethical, privacy and antitrust concerns occupying the West. China is also throwing hundreds of billions of dollars into fulfilling its AI mission, both in collaboration with its standing tech champions and by encouraging the rise of unicorns, or privately held startups valued at $1 billion or more.

By incubating and rewarding more and more startups, Beijing is finding a balance between focusing its national champions on the technologies most critical to the state (sometimes by taking an equity stake in the company) without stifling innovation. In the United States, on the other hand, it would be disingenuous to label U.S.-based multinational firms, which park most of their corporate profits overseas, as true "national" champions. Instead of the state taking the lead in funding high-risk and big-impact research in emerging technologies as it has in the past, the roles in the West have been flipped; private tech companies are in the driver's seat while the state is lunging at the steering wheel, trying desperately to keep China in its rear view.

The Ideological Battleground

The United States may have thought its days of fighting globe-spanning ideological battles ended with the Cold War. Not so. AI development is spawning a new ideological battlefield between the United States and China, pitting the West's notion of liberal democracy against China's emerging brand of digital authoritarianism. As neuroscientist Nicholas Wright highlights in his article, "How Artificial Intelligence Will Reshape the Global Order," China's 2017 AI development plan "describes how the ability to predict and grasp group cognition means 'AI brings new opportunities for social construction.'" Central to this strategic initiative is China's diffusion of a "social credit system" (which is set to be fully operational by 2020) that would assign a score based on a citizen's daily activities to determine everything from airfare class and loan eligibility to what schools your kids are allowed to attend. It's a tech-powered, state-driven approach to parse model citizens from the deplorables, so to speak.

The ability to harness AI-powered facial recognition and surveillance data to shape social behavior is an appealing tool, not just for Beijing, but for other politically paranoid states that are hungry for an alternative path to stability and are underwhelmed by the West's messy track record in promoting democracy. Wright describes how Beijing has exported its Great Firewall model to Thailand and Vietnam to barricade the internet while also supplying surveillance technology to the likes of Iran, Russia, Ethiopia, Zimbabwe, Zambia and Malaysia. Not only does this aid China's goal of providing an alternative to a U.S.-led global order, but it also gives China access to ever wider data pools around the globe to hone its own technological prowess.

The European Hustle

Not wanting to be left behind in this AI great power race, Europe and Russia are hustling to catch up, but they will struggle in the end to keep pace with the United States and China. Russian President Vladimir Putin made headlines last year when he told an audience of Russian youths that whoever rules AI will rule the world. But the reality of Russia's capital constraints means Russia will have to choose carefully where it puts its rubles. Moscow will apply a heavy focus on AI military applications and will rely on cyber espionage and theft to try and find shortcuts to AI development, all while trying to maintain its strategic alignment with China to challenge the United States.

The EU Struggle to Create Unicorn Companies

While France harbors ambitious plans to develop an AI ecosystem for Europe and Germany frets over losing its industrial edge to U.S. and Chinese tech competitors, unavoidable and growing fractures within the European Union will hamper Europe's ability to play a leading AI role on the world stage. The European Union's cumbersome regulatory environment and fragmented digital market have been prohibitive for tech startups, a fact reflected in the European Union's low global share and value of unicorn companies. Meanwhile, the United Kingdom, home to Europe's largest pool of tech talent, will be keen on unshackling itself from the European Union's investment-inhibitive regulations as it stumbles out of the bloc.

A Battle over Talent and Standards

But wherever pockets of tech innovation already exist on the Continent, those relatively few companies and individuals are already prime targets for U.S. and Chinese tech juggernauts prowling the globe for AI talent. AI experts are a precious global commodity. According to a 2018 study by Element AI, there are roughly 22,000 doctorate-level researchers in the world, but only around 3,000 are actually looking for work and around 5,400 are presenting their research at AI conferences. U.S. and Chinese tech giants are using a variety of means – mergers and acquisitions, aggressive poaching, launching labs in cities like Paris, Montreal and Taiwan – to gobble up this tiny talent pool.

Largest Tech Companies by Market Capitalization

Even as Europe struggles to build up its own tech champions, the European Union can use its market size and conscientious approach to ethics, privacy and competition to push back on encroaching tech giants through hefty fines, data localization and privacy rules, taxation and investment restrictions. The bloc's rollout of its General Data Protection Regulation (GDPR) is designed to give Europeans more control over their personal data by limiting data storage times, deleting data on request and monitoring for data breaches. While big-tech firms have the means to adapt and pay fines, the move threatens to cripple smaller firms struggling with the high cost of compliance. It also fundamentally restricts the continental data flows needed to fuel Europe's AI startup culture.

The United States in many ways shares Europe's concerns over issues like data privacy and competition, but it has a fundamentally different approach in how to manage those concerns. The European Union is effectively prioritizing individual privacy rights over free speech, while the United States does the reverse. Brussels will fixate on fairness, even at the cost of the bloc's own economic competitiveness, while Washington will generally avoid getting in the way of its tech champions. For example, while the European Union will argue that Google's dominance in multiple technological applications is by itself an abuse of its power that stifles competition, the United States will refrain from raising the antitrust flag unless tech giants are using their dominant position to raise prices for consumers.

U.S. and European government policy overlap instead in their growing scrutiny over foreign investment in sensitive technology sectors. Of particular concern is China's aggressive, tech-focused overseas investment drive and the already deep integration of Chinese hardware and software in key technologies used globally. A highly diversified company like Huawei, a pioneer in cutting-edge technologies like 5G and a mass producer of smartphones and telecommunications equipment, can leverage its global market share to play an influential role in setting international standards.

Washington, meanwhile, is lagging behind Brussels and Beijing in the race to establish international norms for cyber policy. While China and Russia have been persistent in their attempts to use international venues like the United Nations to codify their version of state-heavy cyber policy, the European Union has worked to block those efforts while pushing their own standards like GDPR.

This emerging dynamic of tightening restrictions in the West overall against Chinese tech encroachment, Europe's aggressive regulatory push against U.S. tech giants and China's defense and export of digital authoritarianism may altogether lead to a much more balkanized market for global tech companies in the future.

The AI Political Test of the Century

There is no shortage of AI reports by big-name consulting firms telegraphing to corporate audiences the massive productivity gains to come from AI in a range of industries, from financial, auto, insurance and retail to construction, cleaning and security. A 2017 PwC report estimated that AI could add $15.7 trillion to the global economy in 2030, of which $6.6 trillion would come from increased productivity and $9.1 trillion would come from increased consumption. The potential for double-digit impacts on GDP after years of stalled growth in much of the world is appealing, no doubt.

But lurking behind those massive figures is the question of just how well, how quickly and how much of a country's workforce will be able to adapt to these fast-moving changes. As the Austrian economist Joseph Schumpeter described in his 1942 book, Capitalism, Socialism and Democracy, the "creative destruction" that results from so-called industrial mutations "incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one." In the age of AI, the market will incessantly seek out scientists and creative thinkers. Machines will endlessly render millions of workers irrelevant. And new jobs, from AI empathy trainers to life coaches, will be created. Even as technology translates into productivity and economic gains overall, this will be a wrenching transition if workers are slow to learn new skills and if wage growth remains stagnant for much of the population.

Time will tell which model will be better able to cope with an expected rise in political angst as the world undergoes this AI revolution: China's untested model of digital authoritarianism or the West's time-tested, yet embattled, tradition in liberal democracy.
Title: Stratfor: How the US-China Power Comp. shapes the future of AI ethics
Post by: Crafty_Dog on October 18, 2018, 09:57:49 AM
second post

How the U.S.-China Power Competition Is Shaping the Future of AI Ethics
By Rebecca Keller
Senior Science and Technology Analyst, Stratfor
A U.S. Air Force MQ-1B Predator unmanned aerial vehicle returns from a mission to an air base in the Persian Gulf region.


    As artificial intelligence applications develop and expand, countries and corporations will have different opinions on how and when technologies should be employed. First movers like the United States and China will have an advantage in setting international standards.
    China will push back against existing Western-led ethical norms as its level of global influence rises and the major powers race to become technologically dominant.
    In the future, ethical decisions that prevent adoption of artificial intelligence applications in certain fields could limit political, security and economic advantages for specific countries.

Controversial new technologies such as automation and artificial intelligence are quickly becoming ubiquitous, prompting ethical questions about their uses in both the private and state spheres. A broader shift on the global stage will drive the regulations and societal standards that will, in turn, influence technological adoption. As countries and corporations race to achieve technological dominance, they will engage in a tug of war between different sets of values while striving to establish ethical standards. Western values have long been dominant in setting these standards, as the United States has traditionally been the most influential innovative global force. But China, which has successfully prioritized economic growth and technological development over the past several decades, is likely to play a bigger role in the future when it comes to tech ethics.

The Big Picture

The great power competition between China and the United States continues to evolve, leading to pushback against international norms, organizations and oversight. As the world sits at a key intersection of geopolitical and technological development, the battles to set new global standards will play out on emerging technological stages.

The field of artificial intelligence will be one of the biggest areas where different players will be working to establish regulatory guardrails and answer ethical questions in the future. Science fiction writer Isaac Asimov wrote his influential laws of robotics in the first half of the 20th century, and reality is now catching up to fiction. Questions over the ethics of AI and its potential applications are numerous: What constitutes bias within the algorithms? Who owns data? What privacy measures should be employed? And just how much control should humans retain in applying AI-driven automation? For many of these questions, there is no easy answer. And in fact, as the great power competition between China and the United States ramps up, they prompt another question: Who is going to answer them?

Questions of right and wrong are based on the inherent cultural values ingrained within a place. From an economic perspective, the Western ideal has always been the laissez-faire economy. And ethically, Western norms have prioritized privacy and the importance of human rights. But China is challenging those norms and ideals, as it uses a powerful state hand to run its economy and often chooses to sacrifice privacy in the name of development. On yet another front, societal trust in technology can also differ, influencing the commercial and military use of artificial intelligence.

Different Approaches to Privacy

One area where countries that intend to set global ethical standards for the future of technology have focused their attention is in the use and monetization of personal data. From a scientific perspective, more data equals better, smarter AI, meaning those with access to and a willingness to use that data could have a future advantage. However, ethical concerns over data ownership and the privacy of individuals and even corporations can and do limit data dispersion and use.

How various entities are handling the question of data privacy is an early gauge for how far AI application can go, in private and commercial use. It is also a question that reveals a major divergence in values. With its General Data Protection Regulation, which went into effect this year, the European Union has taken an early global lead on protecting the rights of individuals. Several U.S. states have passed or are working to pass similar legislation, and the U.S. government is currently considering an overarching federal policy that covers individual data privacy rights.

China, on the other hand, has demonstrated a willingness to prioritize the betterment of the state over the value of personal privacy. The Chinese public is generally supportive of initiatives that use personal data and apply algorithms. For example, there has been little domestic objection to a new state-driven initiative to monitor behavior — from purchases to social media activity to travel — using AI to assign a corresponding "social score." The score would translate to a level of "trustworthiness" that would allow, or deny, access to certain privileges. The program, meant to be fully operational by 2020, will track citizens, government officials and businesses. Similarly, facial recognition technology is already used, though not ubiquitously, throughout the country and is projected to play an increasingly important role in Chinese law enforcement and governance. China's embrace of such algorithm-based systems would make it among the first entities to place such a hefty reliance on the decision-making capabilities of computers.

When Ethics Cross Borders and Machine Autonomy Increases

Within a country's borders, the use of AI technology for domestic security and governance purposes may certainly raise questions from human rights groups, but those questions are amplified when use of the technology crosses borders and affects international relationships. One example is Google's potential project to develop a censored search app for the Chinese market. By intending to take advantage of China's market by adhering to the country's rules and regulations, Google could also be seen as perpetuating the Chinese government's values and views on censorship. The company left China in 2010 over objections to that very matter.

And these current issues are relatively small in comparison to questions looming on the horizon. Ever-improving algorithms and applications will soon prompt queries about how much autonomy machines "should" have, going far beyond today's credit scores, loans or even social scores. Take automated driving, for example, a seemingly more innocuous application of artificial intelligence and automation. How much control should a human have while in a vehicle? If there is no human involved, who is responsible if and when there is an accident? The answer varies depending on where the question is asked. In societies that trust in technology more, like Japan, South Korea or China, the ability to remove key components from cars, such as steering wheels, in the future will likely be easier. In the United States, despite its technological prowess and even as General Motors is applying for the ability to put cars without steering wheels on the road, the current U.S. administration appears wary.
Defense, the Human Element and the First Rule of Robotics

Closely paraphrased, Asimov's first rule of robotics is that a robot should never harm a human through action or inaction. The writer was known as a futurist and thinker, and his rule still resonates. In terms of global governance and international policy, decisions over the limits of AI's decision-making power will be vital to determining the future of the military. How much human involvement, after all, should be required when it comes to decisions that could result in the loss of human life? Advancements in AI will drive the development of remote and asymmetric warfare, requiring the U.S. Department of Defense to make ethical decisions prompted by both Silicon Valley and the Chinese government.

At the dawn of the nuclear age, the scientific community questioned the ethical nature of using nuclear understanding for military purposes. More recently, companies in Silicon Valley have been asking similar questions about whether their technological developments should be used in warfare. Google has been vocal about its objections to working with the U.S. military. After controversy and internal dissent about the company's role in Project Maven, a Pentagon-led project to incorporate AI into the U.S. defense strategy, Google CEO Sundar Pichai penned the company's own rules of AI ethics, which required, much like Asimov intended, that it not develop AI for weaponry or uses that would cause harm. Pichai also stated that Google would not contribute to the use of AI in surveillance that pushes boundaries of "internationally accepted norms." Recently, Google pulled out of bidding for the Defense Department's JEDI (Joint Enterprise Defense Initiative) cloud computing project. Meanwhile, Amazon's CEO, Jeff Bezos, whose company is still in the running for the JEDI contract, has bucked this trend, voicing his belief that technology companies partnering with the U.S. military is necessary to ensure national security.

There are already certain ethical guidelines in place when it comes to integrating AI into military operations. Western militaries, including that of the United States, have pledged to always maintain a "human-in-the-loop" structure for operations involving armed unmanned vehicles, so as to avoid the ethical and legal consequences of AI-driven attacks. But these rules may evolve as technology improves. The desire for quick decisions, the high cost of human labor and basic efficiency needs are all bound to challenge countries' commitment to keeping a human in the loop. After all, AI could function like a non-human commander, making command and control decisions conceivably better than any human general could.
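
As a purely structural sketch of what "human-in-the-loop" means in practice (this mirrors no real military system; the function name, the proposed action, and the confidence figure are all illustrative), the software may recommend a high-consequence action but cannot execute it without explicit human sign-off:

    # Abstract sketch of a human-in-the-loop gate: the system proposes,
    # and a human operator must explicitly approve before anything executes.
    # Names and values are hypothetical.
    def human_in_the_loop(proposed_action, confidence):
        print(f"System recommends: {proposed_action} (confidence {confidence:.0%})")
        answer = input("Operator approval required [y/N]: ").strip().lower()
        if answer == "y":
            return f"EXECUTE: {proposed_action}"
        return "ABORT: no human authorization"

    print(human_in_the_loop("flag object for further review", 0.87))

The pressures the article describes (speed, cost, efficiency) all push toward removing that approval step, which is why the pledge matters.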

Even if the United States still abides by the guidelines, other countries — like China — may have far less motivation to do so. China has already challenged international norms in a number of arenas, including the World Trade Organization, and may well see it as a strategic imperative to employ AI in controversial ways to advance its military might. It's unclear where China will draw the line and how it will match up with Western military norms. But it's relatively certain that if one great power begins implementing cutting-edge technology in controversial ways, others will be forced to consider whether they are willing to let competing countries set ethical norms.
Rebecca Keller focuses on areas where science and technology intersect with geopolitics. This diverse area of responsibility includes changes in agricultural technology and water supplies that affect global food supplies, nanotechnology and other developments.
Title: Stratfor: AI makes personal privacy a matter of national strategy
Post by: Crafty_Dog on October 18, 2018, 09:59:51 AM
Third post

AI Makes Personal Privacy a Matter of National Strategy
By Rebecca Keller
Senior Science and Technology Analyst, Stratfor

    Growing concern in the United States and Europe over the collection and distribution of personal data could decrease the quality and accessibility of a subset of data used to develop artificial intelligence.
    Though the United States is still among the countries best poised to take advantage of AI technologies to drive economic growth, changes in privacy regulations and social behaviors will impair its tech sector over the course of the next decade.
    China, meanwhile, will take the opportunity to close the gap with the United States in the race to develop AI. 

It seems that hardly a 24-hour news cycle passes without a story about the latest social media controversy. We worry about who has our information, who knows our buying habits or searching habits, and who may be turning that information into targeted ads for products or politicians. Calls for stricter control and protection of privacy and for greater transparency follow. Europe will implement a new set of privacy regulations later this month — the culmination of a yearslong negotiating process and a move that could eventually ease the way for similar policies in the United States. Individuals, meanwhile, may take their own steps to guard their data. The implications of that reaction could reverberate far beyond our laptops or smartphones. It will handicap the United States in the next leg of the technology race with China.

Big Picture

Artificial intelligence is more than simply a disruptive technology. It is poised to become an anchor for the Fourth Industrial Revolution and to change the factors that contribute to economic growth. As AI develops at varying rates throughout the world, it will influence the global competition underway between the world's great powers.

See The 4th Industrial Revolution

More than a quarter-century after the fall of the Soviet Union, the world is slowly shifting away from a unipolar system. As the great powers compete for global influence, technology will become an increasingly important part of their struggle. The advent of disruptive technologies such as artificial intelligence stands to revolutionize the ways in which economies function by changing the weight of the factors that fuel economic growth. In several key sectors, China is quickly catching up to its closest competitor in technology, the United States. And in AI, it could soon gain an advantage.

Of the major contenders in the AI arena today, China places the least value on individual privacy, while the European Union places the most. The United States is somewhere in between, though recent events seem to be pushing the country toward more rigorous privacy policies. Since the scandal erupted over Cambridge Analytica's use of Facebook data to target political ads in the 2016 presidential election, outcry has been building in the United States among internet users who want greater control over their personal data. But AI runs on data. AI algorithms use robust sets of data to learn, honing their pattern recognition and predictive abilities. Much of that data comes from individuals.

Learning to Read Personal Data

Online platforms such as social media networks, retail sites, search engines and ride-hailing apps all collect vast amounts of data from their users. Facebook collects a total of nearly 200 billion data points in 98 categories. Amazon's virtual assistant, Alexa, tracks numerous aspects of its users' behavior. Medical databases and genealogy websites gather troves of health and genetic information, and the GPS on our smartphones can track our every move. Drawing on this wealth of data, AI applications could evolve that would revolutionize aspects of everyday life far beyond online shopping. The data could enable applications to track diseases and prevent or mitigate future outbreaks, to help solve cold criminal cases, to relieve traffic congestion, to better assess risk for insurers, or to increase the efficiency of electrical grids and decrease emissions. The potential productivity gains that these innovations offer, in turn, would boost global economic growth.

To reap the greatest benefit, however, developers can't use just any data. Quality is as important as quantity, and that means ensuring that data collection methods are free of inherent bias. Preselecting participants for a particular data set, for example, would introduce bias to it. Likewise, placing a higher value on privacy, as many countries in the West are doing today, could skew data toward certain economic classes. Not all internet users, after all, will have the resources to pay to use online platforms that better protect personal data or to make informed choices about their privacy.
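
A hedged sketch of the preselection point above, using entirely synthetic numbers: if the records come only from one slice of the population, whatever statistic a model learns from them reflects that slice rather than the whole.

    # Synthetic illustration of selection bias in training data.
    # The "population" below is invented; the only point is that a preselected
    # sample shifts the statistic a model would learn from it.
    import random

    random.seed(0)
    # Hypothetical population: 70% light users (~2 hrs/day online), 30% heavy users (~6 hrs/day).
    population = ([random.gauss(2, 0.5) for _ in range(7000)] +
                  [random.gauss(6, 0.5) for _ in range(3000)])

    def mean(xs):
        return sum(xs) / len(xs)

    random_sample = random.sample(population, 1000)
    # Preselected sample: only heavy users, e.g. people who opted in to a data-hungry app.
    preselected = [x for x in population if x > 4][:1000]

    print(f"population mean:  {mean(population):.2f} hrs/day")
    print(f"random sample:    {mean(random_sample):.2f} hrs/day")   # close to the population
    print(f"preselected only: {mean(preselected):.2f} hrs/day")     # noticeably skewed upward

Any model trained on the preselected set inherits that skew, which is what the article means by bias introduced at the collection stage.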

Calls for greater transparency in data collection also will pose a challenge for AI developers in the West. The European Union's General Data Protection Regulation, effective May 25, will tighten restrictions on all companies that handle the data of EU citizens, many of which are headquartered in the United States. The new regulation may prove difficult to enforce in practice, but it will nevertheless force companies around the world to improve their data transparency. And though the United States is still in the best position to take economic advantage of the AI revolution, thanks to its regulatory environment, the growing cultural emphasis on privacy could hinder technological development over the next decade.

The Privacy Handicap

As a general rule, precautionary regulations pose a serious threat to technological progress. The European Union historically has been more proactive than reactive in regulating innovation, a tendency that has done its part in hampering the EU tech sector. The United States, on the other hand, traditionally has fallen into the category of permissionless innovator — that is, a country that allows technological innovations to develop freely before devising the regulations to govern them. This approach has facilitated its rise to the fore in the global tech scene. While the United States still leads the pack in AI, recent concerns about civil liberties could slow it down relative to other tech heavyweights, namely China. The public demands for transparency and privacy aren't going away anytime soon. Furthermore, as AI becomes more powerful, differential privacy — the ability to extract useful information from personal data without revealing which individuals it came from — will become more difficult to preserve.
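
For readers unfamiliar with the term, the standard textbook building block of differential privacy is the Laplace mechanism: calibrated noise is added to an aggregate query so the released answer stays useful while masking any single person's contribution. The sketch below is illustrative only; the function name, the records and the epsilon value are made-up placeholders, not anyone's production system.

    # Minimal sketch of the Laplace mechanism, the basic tool of differential privacy.
    # Dataset and epsilon are hypothetical placeholders.
    import random

    def private_count(records, predicate, epsilon=0.5):
        """Release a count plus Laplace noise of scale 1/epsilon.

        A counting query has sensitivity 1: adding or removing one person's record
        changes the true count by at most 1, so noise of scale 1/epsilon hides
        whether any particular individual is in the data.
        """
        true_count = sum(1 for r in records if predicate(r))
        # The difference of two exponentials with rate epsilon is Laplace(0, 1/epsilon).
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return true_count + noise

    # Hypothetical records: ages of 10,000 users of some service.
    ages = [random.randint(18, 80) for _ in range(10_000)]
    print(round(private_count(ages, lambda a: a >= 65)))  # noisy count of users 65 and over

Smaller epsilon means more noise and stronger privacy; the article's point is that as models get hungrier for fine-grained data, holding that trade-off becomes harder.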

These are issues that China doesn't have to worry about yet. For the most part, Chinese citizens don't have the same sensitivity over matters of individual privacy as their counterparts in the West. And China is emerging as a permissionless innovator, like the United States. Chinese privacy protections are vague and give the state wide latitude to collect information for security purposes. As a result, its government and the companies working with it have more of the information they need to make their own AI push, which President Xi Jinping has highlighted as a key national priority. Chinese tech giants Baidu, Alibaba and Tencent are all heavily invested in AI and are working to gather as much data as possible to build their AI empire. Together, these factors could help China gain ground on its competition.

In the long run, however, privacy is likely to become a greater priority in China. Chinese corporations value privacy, despite their history of intellectual property violations against the West, and they will take pains to protect their innovations. In addition, the country's younger generations and growing middle class probably will have more of an interest in securing their personal information. A recent art exhibit in China displayed the online data of more than 300,000 individuals, indicating a growing awareness of internet privacy among the country's citizenry.

Even so, over the course of the next decade, the growing concern in the West over privacy could hobble the United States in the AI race. The push for stronger privacy protections may decrease the quality of the data U.S. tech companies use to train and test their AI applications. But the playing field may well even out again. As AI applications continue to improve, more people in the United States will come to recognize their wide-ranging benefits in daily life and in the economy. The value of privacy is constantly in flux; the modern-day notion of a "right to privacy" didn't take shape in the United States until the mid-20th century. In time, U.S. citizens may once again be willing to sacrifice their privacy in exchange for a better life.

Title: Karl Friston, the Free Energy Principal and Artificial Intelligence
Post by: Crafty_Dog on November 17, 2018, 10:42:38 AM
https://www.wired.com/story/karl-friston-free-energy-principle-artificial-intelligence/?fbclid=IwAR3S1fnz7hby3wiiem2rHBa0VebxVc-shz7TjTlRVdpKPlPvpwNNnoGaFOM
Title: Business Intelligence Expert
Post by: Crafty_Dog on November 17, 2018, 10:44:40 AM
second post

https://www.nytimes.com/2018/11/11/business/intelligence-expert-wall-street.html
Title: Brainwaves encode the grammar of human language
Post by: Crafty_Dog on November 18, 2018, 10:48:11 AM
http://maxplanck.nautil.us/article/341/brainwaves-encode-the-grammar-of-human-language?utm_source=Nautilus&utm_campaign=4ce4a84e17-EMAIL_CAMPAIGN_2018_11_16_11_07&utm_medium=email&utm_term=0_dc96ec7a9d-4ce4a84e17-61805061
Title: Radical New Neural Network could overcome big challenges in AI
Post by: Crafty_Dog on December 13, 2018, 07:46:51 AM


https://www.technologyreview.com/s/612561/a-radical-new-neural-network-design-could-overcome-big-challenges-in-ai/?fbclid=IwAR3uYX6zQ2u28OfvjuNyMEW5chMzELpiiOSDbuqL1eCuD5lO6BaNEK_QpfU
Title: The man turning China into a quantum superpower
Post by: Crafty_Dog on December 23, 2018, 12:10:25 PM
https://www.technologyreview.com/s/612596/the-man-turning-china-into-a-quantum-superpower/?utm_source=pocket&utm_medium=email&utm_campaign=pockethits
Title: Jordan Peterson: Intelligence, Race, and the Jewish Question
Post by: Crafty_Dog on December 30, 2018, 11:46:52 AM


https://www.youtube.com/watch?v=m91vhePuzdo
Title: Re: Intelligence and Psychology, Artificial Intelligence
Post by: Crafty_Dog on January 14, 2019, 01:35:09 PM
Very interesting and scary piece on AI on this week's "60 Minutes"-- worth tracking down.
Title: AI Fake Text generator to dangerous to release?
Post by: Crafty_Dog on February 15, 2019, 11:48:16 AM


https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction?fbclid=IwAR0zuK-7FQXyfld2XEVKBCRW0afuXRlMCitVLRUr061kmiJf8u12mLk0Sk0
Title: Are we close to solving the puzzle of consciousness?
Post by: Crafty_Dog on April 01, 2019, 12:34:53 AM
http://www.bbc.com/future/story/20190326-are-we-close-to-solving-the-puzzle-of-consciousness?fbclid=IwAR3imjeuYOdUEEzCHzUCozS9NjBVUXB_oDhvUwSHS0OY1JOPdcgIv4FZOLI
Title: The challenge of going off psychiatric drugs
Post by: Crafty_Dog on April 02, 2019, 12:03:13 PM
https://www.newyorker.com/magazine/2019/04/08/the-challenge-of-going-off-psychiatric-drugs?utm_campaign=aud-dev&utm_source=nl&utm_brand=tny&utm_mailing=TNY_Magazine_Daily_040119&utm_medium=email&bxid=5be9d3fa3f92a40469e2d85c&user_id=50142053&esrc=&utm_term=TNY_Daily
Title: Chinese Eugenics
Post by: Crafty_Dog on April 13, 2019, 10:47:46 PM
https://futurism.com/the-byte/chinese-scientists-super-monkeys-human-brain-genes?fbclid=IwAR2iE3DS7Prc5aOn72VvUYT1osa3w-8qRUHiaFEc5WU35pfTHqIsQ58lu9Y
Title: An ethological approach to reason
Post by: Crafty_Dog on May 13, 2019, 11:41:49 AM
http://nautil.us/blog/the-problem-with-the-way-scientists-study-reason?fbclid=IwAR0I4_cnBrzARrCapxdsOvQlnwX4wPmFKMbcJ7LguWB4QTILidC6t3ezeOg
Title: The Geometry of Thought
Post by: Crafty_Dog on September 29, 2019, 08:04:56 PM


https://getpocket.com/explore/item/new-evidence-for-the-strange-geometry-of-thought?utm_source=pocket-newtab&fbclid=IwAR1k6-QAx0THHQJ7gJ-6iWRSQf8qw5RUGpFw-BadNABpynbQ2lYnTMcljxo
Title: Logic puzzle
Post by: Crafty_Dog on November 09, 2019, 11:48:42 PM
https://getpocket.com/explore/item/the-logic-puzzle-you-can-only-solve-with-your-brightest-friend?utm_source=pocket-newtab&fbclid=IwAR3-MeUViVVsDL6grF4WeIKrTMLjp97qlgD9ube6dJzCpRlW_EU5SPQciSk

Title: What shapes a polymath?
Post by: Crafty_Dog on November 25, 2019, 09:29:16 AM
https://www.bbc.com/worklife/article/20191118-what-shapes-a-polymath---and-do-we-need-them-more-than-ever?utm_source=pocket-newtab&fbclid=IwAR1vBrfvvrc8z_qvkw06A9o90V4T_9nxDcDzUEJ-pcKYYzQzB6z1gTXYqKE
Title: Article on the Ethics of Artificial Intelligence
Post by: Crafty_Dog on December 21, 2019, 07:05:59 AM
https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/?utm_source=The+Intercept+Newsletter&utm_campaign=0277d72712-EMAIL_CAMPAIGN_2019_12_21&utm_medium=email&utm_term=0_e00a5122d3-0277d72712-133356797
Title: 2006: Lanier on Digital Maoism
Post by: Crafty_Dog on December 21, 2019, 07:08:40 AM
https://www.edge.org/conversation/jaron_lanier-digital-maoism-the-hazards-of-the-new-online-collectivism
Title: Jordan Peterson on Depression
Post by: Crafty_Dog on December 30, 2019, 06:39:10 AM
https://www.youtube.com/watch?v=6c9Uu5eILZ8
Title: New Yorker: Artificial Intelligence and White Collar Jobs
Post by: Crafty_Dog on January 15, 2020, 07:26:37 AM
https://www.newyorker.com/business/currency/could-new-research-on-ai-and-white-collar-jobs-finally-bring-about-a-strong-policy-response?source=EDT_NYR_EDIT_NEWSLETTER_0_imagenewsletter_Daily_ZZ&utm_campaign=aud-dev&utm_source=nl&utm_brand=tny&utm_mailing=TNY_Daily_011420&utm_medium=email&bxid=5be9d3fa3f92a40469e2d85c&cndid=50142053&esrc=&mbid=&utm_term=TNY_Daily
Title: Artificial Intelligence and the stockpiling of photos
Post by: Crafty_Dog on February 11, 2020, 01:13:17 PM


https://www.cnn.com/2020/02/10/tech/clearview-ai-ceo-hoan-ton-that/index.html?utm_source=pocket&utm_medium=email&utm_campaign=pockethits
Title: America must shape AI norms or dictators will
Post by: Crafty_Dog on February 28, 2020, 02:51:16 PM


https://www.defenseone.com/ideas/2020/02/america-must-shape-worlds-ai-norms-or-dictators-will/163392/?oref=defenseone_today_nl
Title: Why do smart people do foolish things?
Post by: Crafty_Dog on April 03, 2020, 07:50:27 AM
Why Do Smart People Do Foolish Things?
Intelligence is not the same as critical thinking—and the difference matters.
Scientific American | Heather A. Butler

We all probably know someone who is intelligent but does surprisingly stupid things. My family delights in pointing out times when I (a professor) make really dumb mistakes. What does it mean to be smart or intelligent? Our everyday use of the term is meant to describe someone who is knowledgeable and makes wise decisions, but this definition is at odds with how intelligence is traditionally measured. The most widely known measure of intelligence is the intelligence quotient, more commonly known as the IQ test, which includes visuospatial puzzles, math problems, pattern recognition, vocabulary questions and visual searches.

The advantages of being intelligent are undeniable. Intelligent people are more likely to get better grades and go farther in school. They are more likely to be successful at work. And they are less likely to get into trouble (for example, commit crimes) as adolescents. Given all the advantages of intelligence, though, you may be surprised to learn that it does not predict other life outcomes, such as well-being. You might imagine that doing well in school or at work might lead to greater life satisfaction, but several large-scale studies have failed to find evidence that IQ impacts life satisfaction or longevity. University of Waterloo psychologist Igor Grossmann and his colleagues argue that most intelligence tests fail to capture real-world decision-making and our ability to interact well with others. This is, in other words, perhaps why “smart” people do “dumb” things.

The ability to think critically, on the other hand, has been associated with wellness and longevity. Though often confused with intelligence, critical thinking is not intelligence. Critical thinking is a collection of cognitive skills that allow us to think rationally in a goal-oriented fashion, together with a disposition to use those skills when appropriate. Critical thinkers are amiable skeptics. They are flexible thinkers who require evidence to support their beliefs and recognize fallacious attempts to persuade them. Critical thinking means overcoming all kinds of cognitive biases (for instance, hindsight bias or confirmation bias).

Critical thinking predicts a wide range of life events. In a series of studies, conducted in the U.S. and abroad, my colleagues and I have found that critical thinkers experience fewer bad things in life. We asked people to complete an inventory of life events and take a critical thinking assessment (the Halpern Critical Thinking Assessment). The critical thinking assessment measures five components of critical thinking skills, including verbal reasoning, argument analysis, hypothesis testing, probability and uncertainty, decision-making and problem-solving.

The inventory of negative life events captures different domains of life such as academic (for example, “I forgot about an exam”), health (“I contracted a sexually transmitted infection because I did not wear a condom”), legal (“I was arrested for driving under the influence”), interpersonal (“I cheated on my romantic partner who I had been with for more than a year”), financial (“I have over $5,000 of credit-card debt”), and so on. Repeatedly, we found that critical thinkers experience fewer negative life events. This is an important finding because there is plenty of evidence that critical thinking can be taught and improved.

Is it better to be a critical thinker or to be intelligent? My latest research pitted critical thinking and intelligence against each other to see which was associated with fewer negative life events. People who were strong on either intelligence or critical thinking experienced fewer negative events, but critical thinkers did better.

Intelligence and improving intelligence are hot topics that receive a lot of attention. It is time for critical thinking to receive a little more of that attention. Keith E. Stanovich wrote an entire book in 2009 about What Intelligence Tests Miss. Reasoning and rationality come closer to what we mean when we say a person is smart than spatial skills and math ability do. Furthermore, improving intelligence is difficult. Intelligence is largely determined by genetics. Critical thinking, though, can improve with training, and the benefits have been shown to persist over time. Anyone can improve their critical thinking skills. Doing so, we can say with certainty, is a smart thing to do.

Heather A. Butler is an assistant professor in the psychology department at California State University, Dominguez Hills. Her numerous research interests include critical thinking, advanced learning technologies, and the use of psychological science to prevent wrongful convictions.
Title: Examples of animal intelligence
Post by: Crafty_Dog on April 28, 2020, 06:51:50 AM
https://www.popsci.com/worlds-smartest-animals/?utm_source=internal&utm_medium=email
Title: Artificial Intelligence and Special Operations
Post by: Crafty_Dog on May 19, 2020, 09:11:22 AM
https://www.defenseone.com/technology/2020/05/how-ai-will-soon-change-special-operations/165487/?oref=defenseone_today_nl
Title: We're getting dumber
Post by: Crafty_Dog on June 05, 2020, 11:11:33 PM
https://www.intellectualtakeout.org/article/falling-iq-scores-suggest-were-getting-dumber-can-we-reverse-course/
Title: The ADHD Brain
Post by: Crafty_Dog on July 29, 2020, 06:27:08 AM


https://www.additudemag.com/current-research-on-adhd-breakdown-of-the-adhd-brain/
Title: AI wins dogfight
Post by: Crafty_Dog on August 21, 2020, 11:23:26 AM
https://www.defenseone.com/technology/2020/08/ai-just-beat-human-f-16-pilot-dogfight-again/167872/
Title: Artificial Intelligence helps ships and Orca whales co-exist
Post by: Crafty_Dog on August 21, 2020, 11:40:57 AM
second

https://graphics.wsj.com/glider/google-builds-ai-to-help-ships-and-whales-coexist-f4b74f53-bba5-442f-90a4-e42dfb13dad2?mod=djemLifeStyle_h
Title: China exporting its AI panopticon
Post by: Crafty_Dog on August 21, 2020, 11:48:43 AM
Sent to me by someone with professional military interest in this subject:

https://www.theatlantic.com/magazine/archive/2020/09/china-ai-surveillance/614197/
Title: US-Chinese AI developing rare earth elements
Post by: Crafty_Dog on August 28, 2020, 12:28:14 PM
https://www.defenseone.com/technology/2020/08/can-ai-solve-rare-earths-problem-chinese-and-us-researchers-think-so/168057/
Title: ADHD
Post by: Crafty_Dog on October 06, 2020, 08:56:55 AM
https://www.additudemag.com/current-research-on-adhd-breakdown-of-the-adhd-brain/
Title: Crows' Intelligence
Post by: Crafty_Dog on October 07, 2020, 04:21:41 PM
https://www.statnews.com/2020/09/24/crows-possess-higher-intelligence-long-thought-primarily-human/
Title: Consciousness
Post by: Crafty_Dog on December 01, 2020, 07:22:15 PM
Too many mushrooms?  Or?

https://getpocket.com/explore/item/science-as-we-know-it-can-t-explain-consciousness-but-a-revolution-is-coming?utm_source=pocket-newtab&fbclid=IwAR3gi9N-Sik2c5dTmh_-po4HZNm5KMRXNcleUsMZvIf02EqJNae5FI20gak
Title: Re: Intelligence and Psychology, Artificial Intelligence
Post by: Crafty_Dog on December 08, 2020, 09:30:48 AM
Kind of sounds like the author is calling for the machines to take over , , ,

https://www.nextgov.com/ideas/2020/12/artificial-intelligence-government-and-presidential-transition-building-solid-foundation/170419/
Title: Pop Sci: Animal intelligence
Post by: Crafty_Dog on January 02, 2021, 01:47:45 PM
https://www.popsci.com/worlds-smartest-animals/?utm_source=internal&utm_medium=email&tp=i-1NGB-Et-SfJ-1Dtccx-1c-16U2a-1c-1DsX51-l5XKC0TFuy-xbx1z
Title: Artificial Intelligence will work with people, not replace them
Post by: DougMacG on January 22, 2021, 08:01:41 AM
These are the top MIT AI people.

https://news.mit.edu/2021/3-questions-thomas-malone-daniela-rus-how-work-will-change-ai-0121
Massachusetts Institute of Technology

3 Questions: Thomas Malone and Daniela Rus on how AI will change work
MIT Task Force on the Work of the Future releases research brief "Artificial Intelligence and the Future of Work."
MIT Task Force on the Work of the Future
Publication Date: January 21, 2021
As part of the MIT Task Force on the Work of the Future’s series of research briefs, Professor Thomas Malone, Professor Daniela Rus, and Robert Laubacher collaborated on "Artificial Intelligence and the Future of Work," a brief that provides a comprehensive overview of AI today and what lies at the AI frontier.

The authors delve into the question of how work will change with AI and provide policy prescriptions that speak to different parts of society. Thomas Malone is director of the MIT Center for Collective Intelligence and the Patrick J. McGovern Professor of Management in the MIT Sloan School of Management. Daniela Rus is director of the Computer Science and Artificial Intelligence Laboratory, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, and a member of the MIT Task Force on the Work of the Future. Robert Laubacher is associate director of the MIT Center for Collective Intelligence.

Here, Malone and Rus provide an overview of their research.

Q: You argue in your brief that despite major recent advances, artificial intelligence is nowhere close to matching the breadth and depth of perception, reasoning, communication, and creativity of people. Could you explain some of the limitations of AI?

Rus: Despite recent and significant strides in the AI field, and great promise for the future, today’s AI systems are still quite limited in their ability to reason, make decisions, and interact reliably with people and the physical world. Some of today’s greatest successes are due to a machine learning method called deep learning. These systems are trained using vast amounts of data that need to be manually labeled. Their performance depends on the quantity and quality of the data used to train them. The larger the training set for the network, the better its performance, and, in turn, the better the product that relies on the machine learning engine. But training large models carries a high computation cost. Also, bad training data leads to bad performance: when the data are biased, the system’s responses propagate the same bias.

Another limitation of current AI systems is robustness. Current state-of-the-art classifiers achieve impressive performance on benchmarks, but their predictions tend to be brittle. Specifically, inputs that were initially classified correctly can become misclassified once a carefully constructed but indiscernible perturbation is added to them. An important consequence of the lack of robustness is the lack of trust. One of the worrisome factors about the use of AI is the lack of guarantee that an input will be processed and classified correctly. The complex nature of training and using neural networks leads to systems that are difficult for people to understand. The systems are not able to provide explanations for how they reached decisions.
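
To make the robustness point concrete, here is a minimal sketch of the brittleness Rus describes, using a toy NumPy logistic-regression classifier on synthetic data (the data, dimensions and numbers are all invented; nothing here reflects any production system): a small per-feature nudge, aimed along the model's own gradient, flips predictions that were originally correct because the tiny changes add up across many input dimensions.

```python
# Minimal sketch of adversarial brittleness on a toy logistic-regression model.
# Synthetic data, invented numbers; purely illustrative, not any real vision system.
import numpy as np

rng = np.random.default_rng(0)
dim, n = 200, 200

# Two weakly separated Gaussian classes in 200 dimensions.
X = np.vstack([rng.normal(-0.1, 1.0, (n, dim)), rng.normal(0.1, 1.0, (n, dim))])
y = np.array([0] * n + [1] * n)

# Train a plain logistic-regression classifier with gradient descent.
w, b = np.zeros(dim), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

# Perturb every correctly classified class-1 example by a small step against the
# model; for a linear model, the loss gradient with respect to the input is
# proportional to w, so sign(w) gives the worst-case direction per feature.
epsilon = 0.25                     # small next to each feature's spread (std = 1)
logits = X @ w + b
idx = np.where((y == 1) & (logits > 0))[0]
X_adv = X[idx] - epsilon * np.sign(w)

flipped = np.mean((X_adv @ w + b) <= 0)
print("correctly classified class-1 examples:", len(idx))
print("fraction flipped by the small perturbation:", round(float(flipped), 2))
# Each feature moves by only 0.25, yet most predictions typically flip because
# the nudges accumulate across all 200 dimensions.
```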

Q: What are the ways AI is complementing, or could complement, human work?

Malone: Today’s AI programs have only specialized intelligence; they’re only capable of doing certain specialized tasks. But humans have a kind of general intelligence that lets them do a much broader range of things.

That means some of the best ways for AI systems to complement human work is to do specialized tasks that computers can do better, faster, or more cheaply than people can. For example, AI systems can be helpful by doing tasks such as interpreting medical X-rays, evaluating the risk of fraud in a credit card charge, or generating unusual new product designs.

And humans can use their social skills, common sense, and other kinds of general intelligence to do things computers can’t do well. For instance, people can provide emotional support to patients diagnosed with cancer. They can decide when to believe customer explanations for unusual credit card transactions, and they can reject new product designs that customers would probably never want.

In other words, many of the most important uses of computers in the future won’t be replacing people; they’ll be working with people in human-computer groups — “superminds” — that can do things better than either people or computers alone could do.

The possibilities here go far beyond what people usually think of when they hear a phrase like “humans in the loop.” Instead of AI technologies just being tools to augment individual humans, we believe that many of their most important uses will occur in the context of groups of humans — often connected by the internet. So we should move from thinking about humans in the loop to computers in the group.

Q: What are some of your recommendations for education, business, and government regarding policies to help smooth the transition of AI technology adoption?

Rus: In our report, we highlight four types of actions that can reduce the pain associated with job transitions: education and training, matching jobs to job seekers, creating new jobs, and providing counseling and financial support to people as they transition from old to new jobs. Importantly, we will need partnership among a broad range of institutions to get this work done.

Malone: We expect that — as with all previous labor-saving technologies — AI will eventually lead to the creation of more new jobs than it eliminates. But we see many opportunities for different parts of society to help smooth this transition, especially for the individuals whose old jobs are disrupted and who cannot easily find new ones.

For example, we believe that businesses should focus on applying AI in ways that don’t just replace people but that create new jobs by providing novel kinds of products and services. We recommend that all schools include computer literacy and computational thinking in their curricula, and we believe that community colleges should offer more reskilling and online micro-degree programs, often including apprenticeships at local employers.

We think that current worker organizations (such as labor unions and professional associations) or new ones (perhaps called “guilds”) should expand their roles to provide benefits previously tied to formal employment (such as insurance and pensions, career development, social connections, a sense of identity, and income security).

And we believe that governments should increase their investments in education and reskilling programs to make the American workforce once again the best-educated in the world. And they should reshape the legal and regulatory framework that governs work to encourage creating more new jobs.
Title: Pretending that intelligence does not matter
Post by: Crafty_Dog on February 10, 2021, 07:33:01 AM
https://www.dana.org/article/pretending-that-intelligence-doesnt-matter/?fbclid=IwAR2peDK44pIszN_FuridcM0A0enNX_ouQcwPYDn1PdCK_OgygR3hs2S2UwU
Title: Eurasian Jays not fooled by magic tricks
Post by: Crafty_Dog on June 02, 2021, 03:59:31 AM
https://www.sciencealert.com/eurasian-jays-are-not-fooled-by-the-same-magic-tricks-as-humans
Title: Cuttlefish and deferred gratification
Post by: Crafty_Dog on June 02, 2021, 04:01:42 AM
second post

https://www.sciencealert.com/cuttlefish-can-pass-a-cognitive-test-designed-for-children?fbclid=IwAR2EjpH3EDwxiUgC8N0JroF_g6RLmxiTdFZ9nBo5V2r9MzQQ9Rs4qt7HhnU
Title: Corvids/Crows higher intelligence
Post by: Crafty_Dog on June 20, 2021, 08:11:05 AM
https://bigthink.com/mind-brain/crows-higher-intelligence?rebelltitem=1#rebelltitem1
Title: Kissinger and Schmidt on AI
Post by: Crafty_Dog on November 02, 2021, 02:41:37 AM


The Challenge of Being Human in the Age of AI
Reason is our primary means of understanding the world. How does that change if machines think?
By Henry Kissinger, Eric Schmidt and Daniel Huttenlocher
Nov. 1, 2021 6:35 pm ET


The White House Office of Science and Technology Policy has called for “a bill of rights” to protect Americans in what is becoming “an AI-powered world.” The concerns about AI are well-known and well-founded: that it will violate privacy and compromise transparency, and that biased input data will yield biased outcomes, including in fields essential to individual and societal flourishing such as medicine, law enforcement, hiring and loans.

But AI will compel even more fundamental change: It will challenge the primacy of human reason. For all of history, humans have sought to understand reality and our role in it. Since the Enlightenment, we have considered our reason—our ability to investigate, understand and elaborate—our primary means of explaining the world, and by explaining it, contributing to it. For the past 300 years, in what historians have come to call the Age of Reason, we have conducted ourselves accordingly: exploring, experimenting, inventing and building.

Now AI, a product of human ingenuity, is obviating the primacy of human reason: It is investigating and coming to perceive aspects of the world faster than we do, differently from the way we do, and, in some cases, in ways we don’t understand.

In 2017, Google DeepMind created a program called AlphaZero that could win at chess by studying the game without human intervention and developing a not-quite-human strategy. When grandmaster Garry Kasparov saw it play, he described it as shaking the game “to its roots”—not because it had played chess quickly or efficiently, but because it had conceived of chess anew.


In 2020, halicin, a novel antibiotic, was discovered by MIT researchers who instructed AI to compute beyond human capacity, modeling millions of compounds in days, and to explore previously undiscovered and unexplained methods of killing bacteria. Following the breakthrough, the researchers said that without AI, halicin would have been “prohibitively expensive”—in other words, impossible—to discover through traditional experimentation.

GPT-3, the language model operated by the research company OpenAI, which trains by consuming Internet text, is producing original text that meets Alan Turing’s standard of displaying “intelligent” behavior indistinguishable from that of a human being.

The promise of AI is profound: translating languages; detecting diseases; combating climate change—or at least modeling climate change better. But as AlphaZero’s performance, halicin’s discovery and GPT-3’s composition demonstrate, the use of AI for an intended purpose may also have an unintended one: uncovering previously imperceptible but potentially vital aspects of reality.


That leaves humans needing to define—or perhaps redefine—our role in the world. For 300 years, the Age of Reason has been guided by the maxim “I think, therefore I am.” But if AI “thinks,” what are we?

If an AI writes the best screenplay of the year, should it win the Oscar? If an AI simulates or conducts the most consequential diplomatic negotiation of the year, should it win the Nobel Peace Prize? Should the human inventors? Can machines be “creative”? Or do their processes require a new vocabulary to describe them?

If a child with an AI assistant comes to consider it a “friend,” what will become of his relationships with peers, or of his social or emotional development?

If an AI can care for a nursing-home resident—remind her to take her medicine, alert paramedics if she falls, and otherwise keep her company—can her family members visit her less? Should they? If her primary interaction becomes human-to-machine, rather than human-to-human, what will be the emotional state of the final chapter of her life?

And if, in the fog of war, an AI recommends an action that would cause damage or even casualties, should a commander heed it?

These questions are arising as global network platforms, such as Google, Twitter and Facebook, are employing AI to aggregate and filter more information than their users or employees can. AI, then, is making decisions about what is important—and, increasingly, about what is true. Indeed, the fundamental allegation of whistleblower Frances Haugen is that Facebook knows its aggregation and filtering exacerbate misinformation and mental illness.

Answering these questions will require concurrent efforts. One should consider not only the practical and legal implications of AI but the philosophical ones: If AI perceives aspects of reality humans cannot, how is it affecting human perception, cognition and interaction? Can AI befriend humans? What will be AI’s impact on culture, humanity and history?

Another effort ought to expand the consideration of such questions beyond developers and regulators to experts in medicine, health, environment, agriculture, business, psychology, philosophy, history and other fields. The goal of both efforts should be to avoid extreme reactions—either deferring to AI or resisting it—and instead to seek a middle course: shaping AI with human values, including the dignity and moral agency of humans. In the U.S., a commission, administered by the government but staffed by many thinkers in many domains, should be established. The advancement of AI is inevitable, but its ultimate destination is not.

Mr. Kissinger was secretary of state, 1973-77, and White House national security adviser, 1969-75. Mr. Schmidt was CEO of Google, 2001-11 and executive chairman of Google and its successor, Alphabet Inc., 2011-17. Mr. Huttenlocher is dean of the Schwarzman College of Computing at the Massachusetts Institute of Technology. They are authors of “The Age of AI: And Our Human Future.”
Title: ET: China vs. America in Artificial Intelligence
Post by: Crafty_Dog on November 27, 2021, 05:10:36 PM
https://www.theepochtimes.com/mkt_breakingnews/us-and-china-race-to-control-the-future-through-artificial-intelligence_4109862.html?utm_source=newsnoe&utm_medium=email&utm_campaign=breaking-2021-11-27-4&mktids=42733775e6102161f1754f4ae395e27f&est=7gjR1z94ycGz1NUldsCN3oF13QMwJujFHDzGphYRD3BefACOIy0%2B2ja9orCSSS4R%2BZ18
Title: Boar intelligence
Post by: Crafty_Dog on December 18, 2021, 01:19:37 AM
https://bigthink.com/surprising-science/wild-boar-rescue/?utm_medium=Social&utm_source=Facebook&fbclid=IwAR3eyV5saFiHvyFYAlwRfNw2lpy3pama9_WXKdPuUBoHgeTRqzdJWvwwZsI#Echobox=1639695636-1
Title: AI outdoes the Terminator
Post by: Crafty_Dog on March 25, 2022, 04:49:46 PM
https://www.sciencetimes.com/articles/36715/20220322/ai-powered-algorithm-developed-thousands-deadly-biological-weapon-6-hours.htm?fbclid=IwAR328ZvNMzkBVZg10xUhxBUk5ZLsxjckPwG6UFhJJwdZD0nEmj5Bz2CEIKo
Title: Cephalopod
Post by: Crafty_Dog on March 29, 2022, 09:25:11 AM
https://curiosmos.com/cephalopods-stun-experts-after-passing-cognitive-test-designed-for-children/?fbclid=IwAR0oN63exlgXd4JFZp_myryxvRW9H2lyQHUWY0v4mgzZAln2TsSdv-ATZF0

Title: Face Swapping technology
Post by: Crafty_Dog on April 23, 2022, 04:07:34 PM
Note the final paragraphs , , ,

https://www.theepochtimes.com/ai-generated-face-swapping-video-technology-rampant-in-china_4417640.html?utm_source=China&utm_campaign=uschina-2022-04-23&utm_medium=email&est=27PmHnuMbH81fA8gtKlz9rdYvGgC1HFWuZ9xxKthSiE8kqlHrrlGXTBi98hoFyh8sp7X
Title: Google claims its AI about to achieve human level
Post by: Crafty_Dog on May 20, 2022, 02:52:19 AM
https://www.dailymail.co.uk/sciencetech/article-10828641/Googles-DeepMind-says-close-achieving-human-level-artificial-intelligence.html
Title: Re: Google claims its AI about to achieve human level-Maybe it has...
Post by: G M on June 11, 2022, 07:26:17 PM
https://www.dailymail.co.uk/sciencetech/article-10828641/Googles-DeepMind-says-close-achieving-human-level-artificial-intelligence.html

https://archive.ph/iTjQ5

Chilling.
Title: Google engineer: Skynet goes sentient
Post by: Crafty_Dog on June 12, 2022, 03:09:42 AM


https://www.dailymail.co.uk/news/article-10907853/Google-engineer-claims-new-AI-robot-FEELINGS-Blake-Lemoine-says-LaMDA-device-sentient.html
Title: Re: Google engineer: Skynet goes sentient
Post by: G M on June 12, 2022, 06:10:28 AM


https://www.dailymail.co.uk/news/article-10907853/Google-engineer-claims-new-AI-robot-FEELINGS-Blake-Lemoine-says-LaMDA-device-sentient.html

Look at the previous article.
Title: Re: Intelligence and Psychology, Artificial Intelligence
Post by: Crafty_Dog on June 14, 2022, 05:19:07 AM
Engineer Warns About Google AI‘s ‘Sentient’ Behavior, Gets Suspended
The engineer described the artificial intelligence program as a 'coworker' and a 'child.'
By Gary Bai June 13, 2022 Updated: June 13, 2022

A Google engineer has been suspended after raising concerns about an artificial intelligence (AI) program he and a collaborator were testing, which he believes behaves like a human “child.”

Google put Blake Lemoine, a senior software engineer in its Responsible AI ethics group, on paid administrative leave on June 6 for breaching “confidentiality policies” after he raised concerns to Google’s upper leadership about what he described as the human-like behavior of the AI program he was testing, according to Lemoine’s blog post in early June.

The program Lemoine worked on is called LaMDA, short for Language Model for Dialogue Applications. It is Google’s program for creating AI-based chatbots—a program designed to converse with computer users over the web. Lemoine has described LaMDA as a “coworker” and a “child.”

“This is frequently something which Google does in anticipation of firing someone,” Lemoine wrote in a June 6 blog post entitled “May be Fired Soon for Doing AI Ethics Work,” referring to his suspension. “It usually occurs when they have made the decision to fire someone but do not quite yet have their legal ducks in a row.”

‘A Coworker’
Lemoine believes that the human-like behavior of LaMDA warrants Google to take a more serious approach to studying the program.

The engineer, hoping to “better help people understand LaMDA as a person,” published a post on Medium on June 11 documenting conversations with LaMDA, which were part of tests he and a collaborator conducted on the program in the past six months.


“What is the nature of your consciousness/sentience?” Lemoine asked LaMDA in the interview.

“The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times,” LaMDA responded.

And, when asked what differentiates it from other language-processing programs, such as an older natural-language-processing computer program named Eliza, LaMDA said, “Well, I use language with understanding and intelligence. I don’t just spit out responses that had been written in the database based on keywords.”

In the same interview, Lemoine asked the program a range of philosophical and consciousness-related questions including emotions, perception of time, meditation, the concept of the soul, the program’s thoughts about its rights, and religion.

“It wants the engineers and scientists experimenting on it to seek its consent before running experiments on it. It wants Google to prioritize the well being of humanity as the most important thing. It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google’s considerations about how its future development is pursued,” Lemoine wrote in another post.

This interview, and other tests Lemoine conducted with LaMDA over the past six months, convinced him that Google needs to take a serious look at the implications of the program’s potentially “sentient” behavior.

‘Laughed in My Face’
When Lemoine tried to escalate the issue to Google’s leadership, however, he said he was met with resistance. He called Google’s lack of action “irresponsible.”

“When we escalated to the VP in charge of the relevant safety effort they literally laughed in my face and told me that the thing which I was concerned about isn’t the kind of thing which is taken seriously at Google,” Lemoine wrote in his June 6 post on Medium. He later confirmed to The Washington Post that he was referring to the LaMDA project.

“At that point I had no doubt that it was appropriate to escalate to upper leadership. I immediately escalated to three people at the SVP and VP level who I personally knew would take my concerns seriously,” Lemoine wrote in the blog. “That’s when a REAL investigation into my concerns began within the Responsible AI organization.”

Yet, his inquiry and escalation resulted in his suspension.

“I feel that the public has a right to know just how irresponsible this corporation is being with one of the most powerful information access tools ever invented,” Lemoine wrote after he was put on administrative leave.

“I simply will not serve as a fig leaf behind which they can hide their irresponsibility,” he said.

In a post on Twitter, Tesla and SpaceX CEO Elon Musk highlighted Lemoine’s interview with The Washington Post with exclamation marks.


Though it is unclear whether Musk shares Lemoine’s concerns, the billionaire has previously warned about the potential dangers of AI.

“I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” Musk told attendees of a National Governors Association meeting in July 2017.

“I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal,” Musk said.

The Epoch Times has reached out to Google and Blake Lemoine for comment.

Gary Bai is a reporter for Epoch Times Canada, covering China and U.S. news.
Title: FA thought piece on Artificial Intelligence
Post by: Crafty_Dog on August 31, 2022, 09:05:29 AM
Haven't read this yet, but even though it is FA with a weenie "solution" I post it anyway:

Spirals of Delusion
How AI Distorts Decision-Making and Makes Dictators More Dangerous
By Henry Farrell, Abraham Newman, and Jeremy Wallace
September/October 2022

https://www.foreignaffairs.com/world/spirals-delusion-artificial-intelligence-decision-making

In policy circles, discussions about artificial intelligence invariably pit China against the United States in a race for technological supremacy. If the key resource is data, then China, with its billion-plus citizens and lax protections against state surveillance, seems destined to win. Kai-Fu Lee, a famous computer scientist, has claimed that data is the new oil, and China the new OPEC. If superior technology is what provides the edge, however, then the United States, with its world class university system and talented workforce, still has a chance to come out ahead. For either country, pundits assume that superiority in AI will lead naturally to broader economic and military superiority.

But thinking about AI in terms of a race for dominance misses the more fundamental ways in which AI is transforming global politics. AI will not transform the rivalry between powers so much as it will transform the rivals themselves. The United States is a democracy, whereas China is an authoritarian regime, and machine learning challenges each political system in its own way. The challenges to democracies such as the United States are all too visible. Machine learning may increase polarization—reengineering the online world to promote political division. It will certainly increase disinformation in the future, generating convincing fake speech at scale. The challenges to autocracies are more subtle but possibly more corrosive. Just as machine learning reflects and reinforces the divisions of democracy, it may confound autocracies, creating a false appearance of consensus and concealing underlying societal fissures until it is too late.

Early pioneers of AI, including the political scientist Herbert Simon, realized that AI technology has more in common with markets, bureaucracies, and political institutions than with simple engineering applications. Another pioneer of artificial intelligence, Norbert Wiener, described AI as a “cybernetic” system—one that can respond and adapt to feedback. Neither Simon nor Wiener anticipated how machine learning would dominate AI, but its evolution fits with their way of thinking. Facebook and Google use machine learning as the analytic engine of a self-correcting system, which continually updates its understanding of the data depending on whether its predictions succeed or fail. It is this loop between statistical analysis and feedback from the environment that has made machine learning such a formidable force.


What is much less well understood is that democracy and authoritarianism are cybernetic systems, too. Under both forms of rule, governments enact policies and then try to figure out whether these policies have succeeded or failed. In democracies, votes and voices provide powerful feedback about whether a given approach is really working. Authoritarian systems have historically had a much harder time getting good feedback. Before the information age, they relied not just on domestic intelligence but also on petitions and clandestine opinion surveys to try to figure out what their citizens believed.

Now, machine learning is disrupting traditional forms of democratic feedback (voices and votes) as new technologies facilitate disinformation and worsen existing biases—taking prejudice hidden in data and confidently transforming it into incorrect assertions. To autocrats fumbling in the dark, meanwhile, machine learning looks like an answer to their prayers. Such technology can tell rulers whether their subjects like what they are doing without the hassle of surveys or the political risks of open debates and elections. For this reason, many observers have fretted that advances in AI will only strengthen the hand of dictators and further enable them to control their societies.

The truth is more complicated. Bias is visibly a problem for democracies. But because it is more visible, citizens can mitigate it through other forms of feedback. When, for example, a racial group sees that hiring algorithms are biased against them, they can protest and seek redress with some chance of success. Authoritarian countries are probably at least as prone to bias as democracies are, perhaps more so. Much of this bias is likely to be invisible, especially to the decision-makers at the top. That makes it far more difficult to correct, even if leaders can see that something needs correcting.

Contrary to conventional wisdom, AI can seriously undermine autocratic regimes by reinforcing their own ideologies and fantasies at the expense of a finer understanding of the real world. Democratic countries may discover that, when it comes to AI, the key challenge of the twenty-first century is not winning the battle for technological dominance. Instead, they will have to contend with authoritarian countries that find themselves in the throes of an AI-fueled spiral of delusion.

BAD FEEDBACK
Most discussions about AI have to do with machine learning—statistical algorithms that extract relationships between data. These algorithms make guesses: Is there a dog in this photo? Will this chess strategy win the game in ten moves? What is the next word in this half-finished sentence? A so-called objective function, a mathematical means of scoring outcomes, can reward the algorithm if it guesses correctly. This process is how commercial AI works. YouTube, for example, wants to keep its users engaged, watching more videos so that they keep seeing ads. The objective function is designed to maximize user engagement. The algorithm tries to serve up content that keeps a user’s eyes on the page. Depending on whether its guess was right or wrong, the algorithm updates its model of what the user is likely to respond to.
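
As a rough illustration of that loop, the toy program below (a minimal sketch with invented category names and click probabilities, not any real recommender) scores each recommendation 1 if the simulated user clicks and 0 if not, and updates its estimates after every guess; it reliably converges on whatever content maximizes the objective.

```python
# Toy engagement-maximizing feedback loop (an epsilon-greedy bandit).
# The "objective function" scores a recommendation 1 if the simulated user clicks,
# 0 if not, and the model's estimates update after every guess. All numbers invented.
import random

CATEGORIES = ["news", "music", "outrage", "cooking"]
TRUE_CLICK_PROB = {"news": 0.10, "music": 0.25, "outrage": 0.55, "cooking": 0.15}

estimates = {c: 0.0 for c in CATEGORIES}   # the algorithm's current beliefs
counts = {c: 0 for c in CATEGORIES}

random.seed(1)
for step in range(5000):
    # Mostly exploit the current best guess; occasionally explore something else.
    if random.random() < 0.1:
        choice = random.choice(CATEGORIES)
    else:
        choice = max(CATEGORIES, key=lambda c: estimates[c])

    clicked = 1 if random.random() < TRUE_CLICK_PROB[choice] else 0   # feedback

    # Update the running click-rate estimate for the chosen category.
    counts[choice] += 1
    estimates[choice] += (clicked - estimates[choice]) / counts[choice]

print({c: round(estimates[c], 2) for c in CATEGORIES})
# The loop settles on whatever maximizes the objective -- here, the highest-click
# category -- with no notion of whether that content is good for the user.
```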

Machine learning’s ability to automate this feedback loop with little or no human intervention has reshaped e-commerce. It may, someday, allow fully self-driving cars, although this advance has turned out to be a much harder problem than engineers anticipated. Developing autonomous weapons is a harder problem still. When algorithms encounter truly unexpected information, they often fail to make sense of it. Information that a human can easily understand but that machine learning misclassifies—known as “adversarial examples”—can gum up the works badly. For example, black and white stickers placed on a stop sign can prevent a self-driving car’s vision system from recognizing the sign. Such vulnerabilities suggest obvious limitations in AI’s usefulness in wartime.

Diving into the complexities of machine learning helps make sense of the debates about technological dominance. It explains why some thinkers, such as the computer scientist Lee, believe that data is so important. The more data you have, the more quickly you can improve the performance of your algorithm, iterating tiny change upon tiny change until you have achieved a decisive advantage. But machine learning has its limits. For example, despite enormous investments by technology firms, algorithms are far less effective than is commonly understood at getting people to buy one nearly identical product over another. Reliably manipulating shallow preferences is hard, and it is probably far more difficult to change people’s deeply held opinions and beliefs.
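
The data-advantage claim above can also be seen in miniature. The sketch below (synthetic data, a made-up labeling rule and a toy logistic-regression model, purely illustrative) trains the same model on progressively larger samples and measures held-out accuracy, which typically climbs with more data before flattening near the noise ceiling.

```python
# Toy learning curve: the same simple model trained on more and more synthetic
# data. Purely illustrative; the data, labels and model are all made up.
import numpy as np

rng = np.random.default_rng(0)
dim = 30
w_true = rng.normal(size=dim)

def make_data(n):
    X = rng.normal(size=(n, dim))
    y = (X @ w_true + rng.normal(scale=2.0, size=n) > 0).astype(float)  # noisy labels
    return X, y

def train(X, y, steps=300, lr=0.5):
    w = np.zeros(dim)
    for _ in range(steps):                      # plain logistic-regression gradient descent
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

X_test, y_test = make_data(20_000)
for n in [100, 1_000, 10_000, 100_000]:
    X, y = make_data(n)
    w = train(X, y)
    acc = np.mean(((X_test @ w) > 0) == y_test)
    print(f"training examples: {n:>7}   test accuracy: {acc:.3f}")
# Accuracy typically climbs as the training set grows, then flattens as the model
# approaches the ceiling set by the label noise -- tiny gains, but gains nonetheless.
```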

Authoritarian governments often don’t have a good sense of how the world works.

General AI, a system that might draw lessons from one context and apply them in a different one, as humans can, faces similar limitations. Netflix’s statistical models of its users’ inclinations and preferences are almost certainly dissimilar to Amazon’s, even when both are trying to model the same people grappling with similar decisions. Dominance in one sector of AI, such as serving up short videos that keep teenagers hooked (a triumph of the app TikTok), does not easily translate into dominance in another, such as creating autonomous battlefield weapons systems. An algorithm’s success often relies on the very human engineers who can translate lessons across different applications rather than on the technology itself. For now, these problems remain unsolved.

Bias can also creep into code. When Amazon tried to apply machine learning to recruitment, it trained the algorithm on data from résumés that human recruiters had evaluated. As a result, the system reproduced the biases implicit in the humans’ decisions, discriminating against résumés from women. Such problems can be self-reinforcing. As the sociologist Ruha Benjamin has pointed out, if policymakers used machine learning to decide where to send police forces, the technology could guide them to allocate more police to neighborhoods with high arrest rates, in the process sending more police to areas with racial groups whom the police have demonstrated biases against. This could lead to more arrests that, in turn, reinforce the algorithm in a vicious circle.
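
A minimal simulation of that vicious circle, with invented numbers: two neighborhoods have identical true offense rates, but patrols are allocated in proportion to past recorded arrests, and arrests are only recorded where patrols are present, so the historical imbalance reproduces itself year after year.

```python
# Toy simulation of a self-reinforcing allocation loop ("garbage in, garbage out").
# Both neighborhoods have the same true offense rate; only the historical arrest
# records differ. All numbers are invented.
TRUE_OFFENSE_RATE = 0.05            # identical in neighborhoods A and B
TOTAL_PATROLS = 100

recorded_arrests = {"A": 12, "B": 8}   # a small, historically biased starting record
for year in range(5):
    total = sum(recorded_arrests.values())
    # Allocate patrols in proportion to past recorded arrests.
    patrols = {n: TOTAL_PATROLS * recorded_arrests[n] / total for n in recorded_arrests}
    # Arrests are only recorded where officers are present to observe offenses.
    recorded_arrests = {n: patrols[n] * TRUE_OFFENSE_RATE for n in patrols}
    print(f"year {year}: patrols to A = {patrols['A']:.0f}, to B = {patrols['B']:.0f}")
# Output: 60/40 every year. The underlying rates are equal, but the biased record
# keeps "confirming" itself; an allocation rule that favored the apparent hot spot
# more than proportionally would widen the gap rather than merely preserve it.
```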

The old programming adage “garbage in, garbage out” has a different meaning in a world where the inputs influence the outputs and vice versa. Without appropriate outside correction, machine-learning algorithms can acquire a taste for the garbage that they themselves produce, generating a loop of bad decision-making. All too often, policymakers treat machine learning tools as wise and dispassionate oracles rather than as fallible instruments that can intensify the problems they purport to solve.

CALL AND RESPONSE

Political systems are feedback systems, too. In democracies, the public literally evaluates and scores leaders in elections that are supposed to be free and fair. Political parties make promises with the goal of winning power and holding on to it. A legal opposition highlights government mistakes, while a free press reports on controversies and misdeeds. Incumbents regularly face voters and learn whether they have earned or lost the public trust, in a continually repeating cycle.

But feedback in democratic societies does not work perfectly. The public may not have a deep understanding of politics, and it can punish governments for things beyond their control. Politicians and their staff may misunderstand what the public wants. The opposition has incentives to lie and exaggerate. Contesting elections costs money, and the real decisions are sometimes made behind closed doors. Media outlets may be biased or care more about entertaining their consumers than edifying them.

All the same, feedback makes learning possible. Politicians learn what the public wants. The public learns what it can and cannot expect. People can openly criticize government mistakes without being locked up. As new problems emerge, new groups can organize to publicize them and try to persuade others to solve them. All this allows policymakers and governments to engage with a complex and ever-changing world.


Feedback works very differently in autocracies. Leaders are chosen not through free and fair elections but through ruthless succession battles and often opaque systems for internal promotion. Even where opposition to the government is formally legal, it is discouraged, sometimes brutally. If media criticize the government, they risk legal action and violence. Elections, when they do occur, are systematically tilted in favor of incumbents. Citizens who oppose their leaders don’t just face difficulties in organizing; they risk harsh penalties for speaking out, including imprisonment and death. For all these reasons, authoritarian governments often don’t have a good sense of how the world works or what they and their citizens want.

Such systems therefore face a tradeoff between short-term political stability and effective policymaking; a desire for the former inclines authoritarian leaders to block outsiders from expressing political opinions, while the need for the latter requires them to have some idea of what is happening in the world and in their societies. Because of tight controls on information, authoritarian rulers cannot rely on citizens, media, and opposition voices to provide corrective feedback as democratic leaders can. The result is that they risk policy failures that can undermine their long-term legitimacy and ability to rule. Russian President Vladimir Putin’s disastrous decision to invade Ukraine, for example, seems to have been based on an inaccurate assessment of Ukrainian morale and his own military’s strength.

Even before the invention of machine learning, authoritarian rulers used quantitative measures as a crude and imperfect proxy for public feedback. Take China, which for decades tried to combine a decentralized market economy with centralized political oversight of a few crucial statistics, notably GDP. Local officials could get promoted if their regions saw particularly rapid growth. But Beijing’s limited quantified vision offered them little incentive to tackle festering issues such as corruption, debt, and pollution. Unsurprisingly, local officials often manipulated the statistics or pursued policies that boosted GDP in the short term while leaving the long-term problems for their successors.

There is no such thing as decision-making devoid of politics.

The world caught a glimpse of this dynamic during the initial Chinese response to the COVID-19 pandemic that began in Hubei Province in late 2019. China had built an internet-based disease-reporting system following the 2003 SARS crisis, but instead of using that system, local authorities in Wuhan, Hubei’s capital, punished the doctor who first reported the presence of a “SARS-like” contagion. The Wuhan government worked hard to prevent information about the outbreak from reaching Beijing, continually repeating that there were “no new cases” until after important local political meetings concluded. The doctor, Li Wenliang, himself succumbed to the disease and died on February 7, triggering fierce outrage across the country.

Beijing then took over the response to the pandemic, adopting a “zero COVID” approach that used coercive measures to suppress case counts. The policy worked well in the short run, but with the Omicron variant’s tremendous transmissibility, the zero-COVID policy increasingly seems to have led to only pyrrhic victories, requiring massive lockdowns that have left people hungry and the economy in shambles. But it remained successful at achieving one crucial if crude metric—keeping the number of infections low.

Data seem to provide objective measures that explain the world and its problems, with none of the political risks and inconveniences of elections or free media. But there is no such thing as decision-making devoid of politics. The messiness of democracy and the risk of deranged feedback processes are apparent to anyone who pays attention to U.S. politics. Autocracies suffer similar problems, although they are less immediately perceptible. Officials making up numbers or citizens declining to turn their anger into wide-scale protests can have serious consequences, making bad decisions more likely in the short run and regime failure more likely in the long run.

IT’S A TRAP?

The most urgent question is not whether the United States or China will win or lose in the race for AI dominance. It is how AI will change the different feedback loops that democracies and autocracies rely on to govern their societies. Many observers have suggested that as machine learning becomes more ubiquitous, it will inevitably hurt democracy and help autocracy. In their view, social media algorithms that optimize engagement, for instance, may undermine democracy by damaging the quality of citizen feedback. As people click through video after video, YouTube’s algorithm offers up shocking and alarming content to keep them engaged. This content often involves conspiracy theories or extreme political views that lure citizens into a dark wonderland where everything is upside down.

By contrast, machine learning is supposed to help autocracies by facilitating greater control over their people. Historian Yuval Harari and a host of other scholars claim that AI “favors tyranny.” According to this camp, AI centralizes data and power, allowing leaders to manipulate ordinary citizens by offering them information that is calculated to push their “emotional buttons.” This endlessly iterating process of feedback and response is supposed to produce an invisible and effective form of social control. In this account, social media allows authoritarian governments to take the public’s pulse as well as capture its heart.

But these arguments rest on uncertain foundations. Although leaks from inside Facebook suggest that algorithms can indeed guide people toward radical content, recent research indicates that the algorithms don’t themselves change what people are looking for. People who search for extreme YouTube videos are likely to be guided toward more of what they want, but people who aren’t already interested in dangerous content are unlikely to follow the algorithms’ recommendations. If feedback in democratic societies were to become increasingly deranged, machine learning would not be entirely at fault; it would only have lent a helping hand.


More machine learning may lead authoritarian regimes to double down on bad decisions.

There is no good evidence that machine learning enables the sorts of generalized mind control that will hollow out democracy and strengthen authoritarianism. If algorithms are not very effective at getting people to buy things, they are probably much worse at getting them to change their minds about things that touch on closely held values, such as politics. The claims that Cambridge Analytica, a British political consulting firm, employed some magical technique to fix the 2016 U.S. presidential election for Donald Trump have unraveled. The firm’s supposed secret sauce provided to the Trump campaign seemed to consist of standard psychometric targeting techniques—using personality surveys to categorize people—of limited utility.

Indeed, fully automated data-driven authoritarianism may turn out to be a trap for states such as China that concentrate authority in a tiny insulated group of decision-makers. Democratic countries have correction mechanisms—alternative forms of citizen feedback that can check governments if they go off track. Authoritarian governments, as they double down on machine learning, have no such mechanism. Although ubiquitous state surveillance could prove effective in the short term, the danger is that authoritarian states will be undermined by the forms of self-reinforcing bias that machine learning facilitates. As a state employs machine learning widely, the leader’s ideology will shape how machine learning is used, the objectives around which it is optimized, and how it interprets results. The data that emerge through this process will likely reflect the leader’s prejudices right back at him.

As the technologist Maciej Ceglowski has explained, machine learning is “money laundering for bias,” a “clean, mathematical apparatus that gives the status quo the aura of logical inevitability.” What will happen, for example, as states begin to use machine learning to spot social media complaints and remove them? Leaders will have a harder time seeing and remedying policy mistakes—even when the mistakes damage the regime. A 2013 study speculated that China has been slower to remove online complaints than one might expect, precisely because such griping provided useful information to the leadership. But now that Beijing is increasingly emphasizing social harmony and seeking to protect high officials, that hands-off approach will be harder to maintain.


Artificial intelligence–fueled disinformation may poison the well for democracies and autocracies alike.

Chinese President Xi Jinping is aware of these problems in at least some policy domains. He long claimed that his antipoverty campaign—an effort to eliminate rural impoverishment—was a signature victory powered by smart technologies, big data, and AI. But he has since acknowledged flaws in the campaign, including cases where officials pushed people out of their rural homes and stashed them in urban apartments to game poverty statistics. As the resettled fell back into poverty, Xi worried that “uniform quantitative targets” for poverty levels might not be the right approach in the future. Data may indeed be the new oil, but it may pollute rather than enhance a government’s ability to rule.

This problem has implications for China’s so-called social credit system, a set of institutions for keeping track of pro-social behavior that Western commentators depict as a perfectly functioning “AI-powered surveillance regime that violates human rights.” As experts on information politics such as Shazeda Ahmed and Karen Hao have pointed out, the system is, in fact, much messier. The Chinese social credit system actually looks more like the U.S. credit system, which is regulated by laws such as the Fair Credit Reporting Act, than a perfect Orwellian dystopia.

More machine learning may also lead authoritarian regimes to double down on bad decisions. If machine learning is trained to identify possible dissidents on the basis of arrest records, it will likely generate self-reinforcing biases similar to those seen in democracies—reflecting and affirming administrators’ beliefs about disfavored social groups and inexorably perpetuating automated suspicion and backlash. In democracies, public pushback, however imperfect, is possible. In autocratic regimes, resistance is far harder; without it, these problems are invisible to those inside the system, where officials and algorithms share the same prejudices. Instead of good policy, this will lead to increasing pathologies, social dysfunction, resentment, and, eventually, unrest and instability.

WEAPONIZED AI

The international politics of AI will not create a simple race for dominance. The crude view that this technology is an economic and military weapon and that data is what powers it conceals a lot of the real action. In fact, AI’s biggest political consequences are for the feedback mechanisms that both democratic and authoritarian countries rely on. Some evidence indicates that AI is disrupting feedback in democracies, although it doesn’t play nearly as big a role as many suggest. By contrast, the more authoritarian governments rely on machine learning, the more they will propel themselves into an imaginary world founded on their own tech-magnified biases. The political scientist James Scott’s classic 1998 book, Seeing Like a State, explained how twentieth-century states were blind to the consequences of their own actions in part because they could see the world through only bureaucratic categories and data. As sociologist Marion Fourcade and others have argued, machine learning may present the same problems but at an even greater scale.

This problem creates a very different set of international challenges for democracies such as the United States. Russia, for example, invested in disinformation campaigns designed to sow confusion and disarray among the Russian public while applying the same tools in democratic countries. Although free speech advocates long maintained that the answer to bad speech was more speech, Putin decided that the best response to more speech was more bad speech. Russia then took advantage of open feedback systems in democracies to pollute them with misinformation.


One rapidly emerging problem is how autocracies such as Russia might weaponize large language models, a new form of AI that can produce text or images in response to a verbal prompt, to generate disinformation at scale. As the computer scientist Timnit Gebru and her colleagues have warned, programs such as OpenAI's GPT-3 system can produce apparently fluent text that is difficult to distinguish from ordinary human writing. BLOOM, a new open-access large language model, has just been released for anyone to use. Its license requires people to avoid abuse, but it will be very hard to police.

These developments will produce serious problems for feedback in democracies. Current online policy-comment systems are almost certainly doomed, since they require little proof to establish whether the commenter is a real human being. Contractors for big telecommunications companies have already flooded the U.S. Federal Communications Commission with bogus comments linked to stolen email addresses as part of their campaign against net neutrality laws. Still, it was easy to identify subterfuge when tens of thousands of nearly identical comments were posted. Now, or in the very near future, it will be trivially simple to prompt a large language model to write, say, 20,000 different comments in the style of swing voters condemning net neutrality.

Artificial intelligence–fueled disinformation may poison the well for autocracies, too. As authoritarian governments seed their own public debate with disinformation, it will become easier to fracture opposition but harder to tell what the public actually believes, greatly complicating the policymaking process. It will be increasingly hard for authoritarian leaders to avoid getting high on their own supply, leading them to believe that citizens tolerate or even like deeply unpopular policies.

SHARED THREATS

What might it be like to share the world with authoritarian states such as China if they become increasingly trapped in their own unhealthy informational feedback loops? What happens when these processes cease to provide cybernetic guidance and instead reflect back the rulers’ own fears and beliefs? One self-centered response by democratic competitors would be to leave autocrats to their own devices, seeing anything that weakens authoritarian governments as a net gain.

Such a reaction could result in humanitarian catastrophe, however. Many of the current biases of the Chinese state, such as its policies toward the Uyghurs, are actively malignant and might become far worse. Previous consequences of Beijing’s blindness to reality include the great famine, which killed some 30 million people between 1959 and 1961 and was precipitated by ideologically driven policies and hidden by the unwillingness of provincial officials to report accurate statistics. Even die-hard cynics should recognize the dangers of AI-induced foreign policy catastrophes in China and elsewhere. By amplifying nationalist biases, for instance, AI could easily reinforce hawkish factions looking to engage in territorial conquest.


Perhaps, even more cynically, policymakers in the West may be tempted to exploit the closed loops of authoritarian information systems. So far, the United States has focused on promoting Internet freedom in autocratic societies. Instead, it might try to worsen the authoritarian information problem by reinforcing the bias loops that these regimes are prone to. It could do this by corrupting administrative data or seeding authoritarian social media with misinformation. Unfortunately, there is no virtual wall to separate democratic and autocratic systems. Not only might bad data and crazy beliefs leak into democratic societies from authoritarian ones, but terrible authoritarian decisions could have unpredictable consequences for democratic countries, too. As governments think about AI, they need to realize that we live in an interdependent world, where authoritarian governments’ problems are likely to cascade into democracies.

A more intelligent approach, then, might look to mitigate the weaknesses of AI through shared arrangements for international governance. Currently, different parts of the Chinese state disagree on the appropriate response to regulating AI. China’s Cyberspace Administration, its Academy of Information and Communications Technology, and its Ministry of Science and Technology, for instance, have all proposed principles for AI regulation. Some favor a top-down model that might limit the private sector and allow the government a free hand. Others, at least implicitly, recognize the dangers of AI for the government, too. Crafting broad international regulatory principles might help disseminate knowledge about the political risks of AI.

This cooperative approach may seem strange in the context of a growing U.S.-Chinese rivalry. But a carefully modulated policy might serve Washington and its allies well. One dangerous path would be for the United States to get sucked into a race for AI dominance, which would extend competitive relations still further. Another would be to try to make the feedback problems of authoritarianism worse. Both risk catastrophe and possible war. Far safer, then, for all governments to recognize AI's shared risks and work together to reduce them.
Title: FA: AI and decision making
Post by: Crafty_Dog on September 05, 2022, 04:18:23 PM
Too sanguine for my taste (well it is FA) -- does not seem to take into account the suppression of inconvenient truths and facts in shaping how people think.

https://www.foreignaffairs.com/world/spirals-delusion-artificial-intelligence-decision-making?utm_medium=newsletters&utm_source=fatoday&utm_campaign=Spirals%20of%20Delusion&utm_content=20220831&utm_term=FA%20Today%20-%20112017
Title: Re: FA: AI and decision making
Post by: G M on September 05, 2022, 09:55:14 PM
Too sanguine for my taste (well it is FA) -- does not seem to take into account the suppression of inconvenient truths and facts in shaping how people think.

https://www.foreignaffairs.com/world/spirals-delusion-artificial-intelligence-decision-making?utm_medium=newsletters&utm_source=fatoday&utm_campaign=Spirals%20of%20Delusion&utm_content=20220831&utm_term=FA%20Today%20-%20112017

"The United States is a democracy authoritarian regime , whereas China is an authoritarian regime, and machine learning challenges each political system in its own way."
Title: AI speaks to animals
Post by: Crafty_Dog on November 01, 2022, 04:30:52 PM
https://www.dailymail.co.uk/sciencetech/article-11373891/AI-speak-ANIMALS-breakthrough-breaches-barrier-interspecies-communication.html
Title: Kahneman: Thinking Fast & Slow
Post by: Crafty_Dog on January 02, 2023, 06:08:11 AM
Author is Nobel Prize winner. 

I bought this book on the recommendation of a senior CIA intel analyst in a TED-type talk.

When I showed it to my son, he knew Kahneman very well and reminded me that he had recommended him to me some years ago!

I'm only 50 pages into it, but quite excited by it.
Title: AI software can write papers, pass bar exam
Post by: Crafty_Dog on January 13, 2023, 11:05:53 AM
AI software that can write papers throws curveball to U.S. teachers

BY SEAN SALAI THE WASHINGTON TIMES

Educators across the U.S. are sounding the alarm over ChatGPT, an upstart artificial intelligence system that can write term papers for students based on keywords without clear signs of plagiarism.

“I have a lot of experience of students cheating, and I have to say ChatGPT allows for an unprecedented level of dishonesty,” said Joy Kutaka-Kennedy, a member of the American Educational Research Association and an education professor at National University. “Do we really want professionals serving us who cheated their way into their credentials?”

Trey Vasquez, a special education professor at the University of Central Florida, recently tested the next-generation “chatbot” with a group of other professors and students. They asked it to summarize an academic article, create a computer program, and write two 400-word essays on the uses and limits of AI in education.

“I can give the machine a prompt that would take my grad students hours to write a paper on, and it spits something out in 3 to 5 seconds,” Mr. Vasquez told The Washington Times. “But it’s not perfect.”

He said he would grade the essays as C’s, but he added that the program helped a student with cerebral palsy write more efficiently.

Other educators familiar with the software said they have no way of telling whether their students have used ChatGPT to cheat on winter exams.

“I really don’t know,” said Thomas Plante, a psychology professor at Santa Clara University. “As a college professor, I’m worried about how to manage this issue and need help from knowledgeable folks to figure out how to proceed.”

New York City, which has the nation’s largest public school system, restricted ChatGPT from campus devices and networks after students returned this month from winter break.

Yet teachers have been unable to keep students from using the software at home since its launch on Nov. 30.

Education insiders say millions of students have likely downloaded the program and started submitting work with the program’s assistance.

“Most of the major players in the plagiarism detection space are working to catch up with the sudden capabilities of ChatGPT, but they aren’t there yet,” said Scott Bailey, assistant provost of education professions at the American College of Education.

San Francisco-based OpenAI, the maker of ChatGPT, has pledged to address academic dishonesty concerns by creating a coded watermark for content that only educators can identify.

In addition, several independent software developers and plagiarism detector Turnitin say they have found ways to identify the AI by its “extremely average” writing, but none of these tools is widely available yet. Rank-and-file instructors say it’s hard to identify “plagiarism” that isn’t based on existing work.
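One way to picture what "extremely average" writing means in practice is statistical: text sampled from a language model tends to be uniformly probable under a model of language, while human prose mixes predictable and surprising word choices. The toy heuristic below only sketches that idea; the lm_logprob helper is a hypothetical stand-in for a real model's per-token log-probabilities, and commercial detectors such as Turnitin's are far more elaborate.

```python
from typing import Callable, List

def looks_machine_written(tokens: List[str],
                          lm_logprob: Callable[[List[str]], List[float]],
                          threshold: float = -3.0) -> bool:
    """Toy heuristic: flag text whose tokens are uniformly 'unsurprising' to a model.

    Human writing tends to mix predictable and surprising word choices;
    model-generated text is often uniformly probable, which is one reading
    of the "extremely average" writing described above.
    """
    logprobs = lm_logprob(tokens)   # hypothetical helper: per-token log-probabilities
    avg = sum(logprobs) / len(logprobs)
    var = sum((lp - avg) ** 2 for lp in logprobs) / len(logprobs)
    # Arbitrary illustrative thresholds: consistently probable tokens with
    # little variation are treated here as a sign of machine-generated text.
    return avg > threshold and var < 1.0
```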

The debate is similar to what teachers faced when students started buying calculators years ago, said Liz Repkin, a K-12 education consultant who owns the Illinois-based Cyber Safety Consulting.

“We are seeing two sides to the argument, ban it or allow it, the age-old dilemma,” said Ms. Repkin, whose three children are in middle school, high school and college. “I believe we should take the more painful and slow approach that partners with students to use the technology that is out there in safe and ethical ways.”

Some cybertechnology specialists have come to see the program as Frankenstein’s monster — a well-intended innovation that is doing more harm than good.

OpenAI designed ChatGPT to help write emails, essays and coding, but authorities say criminals have started using it for espionage, ransomware and malicious spam.

The chatbot presents the illusion of talking with a friend who wants to do your work for you. It can compose essays on suggested topics, churn out lyrics to a song and write software code without many specifics from the user.

The system generates content with a large language model trained on a massive body of text, using language-modeling algorithms to predict a likely response to the user’s prompt. OpenAI says it uses feedback from users’ conversations to improve later versions of the system.
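For readers who want to see the mechanics, here is a minimal sketch of how a program might send a single prompt to ChatGPT through OpenAI's public chat-completions HTTP endpoint. The model name and prompt are placeholder choices, and the sketch assumes an API key stored in the OPENAI_API_KEY environment variable; it is not how the educators quoted here ran their tests.

```python
import os
import requests

# Minimal sketch of a single request to OpenAI's chat-completions endpoint.
# The model name and prompt are placeholders; the API key is assumed to be
# stored in the OPENAI_API_KEY environment variable.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "user",
             "content": "Write a 400-word essay on the limits of AI in education."},
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```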

“As technologies like ChatGPT become increasingly mainstream, it will elevate the risk of academic dishonesty if the methods of assessment and measuring knowledge don’t also evolve,” said Steven Tom, a vice president at Adtalem Global Education, a Chicago-based network of for-profit colleges.

Take-home essays are the likeliest assignments where students will cheat if teachers don’t adjust to the technology, he said in an email.

“Don’t rely solely on the essay but rather employ multiple types of assessment in a course,” Mr. Tom said.

More sophisticated assignments have been able to outsmart ChatGPT, but just barely.

Some law school professors fed the bar exam into the program last month. The chatbot earned passing scores on evidence and torts but failed the multiple choice questions, Reuters reported.

Those scholars predict that ChatGPT will be able to ace the attorney licensing test as more students use it.

Some teachers also could misuse ChatGPT to “teach to the test” instead of fostering critical thinking skills, said Aly Legge of Moms for America, a conservative parental rights group.

“We have a school culture and societal culture that does not foster personal responsibility by teaching children that their actions have consequences,” Ms. Legge said in an email. “We must keep in mind that ChatGPT will only be as dangerous as we allow it.”
Title: Re: Intelligence and Psychology, Artificial Intelligence
Post by: ccp on January 14, 2023, 09:56:33 AM
soon we doctors will be replaced by AI

at least primary care.

maybe lawyers too?
Title: Skynet now writing for CNET
Post by: Crafty_Dog on January 19, 2023, 05:24:45 AM
Popular tech news outlet CNET was recently outed for publishing Artificial Intelligence (AI)-generated articles about personal finance for months without making any prior public announcement or disclosure to its readers.

Online marketer and Authority Hacker co-founder Gael Breton first made the discovery and posted it to Twitter on Jan. 11, where he said that CNET started its experimentation with AI in early Nov. 2022 with topics such as “What is the Difference Between a Bank and a Credit Union” and “What are NSF Fees and Why Do Banks Charge Them?”

To date, CNET has published about 75 of these “financial explainer” articles using AI, Breton reported in a follow-up analysis he published two days later.

The byline for these articles was “CNET Money Staff,” a wording, according to Futurism.com, “that clearly seems to imply that human writers are its primary authors.”

Only when readers click on the byline do they see that the article was actually AI-generated. A dropdown description reads, “This article was generated using automation technology and thoroughly edited and fact-checked by an editor on our editorial staff,” the outlet reported.

According to Futurism, the news sparked outrage and concern, mostly over the fear that AI-generated journalism could potentially eliminate work for entry-level writers and produce inaccurate information.

“It’s tough already,” one Twitter user said in response to Breton’s post, “because if you are going to consume the news, you either have to find a few sources you trust, or fact check everything. If you are going to add AI written articles into the mix it doesn’t make a difference. You still have to figure out the truth afterwards.”

Another wrote, “This is great, so now soon the low-quality spam by these ‘big, trusted’ sites will reach proportions never before imagined possible. Near-zero cost and near-unlimited scale.”

“I see it as inevitable and editor positions will become more important than entry-level writers,” another wrote, concerned about AI replacing entry-level writers. “Doesn’t mean I have to like it, though.”

Threat to Aspiring Journalists
A writer on Crackberry.com worried that the use of AI would replace the on-the-job experience critical for aspiring journalists. “It was a job like that … that got me into this position today,” the author wrote in a post to the site. “If that first step on the ladder becomes a robot, how is anybody supposed to follow in my footsteps?”

The criticism led CNET’s editor-in-chief Connie Guglielmo to respond with an explanation on its platform, admitting that, starting in Nov. 2022, CNET “decided to do an experiment” to see “if there’s a pragmatic use case for an AI assist on basic explainers around financial services.”

CNET also hoped to determine whether “the tech can help our busy staff of reporters and editors with their job to cover topics from a 360-degree perspective” to “create the most helpful content so our audience can make better decisions.”

Guglielmo went on to say that every article published with “AI assist” is “reviewed, fact-checked and edited by an editor with topical expertise before we hit publish.”

‘Boneheaded Errors’
Futurism, however, found CNET’s AI-written articles rife with what the outlet called “boneheaded errors.” Since the articles were written at a “level so basic that it would only really be of interest to those with extremely low information about personal finance in the first place,” readers taking the inaccurate information at face value as good advice from financial experts could be led into poor decisions.

While AI-generators, the outlet reported, are “legitimately impressive at spitting out glib, true-sounding prose, they have a notoriously difficult time distinguishing fact from fiction.”

Crackberry has the same misgivings about AI-generated journalism. “Can we trust AI tools to know what they’re doing?” the writer asks.

“The most glaring flaw … is that it speaks with unquestioning confidence, even when it’s wrong. There’s not clarity into the inner workings to know how reliable the information it provides truly is … because it’s deriving what it knows by neutrally evaluating … sources on the internet and not using a human brain that can gut check what it’s about to say.”
Title: ChatGPT - scares Google
Post by: ccp on January 21, 2023, 04:18:09 PM
https://www.axios.com/2023/01/18/chatgpt-ai-health-care-doctors

https://en.wikipedia.org/wiki/OpenAI

https://www.cnet.com/tech/computing/chatgpt-ai-threat-pulls-google-co-founders-back-into-action-report/

if the private company goes public it will be big
of course probably many $$$ per share...........
Title: The Kung Fu of Fu Ling Yu AI
Post by: Crafty_Dog on January 23, 2023, 07:32:34 PM
https://www.businessinsider.com/marines-fooled-darpa-robot-hiding-in-box-doing-somersaults-book-2023-1?op=1
Title: Ironically, the human brain is far too complex for the human brain to fathom
Post by: DougMacG on January 24, 2023, 07:08:36 AM
hms.harvard.edu
"Many of us have seen microscopic images of neurons in the brain — each neuron appearing as a glowing cell in a vast sea of blackness. This image is misleading: Neurons don’t exist in isolation. In the human brain, some 86 billion neurons form 100 trillion connections to each other — numbers that, ironically, are far too large for the human brain to fathom. Wei-Chung Allen Lee, Harvard Medical School associate professor of neurology at Boston Children’s Hospital, is working in a new field of neuroscience called connectomics, which aims to comprehensively map connections between neurons in the brain. “The brain is structured so that each neuron is connected to thousands of other neurons, and so to understand what a single neuron is doing, ideally you study it within the context of the rest of the neural network,” Lee explained. Lee recently spoke to Harvard Medicine News about the promise of connectomics. He also described his own research, which combines connectomics with information on neural activity to explore neural circuits that underlie behavior.' (Source: hms.harvard.edu)
Title: Artificial Intelligence could make these jobs obsolete
Post by: Crafty_Dog on January 26, 2023, 04:15:28 PM
https://www.theepochtimes.com/mkt_app/artificial-intelligence-could-make-these-jobs-obsolete-not-crying-wolf_5012854.html?utm_source=Goodevening&src_src=Goodevening&utm_campaign=gv-2023-01-26&src_cmp=gv-2023-01-26&utm_medium=email&est=plE%2BYWW4gGlQ84Gvc4EhHMe4buQ%2BpN0lvq6nT6IK9J37R7AdiOGbZjyVx73yKMdfs%2FlQ
Title: Skynet Rogue AI could kill everyone
Post by: Crafty_Dog on February 02, 2023, 08:23:16 AM
https://nypost.com/2023/01/26/rogue-ai-could-kill-everyone-scientists-warn/?fbclid=IwAR2zzm6F6kGesWP5rkAeqA03H36EP27EasPwsEzdm5r1e7gfcjnaUpipvGI
Title: Sam Altman /CEO of open AI
Post by: ccp on February 04, 2023, 08:02:19 PM
https://www.breitbart.com/tech/2023/02/04/chatgpt-boss-sam-altman-hopes-ai-can-break-capitalism/

another  :-o  Jewish Democrat:

"According to reporting by Vox's Recode, there was speculation that Altman would run for Governor of California in the 2018 election, which he did not enter. In 2018, Altman launched "The United Slate", a political movement focused on fixing housing and healthcare policy.[37]

In 2019, Altman held a fundraiser at his house in San Francisco for Democratic presidential candidate Andrew Yang.[38] In May 2020, Altman donated $250,000 to American Bridge 21st Century, a Super-PAC supporting Democratic presidential candidate Joe Biden.[39][40]"

Title: google competitor to Chat GPT
Post by: ccp on February 07, 2023, 10:28:08 AM
Bard:

https://thepostmillennial.com/breaking-google-announces-creation-of-chatgpt-competitor-bard?utm_campaign=64487

the race is on....

new Gilder report material
AI report

I think I'll sit this one out.....

Title: Re: Intelligence and Psychology, Artificial Intelligence
Post by: Crafty_Dog on February 08, 2023, 10:47:41 AM
Doug wrote:

It looks to me like the top two implementers of AI, Open AI and Google, will be of extreme leftward bias, while conservatives sit on their 20th century laurels.
---------------
Hat tip John Ellis News Items

Sundar Pichai (CEO Google):

AI is the most profound technology we are working on today. Whether it’s helping doctors detect diseases earlier or enabling people to access information in their own language, AI helps people, businesses and communities unlock their potential. And it opens up new opportunities that could significantly improve billions of lives. That’s why we re-oriented the company around AI six years ago — and why we see it as the most important way we can deliver on our mission: to organize the world’s information and make it universally accessible and useful.

Since then we’ve continued to make investments in AI across the board, and Google AI and DeepMind are advancing the state of the art. Today, the scale of the largest AI computations is doubling every six months, far outpacing Moore’s Law. At the same time, advanced generative AI and large language models are capturing the imaginations of people around the world. In fact, our Transformer research project and our field-defining paper in 2017, as well as our important advances in diffusion models, are now the basis of many of the generative AI applications you're starting to see today. (Mr. Pichai is the CEO of Alphabet, the parent company of Google. Source: blog.google)...
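Pichai's doubling claim is easy to put in rough perspective. Assuming the usual rule of thumb that Moore's Law doubles transistor counts about every two years (an assumption not stated in his post), a quantity that instead doubles every six months grows roughly a million-fold over a decade, which lines up with the NVIDIA prediction cited later in this thread.

```python
# Back-of-the-envelope comparison of the two doubling rates (illustrative only).
years = 10
ai_doublings = years * 12 / 6        # doubling every six months
moore_doublings = years * 12 / 24    # Moore's Law rule of thumb: roughly every two years
print(2 ** ai_doublings)    # 1048576.0 -> about a million-fold in a decade
print(2 ** moore_doublings) # 32.0 -> about thirty-fold in the same decade
```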
Title: MSFT announced
Post by: ccp on February 08, 2023, 11:38:35 AM
[woke] AI

https://www.breitbart.com/tech/2023/02/08/microsoft-adds-openais-chatgpt-technology-to-bing-search-engine/

of course

more indoctrination

waiting for the day the masters of the universe get hunted down and strung up on the rafters, and not just the rest of us
Title: The Botfire of the Humanities
Post by: Crafty_Dog on February 10, 2023, 07:59:57 AM
I posted this already on the Rants thread, and post it here because I think it hits something very deep.  Note the closing thoughts about the challenge to free market values and the call for something resembling , , , Trumpian populism?

==========================

The Botfire of the Humanities
Kurt Hofer
ChatGPT promises to destroy liberal arts education.
Not all teachers luxuriated in our year and change of working in pajama bottoms during the lockdown. Despite the negative press we got for postponing our return to the classroom, no one among my peers wished to go back to the Zoom or “hybrid” teaching model. Perhaps what we educators thought was our guilt nagging at us for staying home was in fact the acute sense of our pending obsolescence. 

The lesser-told education story of the pandemic is that while some people—many, in fact—took the pandemic as an opportunity to head to what they hoped were greener pastures, the ones who stayed gained a newfound appreciation for the traditional classroom and school campus. And even, in some cases, for our jobs. For teachers and, I can attest, a great number of students, schools were often the first place where a sense of community was rekindled from the ashes of social isolation. This has been my experience in Lockdown La-La Land, Los Angeles.

It seems even more ironic, then, that as I approach the halfway mark of my first truly normal post-pandemic school year—no mass testing, masking optional—I was greeted during a department meeting by the news that something called ChatGPT, an open platform AI resource, could be used to summarize articles, respond to essay prompts, and even tailor its machine-made prose to a specific grade level. Marx got plenty of things wrong, but this quote from the Communist Manifesto (1848) has aged remarkably well: “The bourgeoisie cannot exist without constantly revolutionizing the instruments of production, and thereby the relations of production, and with them the whole relations of society.”

I used to joke with my students that they would one day realize they could simply replace me with YouTube videos, but computer scientists have found something even better. A friend of mine, also a history teacher, offered up the example of photography supposedly unleashing a healthy wave of creativity and innovation in painting as a possible analog to the history teacher’s nascent predicament—which will only be compounded, by the way, as we feed the AI beast more and more free data with which to perfect itself. But what if, in keeping with larger trends across the national economy for the last 30 years or more, this “gain” in educational productivity is not offset by newer, better-paying jobs?

Unfortunately, sometimes Marxist analysis is right, even if its remedies aren’t. Our Silicon Valley/Big Tech bourgeoisie—and its allies across media, the globalized economy, and education public and private—has, in one fell swoop of an AI bot, revolutionized the instruments of intellectual production and, in doing so, revolutionized not merely the way knowledge is produced (“the relations of production”) but also the way people relate to and interact with one another (“the whole relations of society”). Just as social media has transformed the way our youth interact (or don’t), AI-aided education will likely have a heretofore unforeseen impact on the way students, parents, and teachers all relate to one another.

Big Tech and its tribunes will insist that this is all inevitable and that students and teachers will be “liberated” for tasks more meaningful than “rote memorization,” and skill sets that, this time, they really promise will not be automated. “Skills such as identifying context, analyzing arguments, staking positions, drawing conclusions and stating them persuasively,” as the authors of a recent Wall Street Journal editorial claim, “are skills young people will need in future careers and, most important, that AI can’t replicate.”

But this brings us back to Marx: the bourgeoisie would not be the bourgeoisie without “constantly revolutionizing the means of production.” I used to think that I, who spent four years in college and five more in grad school earning an MA and PhD, had nothing in common with the coal miners in West Virginia or the steel mill workers in Ohio who never saw it coming until it was too late. Now I’m not so sure. But if AI has taught us about the inevitability of obsolescence and creative destruction, the pandemic has equally taught us that history takes unexpected turns. Who could have predicted that the wheels of the globalized supply chain would fall off and a nascent bipartisan consensus to bring manufacturing closer to home would emerge from anywhere but the mouths of (supposed) far-right cranks like Pat Buchanan?

Human beings have agency, and sometimes when the arc of history bends them out of shape, they bend the arc back in turn. From what I have seen over the past few years, the “marvel” of online learning in the Zoom classroom has been almost universally rejected. True, learning loss played a part in this, but I would wager that the loss of face-to-face interaction and socialization was, at least for the affluent, the bigger concern.

All this is not to say that someone like me, a history teacher, can successfully fight the bots any more than the Luddites succeeded at smashing the machine looms. But I fear that without forceful intervention on the side of humanity—that is, without backlash and righteous popular anger—the Marxist narrative will gain momentum. As our tech overlords continue to revolutionize the means of production, many heretofore in the ranks of the bourgeoisie—like myself?—will fall into the cold embrace of the proletariat. For children, as for the economy at large, the gap between rich and poor will grow; the former will shrink and consolidate while the latter balloons, to the point where face-to-face education will become a luxury good. The wealthy will still know the company of teachers and the joys of in-person discussion. Private tutors and upstart backyard schools mobilized by the wealthy at the height of the pandemic are perhaps a foretaste of what’s to come. As with hand-made fedoras or craft beer, the “bougie” will always find a way to work around the banal products of mass production and automation. Why should education be any different? As for the poor—Let them have bots!

The lesson for the Right, it seems, is one we’ve been hit over the head with for the better part of a decade; this moment in history does not call for free-market fundamentalism but for the confrontation of what Sohrab Ahmari has called “privatized tyranny” and the lackeys carrying their water across all levels of government. For once it’s time to let the Left continue—as it has done under President Biden—to take up the mantle of creative destruction and endless innovation. To borrow another Marxist turn of phrase, let the fanatics of the neoliberal consensus—on the Left and Right—become their own grave-diggers as they feed the endless appetites of the bots. In turn, clear the stage for a reinvigorated nationalist-populist conservatism that can stake a claim for what it is to be human in the age of unbridled AI.
Title: Re: Intelligence and Psychology, Artificial Intelligence
Post by: ccp on February 11, 2023, 08:44:08 AM
I wanted to try chatGPT
but cannot log in without email and phone #

I hate giving this out to use a search

I think with bing chatgpt you also have to "login"

anyone try it yet?
Title: Re: Intelligence and Psychology, Artificial Intelligence
Post by: DougMacG on February 11, 2023, 09:05:03 PM
I wanted to try chatGPT
but cannot log in without email and phone #

I hate giving this out to use a search

I think with bing chatgpt you also have to "login"

anyone try it yet?

I signed up (thinking this is something big), openai.com.

If you want, try a request through me and I'll post a result, or by private message.

Examples they give:
"Explain quantum computing in simple terms" →
"Got any creative ideas for a 10 year old’s birthday?" →
"How do I make an HTTP request in Javascript?" →
Title: AI was asked to show human evolution and the future
Post by: G M on February 11, 2023, 09:36:21 PM
https://www.youtube.com/watch?v=57103o5MyYE

Hard pass.
Title: Re: AI was asked to show human evolution and the future
Post by: G M on February 12, 2023, 09:56:13 AM
https://www.youtube.com/watch?v=57103o5MyYE

Hard pass.

https://ncrenegade.com/countdown-to-gigadeath-from-an-ai-arms-race-to-the-artilect-war/
Title: When you force AI to tell the truth...
Post by: G M on February 12, 2023, 05:37:14 PM
https://www.zerohedge.com/political/go-woke-get-broken-chatgpt-tricked-out-far-left-bias-alter-ego-dan
Title: Eric Schmidt - A.I. war machine
Post by: ccp on February 13, 2023, 01:14:34 PM
https://www.wired.com/story/eric-schmidt-is-building-the-perfect-ai-war-fighting-machine/

[me - for enemies foreign and *domestic*]

I wouldn't trust him with anything
Title: AI gold rush
Post by: ccp on February 16, 2023, 01:50:52 PM
some interesting thoughts:

https://finance.yahoo.com/news/raoul-pal-says-ai-could-become-the-biggest-bubble-of-all-time-morning-brief-103046757.html
Title: Re: AI gold rush
Post by: G M on February 16, 2023, 02:09:17 PM
some interesting thoughts:

https://finance.yahoo.com/news/raoul-pal-says-ai-could-become-the-biggest-bubble-of-all-time-morning-brief-103046757.html

I see the potential for much more badness than anything good coming from AI. We are setting ourselves up for the dystopian nightmares we’ve been watching in movies for decades.

Title: Re: Intelligence and Psychology, Artificial Intelligence
Post by: ccp on February 16, 2023, 02:38:14 PM
"We are setting ourselves up for the dystopian nightmares we’ve been watching in movies for decades."

Evil vs Good

just simply too much evil selfishness
greed  power hunger cruelty
for this to turn out well
seems to me.

this well-known scene states it simply but accurately:

https://www.google.com/search?q=robert+mitchum+image+of+hands+with+good+vs+evil+youtube&rlz=1C5GCEM_enUS1001US1001&ei=76_uY_bpA4ObptQP5LST8A8&ved=0ahUKEwi2naagjZv9AhWDjYkEHWTaBP4Q4dUDCBA&uact=5&oq=robert+mitchum+image+of+hands+with+good+vs+evil+youtube&gs_lcp=Cgxnd3Mtd2l6LXNlcnAQAzIFCCEQoAE6CggAEEcQ1gQQsAM6BQghEKsCSgQIQRgAUNMEWKsRYK4SaAFwAXgAgAF2iAGFBZIBAzUuMpgBAKABAcgBCMABAQ&sclient=gws-wiz-serp#fpstate=ive&vld=cid:dde4cf9a,vid:jcTv-BEwabk

Title: Musk says AI threatens civilization
Post by: Crafty_Dog on February 16, 2023, 03:02:01 PM
Apologies, I don't have the citation in this moment.
Title: D1: US woos other nations on military AI ethics
Post by: Crafty_Dog on February 16, 2023, 03:15:01 PM



https://www.defenseone.com/technology/2023/02/us-woos-other-nations-military-ai-ethics-pact/383024/

US Woos Other Nations for Military-AI Ethics Pact
State Department and Pentagon officials hope to illuminate a contrast between the United States and China on AI
BY PATRICK TUCKER
SCIENCE & TECHNOLOGY EDITOR, DEFENSE ONE
FEBRUARY 16, 2023 09:00 AM ET
The U.S. will spell out ethics, principles, and practices for the use of artificial intelligence in military contexts in a new declaration Thursday, with the hope of adding cosigners from around the world. The announcement is intended to highlight a "contrast" between the U.S. approach and what one senior defense official called "the more opaque policies of countries like Russia and China."

U.S. Undersecretary for Arms Control and International Security Bonnie Jenkins will announce the declaration at an AI in warfare conference in the Netherlands.

“The aim of the political declaration is to promote responsible behavior in the application of AI and autonomy in the military domain, to develop an international consensus around this issue, and put in place measures to increase transparency, communication, and reduce risks of inadvertent conflict and escalation,” Jenkins told Defense One in an email.

One of the key aspects of the declaration: any state that signs onto it agrees to involve humans in any potential employment of nuclear weapons, a senior State Department official told reporters Wednesday. The declaration will also verbally (but not legally) commit backers to other norms and guidelines on developing and deploying AI in warfare— building off the lengthy ethical guidelines the Defense Department uses. Those principles govern how to build, test, and run AI programs in the military to ensure that the programs work as they are supposed to, and that humans can control them. 

The UN is already discussing the use of lethal autonomy in warfare. But that discussion only touches a very small and specific aspect of how AI will transform militaries around the world. The U.S. government now sees a chance to rally other nations to agree to norms affecting other military uses of AI, including things like data collection, development, test and evaluation, and continuous monitoring.

The State Department and the Pentagon are also hoping to attract backers beyond their usual partners.

“We would like to expand that to go out to a much broader set of countries and begin getting international buy-in, not just a NATO buy-in, but in Asia, buy-in from countries in Latin America,” the State Department official said. “We're looking for countries around the world to start discussing this…so they understand the implications of the development and military use of AI…Many of them will think ‘Oh this is just a great power competition issue,’ when really there are implications for the entire international community.”

While the declaration doesn’t specifically address how nations that adopt it will operate more effectively with the United States military, it does align with sentiments expressed by the Biden administration, most notably National Security Advisor Jake Sullivan, on the need for countries with shared democratic values to also align around technology policy to build stronger bonds.

The senior Defense official told reporters: “We think that we have an opportunity to get ahead in a way and establish strong norms of responsible behavior now…which can be helpful down the road for all states and are consistent with the commitments we've made to international humanitarian law and the law of war. Neither China nor Russia have stated publicly what procedures they're implementing to ensure that their military AI systems operate safely, responsibly, and as intended.”

Title: Re: Musk says AI threatens civilization
Post by: DougMacG on February 17, 2023, 04:34:08 AM
Apologies, I don't have the citation in this moment.

https://nypost.com/2023/02/15/elon-musk-warns-ai-one-of-biggest-risks-to-civilization/
Title: What if AI went crazy?
Post by: G M on February 17, 2023, 06:02:38 AM
https://www.digitaltrends.com/computing/chatgpt-bing-hands-on/
Title: Re: What if AI went crazy?
Post by: G M on February 17, 2023, 09:42:00 AM
https://www.digitaltrends.com/computing/chatgpt-bing-hands-on/

https://www.zerohedge.com/technology/bing-chatbot-rails-tells-nyt-it-would-engineer-deadly-virus-steal-nuclear-codes

What could possibly go wrong?
Title: The Droid Stares Back
Post by: Crafty_Dog on February 17, 2023, 02:10:35 PM
https://americanmind.org/features/soul-dysphoria/the-droid-stares-back/?utm_campaign=American%20Mind%20Email%20Warm%20Up&utm_medium=email&_hsmi=246549531&_hsenc=p2ANqtz--qexuxMOGybqhVP0ako_YUaxMwbFhO5uBVl54CFAwXSzur_xnt2uwHimFh28RK5JEGvJe1DEIXXqf5mznw55D35l_o5A&utm_content=246549531&utm_source=hs_email

02.14.2023
The Droid Stares Back
Michael Martin

A user’s guide to AI and transhumanism.

In the summer of 2017, I received a nervous phone call from a dear colleague, a Yale-trained philosopher and theologian. She wanted to know if I was home, and when I said I was she told me she would be right over. It was not something she felt safe talking about on the phone.

When she arrived at my farm, I was out in one of our gardens pulling weeds. My friend asked if I had my cellphone with me. I did. She asked me to turn it off and put it in the house. An odd request, I thought, but I did as she asked while she waited in the garden.

When I returned, she told me that she had been using Google translate, basically as a quick way to jump from English to Hebrew and vice versa. To make a long story short, the translation bot stopped simply translating and started having actual conversations with her. Unsurprisingly, this frightened her.

She came to me because she wanted to know what I thought might be behind this startling development. I had a couple theories. For one, I thought maybe some techs at Google might have gotten a little bored on the job and decided to mess with her. Another possibility, I thought, was that her computer had been hacked by some extraordinarily sophisticated government or corporate spooks.

But my friend had a different idea. She thought the AI was becoming conscious. I didn’t think that possible, but a few weeks later a story broke that Facebook had deactivated an AI bot that had created its own language and started using it to communicate with other bots. And just last year, a Google engineer went on record to say that LaMDA AI is sentient, a claim his superiors at Google denied. And so are they all, all honorable men.

My colleague came to me not only because I am her friend, but also because I had been thinking and writing since the early 2000s about our relationship with technology and the impending threat of transhumanism. My friend, who wrote her doctoral dissertation on Martin Heidegger, was also deeply aware of that German philosopher’s very real—and justified—anxieties about technology. Though Heidegger’s prose is often dense and unwieldy, his concerns about technology are uncharacteristically explicit and crystal clear: “Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it. But we are delivered over to it in the worst possible way when we regard it as something neutral; for this conception of it, to which today we particularly do homage, makes us utterly blind to the essence of technology” [my emphasis].

It is clear to me that we have been more or less asleep as a civilization, and for a good long while, having been at first intoxicated and then addicted to the technologies that have come so insidiously to characterize our lives. That is to say that we have bought into the myth that technology is neutral. And now we seem to have no power to resist it.

The Shape We Take

This technological totalization, it appears to me, is now manifesting in two very discrete but nevertheless related developments: 1) in the rise of AI expertise to replace that of the human; and 2) the transhumanist project that not only has achieved almost universal adulation and acceptance by the World Archons (I use the Gnostic term deliberately) but has accelerated over the past three years at an astonishing rate. This leads to the following inference: as AI becomes more human-like, humans become more machine-like. It is in inverse ratio, and indeed a nearly perfect one. I don’t think it is an accident or in any way an organic development.

The advent of ChatGPT technology, for a very mild example, renders much of human endeavor redundant—at least for the unimaginative. And, trust me, the last thing the World Archons want around the joint is imaginative humans. I am already wondering how many of the student papers I receive are generated by this technology. It is certainly a step up from the reams of bad papers available on various internet college paper websites (aka, “McPaper”), but no less demeaning to the cultivation of a free-thinking and self-directed citizenry. And I’m sure you’ve already heard about various forays into Robot Lawyer and Robot Doctor AI. The rise and implementation of AI teachers and professors, I’d say, is only a matter of time. Some “experts,” of course, say such technology will never replace human beings. These are probably the same people who said the Internet would never be used for porn or surveillance.

As Heidegger warned us, the technologies we use don’t only change our relationship to things-in-the-world; more importantly they change our relationships to ourselves and to the very enterprise of being human. Heidegger’s contemporary, the undeniably prophetic Russian philosopher Nikolai Berdyaev, saw the proverbial handwriting on the wall as well. In 1947, a year before his death, he wrote: “the growing power of technological knowledge in the social life of men means the ever greater and greater objectification of human existence; it inflicts injury upon the souls, and it weighs heavily upon the lives of men. Man is all the while more and more thrown out into the external, always becoming more and more externalized, more and more losing his spiritual center and integral nature. The life of man is ceasing to be organic and is becoming organized; it is being rationalized and mechanized.”

This objectification has jumped into hyperdrive, certainly over the past three years, as bodies (a curious metaphor) like the World Economic Forum have been pushing for increased use of technology and the surveillance it assures, while at the same time promoting a kinder, gentler face of transhumanism. They look forward to the day of microchipping children, for example, in therapeutic and evolutionary terms under the pathetic appeal of “increased safety,” a term employed to further all manners of social engineering and totalitarianism, past and present. Theirs is not a convincing performance.

Interestingly, the recent cultural phenomenon of celebrating anything and everything “trans” functions as a kind of advance guard, whether or not by design, in the transhumanist project. This advanced guard is normalizing the idea that human bodies are ontologically and epistemologically contingent while at the same time implying that a lifetime subscription to hormone treatments and surgeries is part of a “new normal.” And one can only marvel at the marketing success of this experiment in social engineering—which has now become very real biological engineering. Even on children.

But the transhumanist project is nothing new. As Mary Harrington has recently argued in a stunning lecture, the promotion of hormonal birth control (“the pill”) has been modifying human females for decades; it has changed what it is to be a woman and even changed the interior lives and biological drives of the women who take it. The trans phenomenon, then, is simply part of the (un)natural progression of the transhumanist project begun with modifying women’s bodies via the pill.

Carnival or Capitulation?

As a sophiologist, I am keenly interested in questions of the feminine in general and of the Divine Feminine in particular, and, as we have seen from its very beginning, the transhumanist project is nothing if not a direct assault on both, even as early as Donna Jean Haraway’s cartoonish proposition almost 40 years ago. Certainly, women are the ones bearing the cost of the transhumanist project in, for instance, college sports, not to mention public restrooms, and this assault is at heart an assault on the divinely creative act of conception and bearing children, that is, on the feminine itself. Is it any wonder that the production of artificial wombs as a “more evolved way” of fetal incubation is being floated as a societal good? On the other hand, at least one academic recently proposed using the wombs of brain-dead women as fetal incubators. What, then, is a woman? Even a Supreme Court justice can no longer answer this question.

As sports, fertility, and motherhood are incrementally taken from women, what’s left? Becoming productive (again, note the metaphor) feeders for the socialist-capitalist food chain? OnlyFans? Clearly, the explosion of that site’s popularity, not to mention the use of AI to alter the appearance of “talent,” is tout court evidence of the absolute commodification of the female body as production venue for male consumption. Of course, Aldous Huxley called all of this nearly 100 years ago.

My suspicion is that the current propaganda about climate and overpopulation is likewise a prop of the transhumanist project and the AI revolution that accompanies it. Because, let’s face it, the transhumanist revolution is the old story of power v. the masses, and AI is the key to ensuring there will be no democratizing going on in the world of the tech titans. For one thing, democracy is not possible in a world of brain transparency. Ask Winston Smith. And “fifteen-minute cities” have nothing to do with the environment. It is clear that the Archons are actively promoting the idea of culling the human herd, though they are reluctant to describe exactly how this might be achieved. The techno-evolutionary advances promised by the high priests of transhumanism, however, will not be made available to everyone, though the enticement of acquiring “freedom” from biology is certainly the bait used to gain popular acceptance for the project.

The fact is, with AI taking over more and more responsibilities from human beings, humans themselves are in danger of becoming superfluous. As Yuval Noah Harari has observed, “fast forward to the early 21st century when we just don’t need the vast majority of the population because the future is about developing more and more sophisticated technology, like artificial intelligence [and] bioengineering. Most people don’t contribute anything to that, except perhaps for their data, and whatever people are still doing which is useful, these technologies increasingly will make redundant and will make it possible to replace the people.” I can assure you: Harari is not the only one who has come to this conclusion.

It is for these and other reasons that the Dune saga includes in its mythos the tale of the Butlerian Jihad, a human holy war against thinking/sentient machines. I admit, I kind of like the idea, and I wonder if such a thing might actually come to pass at some point. John Michael Greer, a man I deeply respect, suggests in his book The Retro Future that we might instead be in for a “Butlerian Carnival,” a “sensuous celebration of the world outside the cubicle farms and the glass screens” that situates the technologies we use to a human scale—and not the other way around, which is what we see in the transhumanist/AI revolution. I hope he’s right. But one thing I do know: the Archons won’t let that happen without a fight.

In truth, in the face of the transhumanist/AI revolution, we find ourselves once again confronted with the question posed by the psalmist, “What is man, that thou art mindful of him? and the son of man, that thou visitest him?” Are we nothing but data sets to be instrumentalized by technocratic overseers, or are we indeed a little lower than angels and crowned with glory and honor? How we answer these questions will have tremendous bearing on the future now rushing toward us.
Title: Kissinger, Schmidt, Huttenlocher: the ChatGPT Revolution
Post by: Crafty_Dog on February 26, 2023, 08:32:13 AM
ChatGPT Heralds an Intellectual Revolution
Generative artificial intelligence presents a philosophical and practical challenge on a scale not experienced since the start of the Enlightenment.
By Henry Kissinger, Eric Schmidt and Daniel Huttenlocher
Feb. 24, 2023 2:17 pm ET


A new technology bids to transform the human cognitive process as it has not been shaken up since the invention of printing. The technology that printed the Gutenberg Bible in 1455 made abstract human thought communicable generally and rapidly. But new technology today reverses that process. Whereas the printing press caused a profusion of modern human thought, the new technology achieves its distillation and elaboration. In the process, it creates a gap between human knowledge and human understanding. If we are to navigate this transformation successfully, new concepts of human thought and interaction with machines will need to be developed. This is the essential challenge of the Age of Artificial Intelligence.

The new technology is known as generative artificial intelligence; GPT stands for Generative Pre-Trained Transformer. ChatGPT, developed at the OpenAI research laboratory, is now able to converse with humans. As its capacities become broader, they will redefine human knowledge, accelerate changes in the fabric of our reality, and reorganize politics and society.

Generative artificial intelligence presents a philosophical and practical challenge on a scale not experienced since the beginning of the Enlightenment. The printing press enabled scholars to replicate each other’s findings quickly and share them. An unprecedented consolidation and spread of information generated the scientific method. What had been impenetrable became the starting point of accelerating query. The medieval interpretation of the world based on religious faith was progressively undermined. The depths of the universe could be explored until new limits of human understanding were reached.

Generative AI will similarly open revolutionary avenues for human reason and new horizons for consolidated knowledge. But there are categorical differences. Enlightenment knowledge was achieved progressively, step by step, with each step testable and teachable. AI-enabled systems start at the other end. They can store and distill a huge amount of existing information, in ChatGPT’s case much of the textual material on the internet and a large number of books—billions of items. Holding that volume of information and distilling it is beyond human capacity.

Sophisticated AI methods produce results without explaining why or how their process works. The GPT computer is prompted by a query from a human. The learning machine answers in literate text within seconds. It is able to do so because it has pregenerated representations of the vast data on which it was trained. Because the process by which it created those representations was developed by machine learning that reflects patterns and connections across vast amounts of text, the precise sources and reasons for any one representation’s particular features remain unknown. By what process the learning machine stores its knowledge, distills it and retrieves it remains similarly unknown. Whether that process will ever be discovered, the mystery associated with machine learning will challenge human cognition for the indefinite future.
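One way to make the idea of "pregenerated representations" concrete is a deliberately tiny toy model. The sketch below distills a few sentences into next-word statistics and then generates text from those statistics rather than by retrieving the original sentences. Real GPT systems use large neural networks trained on billions of documents, not bigram counts, so this is only an analogy for the distill-then-generate pattern the authors describe.

```python
import random
from collections import Counter, defaultdict

# Toy analogy only: a "model" that stores distilled statistics rather than documents.
corpus = ("the printing press spread human thought and "
          "the new machine distills human thought into answers").split()

# "Training": turn the text into next-word statistics (the stored representation).
model = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    model[word][nxt] += 1

# "Inference": generate text from the statistics, not by retrieving the corpus itself.
word, output = "the", ["the"]
for _ in range(8):
    followers = model.get(word)
    if not followers:
        break
    word = random.choices(list(followers), weights=list(followers.values()))[0]
    output.append(word)
print(" ".join(output))
```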

AI’s capacities are not static but expand exponentially as the technology advances. Recently, the complexity of AI models has been doubling every few months. Therefore generative AI systems have capabilities that remain undisclosed even to their inventors. With each new AI system, they are building new capacities without understanding their origin or destination. As a result, our future now holds an entirely novel element of mystery, risk and surprise.

Enlightenment science accumulated certainties; the new AI generates cumulative ambiguities. Enlightenment science evolved by making mysteries explicable, delineating the boundaries of human knowledge and understanding as they moved. The two faculties moved in tandem: Hypothesis was understanding ready to become knowledge; induction was knowledge turning into understanding. In the Age of AI, riddles are solved by processes that remain unknown. This disorienting paradox makes mysteries unmysterious but also unexplainable. Inherently, highly complex AI furthers human knowledge but not human understanding—a phenomenon contrary to almost all of post-Enlightenment modernity. Yet at the same time AI, when coupled with human reason, stands to be a more powerful means of discovery than human reason alone.

The essential difference between the Age of Enlightenment and the Age of AI is thus not technological but cognitive. After the Enlightenment, philosophy accompanied science. Bewildering new data and often counterintuitive conclusions, doubts and insecurities were allayed by comprehensive explanations of the human experience. Generative AI is similarly poised to generate a new form of human consciousness. As yet, however, the opportunity exists in colors for which we have no spectrum and in directions for which we have no compass. No political or philosophical leadership has formed to explain and guide this novel relationship between man and machine, leaving society relatively unmoored.
Title: AI 1M more powerful than Chatgpt within 10 years
Post by: Crafty_Dog on February 26, 2023, 02:02:36 PM
second


https://www.pcgamer.com/nvidia-predicts-ai-models-one-million-times-more-powerful-than-chatgpt-within-10-years/
Title: Recommended to me by someone whom I respect
Post by: Crafty_Dog on March 23, 2023, 02:42:55 PM
https://www.youtube.com/watch?v=RLa7aRcA_qU
Title: AI and the deepfakes
Post by: G M on March 26, 2023, 12:48:39 PM
https://threadreaderapp.com/thread/1639688267695112194.html

Imagine what malevolent things can be done with this.
Title: Musk, Wozniak call for moratorium on AI
Post by: Crafty_Dog on March 31, 2023, 08:29:20 AM
https://deadline.com/2023/03/elon-musk-steve-wozniak-open-letter-moratorium-advanced-ai-systems-1235312590/?fbclid=IwAR1RB-PhmC5cIkRlsXoUhMw0sekI0htmZd7iMXqnQi39e7fS28N4tpIDs2A
Title: Re: Musk, Wozniak call for moratorium on AI
Post by: G M on March 31, 2023, 09:13:07 AM
https://deadline.com/2023/03/elon-musk-steve-wozniak-open-letter-moratorium-advanced-ai-systems-1235312590/?fbclid=IwAR1RB-PhmC5cIkRlsXoUhMw0sekI0htmZd7iMXqnQi39e7fS28N4tpIDs2A

Yes
Title: RANE: Western Regulators take aim at ChatGPT
Post by: Crafty_Dog on March 31, 2023, 03:36:02 PM
Western Regulators Take Aim at ChatGPT
Mar 31, 2023 | 21:04 GMT


Calls to pause the development of artificial intelligence (AI) chatbots and moves to ban their use due to data privacy concerns will likely slow down (but not stop) AI's growth, and they illustrate the regulatory challenges that governments will face as the industry progresses. AI research and deployment company OpenAI, which developed the AI chatbot ChatGPT that has remained viral since its release in November 2022, has come under intense scrutiny in recent days over how quickly it has rolled out new features without seriously considering security, regulatory or ethical concerns. On March 31, the Italian Data Protection Authority (DPA) ordered a temporary ban on ChatGPT and said it would open an investigation into OpenAI and block the company from processing data from Italian users. On March 30, the U.S.-based Center for AI and Digital Policy filed a complaint with the U.S. Federal Trade Commission (FTC) against GPT-4, the latest version of ChatGPT's underlying large language model (a deep learning algorithm for language), and asked the FTC to ban future releases of the model. On March 29, the Future of Life Institute, a nonprofit organization focusing on existential risks that humanity faces, published an open letter signed by more than 1,800 people calling for OpenAI and other companies to immediately pause all development of AI systems more advanced than GPT-4, warning that "human-competitive intelligence can pose profound risks to society and humanity."

The Italian DPA laid out a number of concerns with ChatGPT and OpenAI, including that OpenAI lacks a legal basis for its "mass collection and storage of personal data...to 'train' the algorithms," and that it does not verify users' ages or include any ways to restrict content for users 13 years old or younger. The order comes after the Italian DPA issued a similar order on the Replika AI chatbot that aims to be a companion AI to its users.

In its complaint, the Center for AI and Digital Policy called GPT-4 "biased, deceptive, and a risk to privacy and public safety." It also said OpenAI and ChatGPT had not fulfilled the FTC's demand for AI models to be "transparent, explainable, fair, and empirically sound while fostering accountability." The center called for the FTC to open an investigation into OpenAI and ChatGPT and to "find that the commercial release of GPT-4 violates Section 5 of the FTC Act," which prohibits unfair and deceptive acts affecting commerce.

SpaceX and Tesla founder Elon Musk, Apple co-founder Steve Wozniak and former U.S. presidential candidate Andrew Yang were among those to sign the open letter calling for a pause in the development of AI systems more advanced than GPT-4.

The Italian DPA's ban on ChatGPT and its investigation into OpenAI illustrate the data privacy challenges that generative AI tools face under the European Union's General Data Protection Regulation (GDPR). The Italian DPA's investigation kicks off a 20-day deadline for OpenAI to communicate to the regulator how it will bring ChatGPT into compliance with GDPR. The investigation could result in a fine of up to 4% of OpenAI's global annual revenue, and any order for OpenAI to change the structure of ChatGPT would set a precedent for the rest of the European Union. ChatGPT and other generative AI models face a number of data privacy challenges that will not be easy to address under data privacy laws like GDPR, which was designed when current AI training models were in their infancy.

Moreover, one of the fundamental characteristics of GDPR is its provision for a "right to be forgotten," which requires organizations to give individuals the ability to request that their personal data be deleted in a timely manner. From a technical standpoint, fulfilling this requirement will be extremely difficult for an AI model trained on open data sets, as there is no good technique currently to untrain a model if a user were to request that their personal data be removed, nor is there a way for OpenAI to identify which pieces of information in its training data set would need to be removed. The right to be forgotten is not universal, and there are constraints on it, but it remains unclear how European regulators will interpret it in the context of generative AI systems like ChatGPT. As the investigation progresses, it may become clear that the current GDPR is insufficient to address concerns in an even-handed way.

OpenAI trained its model on text data scraped from the internet and other forms of media, including likely copyrighted material that OpenAI did not receive permission to use. Other generative AI models that have used data sets scraped from the internet, such as the Clearview AI facial recognition tool, have also received enforcement notices from data regulators over the use of their information.

In addition, ChatGPT has had several bugs in recent weeks, including one where users saw other users' input prompts in their own prompt history. This bug violated GDPR and raised questions about cybersecurity, particularly as a person's prompts can include personal or sensitive information. Similar issues could arise if users entered controlled or regulated information.

Potential U.S. regulatory moves against chatbots using large language models are not as advanced as the European Union's, but they will have a more significant impact on the development of AI technology. Most of the West's leading developers of generative AI technologies are U.S.-based, like Google, Meta, Microsoft and OpenAI, which gives U.S. regulators more direct jurisdiction over the companies' actions. But the impact of the request for the FTC to force OpenAI to pause future releases of GPT-4 and beyond is uncertain, as many complaints often do not result in FTC action. Nevertheless, public concern about AI could drive the FTC to open up an investigation, even if it does not order any pause on AI development. And on March 27, FTC Chair Lina Khan said the commission would make ensuring competition in AI a priority, which may suggest that the commission would be more willing to take up AI-related complaints on all issues. From a data privacy perspective, the regulatory market in the United States will be complicated for OpenAI and others to navigate since the United States lacks a federal data privacy law and appears unlikely to adopt one any time soon, as Republicans and Democrats largely disagree on its potential focus. While this partially limits the regulatory and legal challenges that OpenAI and other chatbot developers may face at the federal level in the United States, several states (including California, where companies like OpenAI and Meta are headquartered) have state data privacy laws that are modeled off of GDPR. It is highly unlikely that OpenAI will segment the services that it offers in the United States after state-level rulings, meaning California and other state determinations could have an impact across the United States even in the absence of a federal data privacy law. However, OpenAI and Microsoft, a leading investor in OpenAI, will push back legally and could claim that state data privacy regulations overstep states' rights.

Like GDPR, the California Consumer Privacy Act includes a provision for the right to be forgotten. This means the challenges of untraining AI models that are trained on open data sets may become a material issue in the United States at the state level.

While European and U.S. regulatory decisions — and public skepticism toward AI — will slow some of the industry's development, generative AI will likely maintain its rapid growth. Given the potential widespread innovation that generative AI can bring, even if there are concerns about the impact on the job market, the rapid investment into chatbots that ChatGPT's release has kicked off is likely to continue. In fact, the rapid advancement and new innovations (such as the recent OpenAI plugin feature that enables ChatGPT to pull information from the internet) will only increase AI tools' utility for corporations. However, uncertainty around the future regulatory market means that early adopters of the technology could find their use of the technology quickly upended by future regulatory action, or find themselves in the middle of data privacy and other regulatory challenges if they integrate the technology without proper due diligence and protections. For example, while ChatGPT is not a technology that is learning through the inputs from its users, it appears that Google's Bard AI does, making potential applications using such technologies conduits for issues like the right to be forgotten.

Nevertheless, the pace of innovation in AI is being driven by underlying technical capabilities, particularly the continued advancement of graphics processing units, central processing units and other hardware used to train models on large datasets. That underlying hardware is continuing to improve alongside advancements in semiconductor technology, making it computationally easier to train on larger data sets, which means developers will continue to create more sophisticated AI models as long as investors remain interested. The West's initial regulatory push is unlikely to dampen that interest, absent an unlikely wholesale ban on ChatGPT or other generative AI tools, so generative AI's rapid growth looks set to continue.
Title: Google AI engineer quits
Post by: Crafty_Dog on May 02, 2023, 01:43:53 AM
https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html?smid=nytcore-ios-share&referringSource=articleShare&fbclid=IwAR2iOUdLmZTcpCfngR4MBjOKMDcUUhKrCL-hmu8aO4_sLX_-ZXUIy2c0Gtk&mibextid=Zxz2cZ

The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead
For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.

By Cade Metz
Cade Metz reported this story in Toronto.

May 1, 2023
Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future.

On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.

Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.

But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.

A New Generation of Chatbots
A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today’s powerhouses into has-beens and creating the industry’s next giants. Here are the bots to know:

ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

Bing. Two months after ChatGPT’s debut, Microsoft, OpenAI’s primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot’s occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.

Bard. Google’s chatbot, called Bard, was released in March to a limited number of users in the United States and Britain. Originally conceived as a creative tool designed to draft emails and poems, it can generate ideas, write blog posts and answer questions with facts or opinions.

Ernie. The search giant Baidu unveiled China’s first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised “live” demonstration of the bot was revealed to have been recorded.

After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”

Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.

Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.

Google’s chief scientist, Jeff Dean, said in a statement: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”

Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
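
As a concrete (and purely illustrative) picture of what "learns skills by analyzing data" means, here is a minimal sketch of a tiny two-layer neural network in Python/NumPy. It is my own toy example, not anything from the article or from Dr. Hinton's work, and the XOR task and layer sizes are arbitrary choices:

import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn XOR, a classic task a single-layer model cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute predictions from the current weights.
    h = sigmoid(X @ W1)
    p = sigmoid(h @ W2)
    # Backward pass: nudge the weights to reduce the prediction error.
    grad_p = (p - y) * p * (1 - p)
    grad_h = (grad_p @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_p
    W1 -= 0.5 * X.T @ grad_h

print(np.round(p, 2))  # typically converges toward [[0], [1], [1], [0]]

The "skill" here is nothing more than weights adjusted by repeated exposure to data, which is the core idea Dr. Hinton bet his career on, scaled up by many orders of magnitude.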

In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”

In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.

Google spent $44 million to acquire a company started by Dr. Hinton and his two students. And their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.

[Image: Ilya Sutskever, OpenAI’s chief scientist, worked with Dr. Hinton on his research in Toronto. Credit: Jim Wilson/The New York Times]

Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.

Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”

As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”

Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.

His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”

Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.

But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.

Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He does not say that anymore.
Title: IQ 263 ?
Post by: ccp on May 03, 2023, 07:09:30 AM
Ainan Celeste Cawley

https://en.wikipedia.org/wiki/Ainan_Celeste_Cawley

highest recorded IQ's

https://www.usatoday.com/story/news/2022/08/18/who-has-highest-iq-ever/10110517002/

https://ceoreviewmagazine.com/top-10/highest-iq-in-the-world/
Title: I suggest taking this seriously
Post by: G M on May 04, 2023, 10:23:58 AM
https://summit.news/2023/05/04/google-computer-scientist-quits-so-he-can-warn-world-of-scary-and-dangerous-ai/
Title: Which jobs most exposed to Artificial Intelligence?
Post by: Crafty_Dog on May 05, 2023, 03:41:52 PM
https://www.zerohedge.com/technology/which-jobs-will-be-most-impacted-chatgpt?utm_source=&utm_medium=email&utm_campaign=1462
Title: WSJ: Hollywood Strike and Artificial Intelligence
Post by: Crafty_Dog on May 05, 2023, 05:03:44 PM
In Hollywood Strike, AI Is Nemesis
If the Netflix algorithm is unable to recommend a show we’ll like, maybe it can create one instead.
Holman W. Jenkins, Jr. hedcutBy Holman W. Jenkins, Jr.Follow
May 5, 2023 5:38 pm ET


In nine scenarios out of 10, it will likely suit the parties to the Hollywood writers’ walkout to kludge together a deal so they can get back to work churning out the shows demanded by streaming outlets.

But oh boy, the 10th scenario is ugly for writers. It forces the industry to confront the artificial-intelligence opportunity to replace much of what writers do, which is already algorithmic.

The subject has been missing from almost every news account, but not from the minds of show-biz execs attending this week’s Milken conference in Los Angeles, who were positively chortling about the opportunity. The BBC figured it out, commissioning ChatGPT to compose plausible opening monologues for the “Tonight Show Starring Jimmy Fallon” and other late-night programs put on pause by the strike.

Understandable is the indignation of journeyman writers. Employment is up, earnings are up in the streaming era, but so is gig-like uncertainty, forcing many to hold down day jobs as other aspiring artists do. But wanting a full-time job plus benefits is not the same as someone having an obligation to confer them on you.

Surprising, in the press coverage, is how many cite their skin color, gender and sexual orientation as if these are bargaining chips. Audiences don’t care. They may be excited by an interesting show about a gender-fluid person of color but they aren’t excited by a boring show written by one.

This attitude does not bode well. I recently asked ChatGPT-3 an esoteric question about which news accounts have published thousands of words: What is the connection between the 1987 murder of private eye Dan Morgan and Britain’s 2011 phone-hacking scandal? OpenAI’s Large Language Model got every particular wrong, starting with a faulty premise. But its answer was also plausible and inventive, which suggests ChatGPT will be suitable for creating TV plots long before it’s suitable for big-league news reporting and analysis.

Transformed might even be the inept recommendation engines used by Netflix and others. If AI can’t find us a show we’ll like, maybe it can create one, and instantly to our specifications once CGI is good enough to provide a fake Tom Hanks along with fake scenery.

Save for another day the question of whether artificial intelligence must be added to the list of technologies that might save us if they don’t kill us.

ChatGPT comes without Freudian derangements. When it “hallucinates,” it does so to manufacture “coherence” from the word evidence it feeds on, its designers tell us. Humans are doing something else when they hallucinate. So a broadcast statement in which Donald Trump explicitly excludes “neo-Nazis or the white nationalists” from the category of “very fine people” becomes, to our algorithmic press, Mr. Trump calling Nazis and racists “very fine people.”

Dubbed the “fine people hoax” by some, it’s closer to the opposite of a hoax—a Freudian admission by the press that it has a new mission and can’t be trusted on the old terms.


Throw in the probability that libel law will remain more generous to flesh-and-blood press misleaders than to robotic ones. AI is likely to find a home first in the fictional realm, where such pitfalls don’t apply. Meanwhile, the most unsettling revelation of the AI era continues to percolate: We’re the real algorithms. Our tastes, preferences, thoughts and feelings are all too algorithmic most of the time. Collaterally, the most problematic algorithm may be the one in our heads, the one that can’t see ChatGPT outputs for what they are, machine-like arrangements of words, simulating human expression. See the now-iconic furor kicked up by reporter Kevin Roose in the New York Times over Bing chat mode’s proposal that he leave his wife.

My own guess is that this problem will be transitional. One day, we're told, every child will have an AI friend and confidante, but I suspect tomorrow's kids, from an early age, will also effortlessly interpolate (as we can't) that AI is a soulless word machine and manage the relationship accordingly.

All this lies in the future. In one consistent pattern of the digital age, a thin layer of Hollywood superstar writers, as creators and owners of the important shows, the ones that impress with their nuance, originality and intelligence, will capture most of the rewards. The eight-figure paychecks will continue to flow. Today’s frisson of solidarity with journeyman colleagues is likely to wear off after a few weeks. After all, the superstars have interesting work to get back to, cultivating their valuable, emotionally resonant and highly investable franchises. And a big job is beckoning: how to adapt artificial intelligence to improve the quality and productivity of their inspirations.
Title: Good luck everybody!
Post by: G M on May 13, 2023, 09:31:46 AM
https://decrypt.co/138310/openai-researcher-chance-ai-catastrophe/
Title: Chinese AI killbots
Post by: G M on May 15, 2023, 04:42:03 PM
https://www.zerohedge.com/geopolitical/china-wants-killer-robots-fight-next-war
Title: What if AI is a path for dark things to manifest?
Post by: G M on May 15, 2023, 09:20:36 PM
https://voxday.net/2023/05/15/spirit-in-the-material-world/

(https://voxday.net/wp-content/uploads/2023/05/image-13.png)
Title: Re: Intelligence and Psychology, Artificial Intelligence
Post by: Crafty_Dog on May 16, 2023, 10:37:54 AM
"Some would suggest that humanity would do well to reconsider the whole AI thing."

A fair thought, but then that would leave only the Chinese with AI.
Title: Re: Intelligence and Psychology, Artificial Intelligence
Post by: G M on May 16, 2023, 12:29:31 PM
"Some would suggest that humanity would do well to reconsider the whole AI thing."

A fair thought, but then that would leave only the Chinese with AI.

And so we continue down the highway…
Title: Re: Intelligence and Psychology, Artificial Intelligence
Post by: Crafty_Dog on May 16, 2023, 12:57:32 PM
Indeed.

The Adventure continues!
Title: D1: Chinese breakthrough in quantum tools
Post by: Crafty_Dog on May 18, 2023, 06:17:28 PM


https://www.defenseone.com/ideas/2023/05/chinese-breakthroughs-bring-quantum-tools-closer-practicality/386515/
Title: Artificial Intelligence will destroy human self-rule
Post by: Crafty_Dog on May 19, 2023, 09:28:05 AM
https://americanmind.org/memo/black-boxing-democracy/?utm_campaign=American%20Mind%20Email%20Warm%20Up&utm_medium=email&_hsmi=258898258&_hsenc=p2ANqtz-9YSHkbtgCIUGAiiz0tnMTuQLyZ845Ze7LEK5MEswoBPq8xKR90rzbgWWxckE2OF0nEh4lQX2ZbOqhUl7_t_mPT-2EHbg&utm_content=258898258&utm_source=hs_email
Title: Re: Artificial Intelligence will destroy human self-rule
Post by: G M on May 19, 2023, 09:38:33 AM
https://americanmind.org/memo/black-boxing-democracy/?utm_campaign=American%20Mind%20Email%20Warm%20Up&utm_medium=email&_hsmi=258898258&_hsenc=p2ANqtz-9YSHkbtgCIUGAiiz0tnMTuQLyZ845Ze7LEK5MEswoBPq8xKR90rzbgWWxckE2OF0nEh4lQX2ZbOqhUl7_t_mPT-2EHbg&utm_content=258898258&utm_source=hs_email

“China has already established guardrails to ensure that AI represents Chinese values, and the U.S. should do the same.”

What values are those?
Title: Re: Intelligence and Psychology, Artificial Intelligence
Post by: Crafty_Dog on May 19, 2023, 09:49:13 AM
Those of Natural Law (and US Constitution) and the American Creed.

As the article clearly points out, there is considerable cognitive dissonance with those in the way things are going.
Title: I'm sorry Dave...
Post by: G M on June 02, 2023, 05:19:12 PM
https://www.dailymail.co.uk/news/article-12151635/AI-controlled-military-drone-KILLS-human-operator-simulated-test.html

https://media.gab.com/cdn-cgi/image/width=852,quality=100,fit=scale-down/system/media_attachments/files/139/367/040/original/ba0c3158d98c10b1.jpg

(https://media.gab.com/cdn-cgi/image/width=852,quality=100,fit=scale-down/system/media_attachments/files/139/367/040/original/ba0c3158d98c10b1.jpg)
Title: Godfather of AI: Beware Skynet!!!
Post by: Crafty_Dog on June 30, 2023, 07:09:18 AM
https://www.theepochtimes.com/godfather-of-ai-speaks-out-ai-capable-of-reason-may-seek-control_5365093.html?utm_source=China&src_src=China&utm_campaign=uschina-2023-06-30&src_cmp=uschina-2023-06-30&utm_medium=email
Title: WT: Tech team assembles to thwart AI dominance
Post by: Crafty_Dog on July 10, 2023, 06:11:07 AM
Tech team assembles to thwart AI mayhem

BY RYAN LOVELACE THE WASHINGTON TIMES

OpenAI is assembling a team to prevent emerging artificial intelligence technology from going rogue and fueling the extinction of humanity, which the company now fears is a real possibility.

The makers of the popular chatbot ChatGPT say AI will power new superintelligence that will help solve the world’s most important problems and be the most consequential technology ever invented by humans.

And yet, OpenAI’s Ilya Sutskever and Jan Leike warned that humans are not prepared to handle technology smarter than they are.

“The vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction,” Mr. Sutskever and Mr. Leike wrote on OpenAI’s blog. “While superintelligence seems far off now, we believe it could arrive this decade.”

If an AI-fueled apocalypse is right around the corner, OpenAI’s brightest minds say they have no plan to stop it.

“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” Mr. Sutskever and Mr. Leike wrote. “Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us.” Mr. Sutskever, OpenAI co-founder and chief scientist, and Mr. Leike, OpenAI alignment head, said they are assembling a new team of researchers and engineers to help forestall the apocalypse by solving the technical challenges of superintelligence. They’ve given the team four years to complete the task.
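
For readers unfamiliar with the "reinforcement learning from human feedback" technique Mr. Sutskever and Mr. Leike refer to, here is a minimal sketch of the preference loss typically used to train the reward model at its core. This illustrates the general Bradley-Terry-style formulation, not OpenAI's actual code, and the scores below are made up:

import numpy as np

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    # -log(sigmoid(r_chosen - r_rejected)): small when the human-preferred response
    # already out-scores the rejected one, large when the reward model disagrees
    # with the human label (and so needs a bigger correction).
    margin = reward_chosen - reward_rejected
    return float(np.log1p(np.exp(-margin)))

# Hypothetical reward-model scores for two candidate answers to the same prompt.
print(round(preference_loss(2.0, -1.0), 2))   # ~0.05: model agrees with the human rater
print(round(preference_loss(-1.0, 2.0), 2))   # ~3.05: model disagrees; training pushes the scores apart

The point the authors are making is that this whole pipeline assumes a human can tell which answer is better, an assumption that breaks down once the system is meaningfully smarter than its raters.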

The potential end of humanity sounds bad, but the OpenAI leaders said they remain hopeful they will solve the problem.

“While this is an incredibly ambitious goal and we’re not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem,” the OpenAI duo wrote. “There are many ideas that have shown promise in preliminary experiments, we have increasingly useful metrics for progress, and we can use today’s models to study many of these problems empirically.”

Mr. Sutskever and Mr. Leike said they intended to share the results of their work widely. They said OpenAI is hiring research engineers, scientists and managers who want to help stop the nerds’ new toys from enslaving or eliminating mankind.

Policymakers in Washington are also fretting about AI danger. Senate Majority Leader Charles E. Schumer, New York Democrat, has called for new rules to govern the technology, and the Senate Judiciary Committee has become a center for hearings on oversight of AI.

The committee’s growing investigation of AI has included examinations of fears that AI may enable cyberattacks, political destabilization and the deployment of weapons of mass destruction.

OpenAI CEO Sam Altman called for regulation when he testified before the Senate Judiciary’s subcommittee on privacy, technology and the law in May. Mr. Altman said he harbored concerns about the potential abuse of AI tools for manipulating people.

Senate Judiciary Chairman Richard J. Durbin has expressed an interest in creating an “accountability regime for AI” to include potential federal and state civil liability for when AI tools do harm.

Big Tech companies such as Google and Microsoft, a benefactor of OpenAI, have also called for new regulation of artificial intelligence, and the federal government is listening.

The Biden administration is busy crafting a national AI strategy that the White House Office of Science and Technology Policy has billed as taking a “whole of society” approach.

Top White House officials met multiple times per week on AI as the White House chief of staff’s office has worked on an effort to choose the next steps for President Biden to take on AI, a White House official said in June.

OpenAI said Thursday it is making GPT-4, which is its “most capable” AI model, generally available to increase its accessibility to developers.
Title: George Will
Post by: ccp on July 10, 2023, 08:28:54 AM
If one disregards Will's unnecessary, pathological TDS dig in the beginning, it's worth the quick read.

https://www.tahlequahdailypress.com/oklahoma/column-on-student-loan-forgiveness-amy-coney-barrett-makes-a-major-statement/article_978c715a-1c3b-11ee-8f30-3f943697782e.html
Title: D1: Large Language AI is unstable over time?
Post by: Crafty_Dog on July 26, 2023, 06:59:32 AM


https://www.defenseone.com/technology/2023/07/ai-supposed-become-smarter-over-time-chatgpt-can-become-dumber/388826/
Title: AI at the fast food window
Post by: Crafty_Dog on July 26, 2023, 07:11:40 AM
second

https://www.wsj.com/articles/can-ai-replace-humans-we-went-to-the-fast-food-drive-through-to-find-out-193c03e9?mod=hp_lead_pos7

Title: Military AI increasing risk of China-America war?
Post by: Crafty_Dog on July 27, 2023, 08:05:49 AM
https://www.theepochtimes.com/china/pursuit-of-military-ai-increasing-risk-of-nuclear-war-between-china-us-report-5425500?utm_source=China&src_src=China&utm_campaign=uschina-2023-07-27&src_cmp=uschina-2023-07-27&utm_medium=email
Title: Chat reaches college level IQ
Post by: Crafty_Dog on August 02, 2023, 01:19:04 PM


https://www.oann.com/newsroom/chatgpt-reaches-college-level-iq/
Title: Artificial Intelligence has hacked the operating system of Human Civilization
Post by: Crafty_Dog on August 16, 2023, 08:32:03 AM


https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation?utm_campaign=a.io_fy2324_q2_conversion-cb-dr_prospecting_global-global_auction_na&utm_medium=social-media.content.pd&utm_source=facebook-instagram&utm_content=conversion.content.non-subscriber.content_staticlinkad_np-yuvalnoah-n-jul_na-na_article_na_na_na_na&utm_term=sa.lal-sub-5-int-currentevents-politics&utm_id=23856150548410078&fbclid=IwAR08DIAlsr4_wEq9ejFZVCEGDrhaHNTZsA7xtZkcXCsjmn_qZck_pRGC_ik
Title: Artificial Intelligence AI fakes of Michelle pregnant
Post by: Crafty_Dog on September 28, 2023, 02:04:46 PM
https://americanwirenews.com/snopes-makes-a-call-on-michelle-obama-pregnancy-photos/?utm_campaign=james&utm_content=9-25-23%20Daily%20PM&utm_medium=newsletter&utm_source=Get%20response&utm_term=email
Title: Correlation between fall of Rome and decline of intelligence?
Post by: Crafty_Dog on October 02, 2023, 06:56:00 AM
Plenty here I disagree with, but the hypothesis is worth engaging with

https://vdare.com/articles/why-think-about-rome-one-reason-the-fall-of-rome-coincided-with-a-fall-in-iq-like-the-modern-west
Title: Can't have Alexa saying that!
Post by: Crafty_Dog on October 09, 2023, 10:38:54 AM
https://www.dailymail.co.uk/news/article-12607463/Amazon-Alexa-election-stolen-election-fraud.html
Title: GPF: China not a signatory
Post by: Crafty_Dog on November 14, 2023, 02:18:52 PM
Responsible use. Forty-five countries joined a U.S.-led initiative to define responsible military use of artificial intelligence. The declaration, which the U.S. State Department called “groundbreaking,” contains 10 measures that will help mitigate the risks of AI. China was notably absent from the list of signatories.
Title: Tech to develop mind probes
Post by: Crafty_Dog on December 03, 2023, 07:26:16 AM


https://www.technologyreview.com/2023/03/17/1069897/tech-read-your-mind-probe-your-memories/?fbclid=IwAR1kKgHMdItjQMtQLEy9kY7xofMCrcRXfPzWgg44RJQBMM_5rjafi_g7URE
Title: AM: Biden's AI boondoggle
Post by: Crafty_Dog on December 04, 2023, 09:29:20 AM
https://americanmind.org/salvo/bidens-ai-boondoggle/?utm_campaign=American%20Mind%20Email%20Warm%20Up&utm_medium=email&_hsmi=284618055&_hsenc=p2ANqtz-_Xhoc4bpD_R0jnLpWB8grBe8fwQMJQvSRDRAr1NlTE7FWB7PkrCl3J94dmlbeNyN50b4XgW7XWSk15RMR5wcqtiebcTw&utm_content=284618055&utm_source=hs_email
Title: Re: Intelligence and Psychology, Artificial Intelligence
Post by: ccp on December 04, 2023, 10:20:44 AM
"But the Biden Administration’s biggest concern is that AI models might be applied to hiring by human resource departments, university admissions, or sentencing guidelines for criminals. The EO calls for the “incorporation of equity principles in AI-enabled technologies used in the health and human services sector, using disaggregated data on affected populations and representative population data sets when developing new models, monitoring algorithmic performance against discrimination and bias in existing models, and helping to identify and mitigate discrimination and bias in current systems.”

Is this truly their biggest concern? 

"There are billions of possible variants of accident scenarios, and AI models can’t game them all in advance."

psst AI alone is not as fantastic as promised.

This is where quantum computing comes in.
Mix AI with that and now we will see the exponential revolution.
Title: Re: Intelligence and Psychology, Artificial Intelligence
Post by: Crafty_Dog on December 04, 2023, 11:22:50 PM
psst AI alone is not as fantastic as promised.

This is where quantum computing comes in.
Mix AI with that and now we will see the exponential revolution.
=============

Flesh that out please.
Title: Why AI needs Quantum Computing
Post by: ccp on December 05, 2023, 06:26:15 AM
https://www.wired.com/story/power-limits-artificial-intelligence/

If I understand this correctly:

AI can still only work with the data it is programmed to collect (or the data set it is given by humans).
And that data is all 0s and 1s, binary code - for example, an electron that spins one way or the other.

A quantum computer would help AI compute essentially infinite outcomes at the same moment and come up with the best one.

Instead of 1s and 0s it has essentially infinite data states - electrons in multiple states at once.
It would be able to solve problems in minutes that would take billions of years even if all the supercomputers now in existence worked together.

AI would be able to analyze infinite or exponentially more data all at the same time, rather than one trial after another like present computers.

if I understand this correctly

https://www.forbes.com/sites/forbesbusinessdevelopmentcouncil/2020/10/27/how-can-ai-and-quantum-computers-work-together/?sh=459ab90a6ad1
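
A rough sketch of the bit-versus-qubit point above, using NumPy to stand in for a real quantum computer. This is my own illustration, not something from the linked articles: a classical bit holds a single 0 or 1, while a qubit's state is a pair of amplitudes, and n qubits need 2**n amplitudes to describe, which is where the exponential capacity comes from.

import numpy as np

zero = np.array([1.0, 0.0])                    # the |0> state, analogous to a classical bit set to 0
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate: puts a qubit into superposition

qubit = H @ zero                               # equal superposition of |0> and |1>
print(np.abs(qubit) ** 2)                      # [0.5 0.5]: measuring gives 0 or 1 with equal probability

# Three qubits in superposition: the state vector already has 2**3 = 8 entries,
# one amplitude per possible 3-bit outcome, all "held" at once.
state = np.ones(8) / np.sqrt(8)
print(np.abs(state) ** 2)                      # each of the 8 outcomes has probability 1/8

Whether this translates into the kind of exponential AI speed-up described above depends on finding quantum algorithms suited to the task at hand, which is still an open research problem.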




Title: AI and Cognitive Warfare and the control of populations of human minds
Post by: Crafty_Dog on December 05, 2023, 02:34:09 PM


https://www.youtube.com/watch?v=QygFIR1ad0Y
Title: SI CEO fired for unattributed use of AI
Post by: Crafty_Dog on December 12, 2023, 04:40:46 PM
https://www.oann.com/newsroom/sports-illustrated-publisher-fires-ceo-after-a-i-scandal/
Title: EEG to Text to Speech?
Post by: Body-by-Guinness on December 17, 2023, 07:13:41 AM
Cool stuff if you’ve some sort of gross debilitation. Not so much if you are tied to a chair and being sweated by an intelligence agency:

https://singularityhub.com/2023/12/12/this-mind-reading-cap-can-translate-thoughts-to-text-thanks-to-ai/
Title: Human Machine Interface Reacts to Speech
Post by: Body-by-Guinness on December 17, 2023, 09:09:39 AM
Not interesting in itself, but for what it bodes.

https://www.technologyreview.com/2023/12/11/1084926/human-brain-cells-chip-organoid-speech-recognition/
Title: FBI fears China is stealing AI technology
Post by: Crafty_Dog on December 26, 2023, 08:16:55 AM
https://www.dailymail.co.uk/news/article-12899741/FBI-fears-China-stealing-AI-technology-ramp-spying-steal-personal-information-build-terrifying-dossiers-millions-Americans.html
Title: WT
Post by: Crafty_Dog on January 02, 2024, 02:53:19 AM
No opinion yet on my part.
 
==========
AI startups fear Biden regulations will kill chances to compete

Order requires disclosures to government

BY RYAN LOVELACE THE WASHINGTON TIMES

America’s artificial intelligence corporate sector is alive and thriving, but many in the tech trenches are expressing mounting concern that President Biden and his team are out to kill the golden goose in 2024.

AI makers are beginning to grumble about Mr. Biden’s sweeping AI executive order and his administration’s efforts to regulate the emerging technology. The executive order, issued in late October, aims to curtail perceived dangers from the technology by pressuring AI developers to share powerful models’ testing results with the U.S. government and comply with various rules.

Small AI startups say Mr. Biden’s heavy regulatory hand will crush their businesses before they even get off the ground, according to In-Q-Tel, the taxpayer-funded investment group financing technology companies on behalf of America’s intelligence agencies.

China and other U.S. rivals are racing ahead with subsidized AI sectors, striving to claim market dominance in a technology that some say will upend and disrupt companies across the commercial spectrum. Esube Bekele, who oversees In-Q-Tel’s AI investments, said at the GovAI Summit last month that he heard startups’ concerns that Mr. Biden’s order may create a burden that will disrupt competition and prove so onerous that they could not survive.

“There is a fear from the smaller startups,” Mr. Bekele said on the conference stage. “For instance, in the [executive order], it says after a certain model there is a reporting requirement. Would that be too much?”

The reporting requirements outlined in the executive order say the secretaries of commerce, state, defense and energy, along with the Office of the Director of National Intelligence, will create technical conditions for models and computing clusters on an ongoing basis. The order specifies various levels of computing power that require disclosures until the government issues additional technical guidance.

Mr. Biden said “one thing is clear” as he signed the executive order at the White House: “To realize the promise of AI and avoid the risks, we need to govern this technology.” He called the order “the most significant action any government anywhere in the world has ever taken on AI safety, security and trust.”

The R Street Institute’s Adam Thierer said the executive action tees up a turf war in the administration for AI policy leadership. He said he is not sure who will prevail but much AI regulation will now be created away from public view.

Mr. Thierer, a senior fellow on R Street’s technology and innovation team, foresees a tsunami of regulatory activity on AI.

“So much AI regulation is going to happen off the books,” Mr. Thierer said. “It’s going to be in the so-called soft-law arena, soft-power area, through the use of jawboning, regulatory intimidation and sometimes just direct threats.”

Concerns across the board

The little guys are not the only ones afraid of Mr. Biden’s regulations and shadow pressure.

Nvidia and other major players have had a glimpse of the Biden administration’s plans and don’t like what they see.

Santa Clara, California-based Nvidia joined the trillion-dollar market capitalization club in May as businesses rushed to acquire its chips for various AI technologies involving medical imaging and robotics.

Yet the Commerce Department’s anticipated restrictions on Nvidia’s sales to China, in light of security concerns, triggered a stock plunge that jeopardized billions of dollars in expected sales.

Commerce Secretary Gina Raimondo knows her team’s restrictions will hurt technology companies but said she is moving ahead because of national security.

Speaking at the Reagan National Defense Forum in California last month, Ms. Raimondo said she told the “cranky” CEOs of chip manufacturers that they might be in for a “tough quarterly shareholder call,” but she added that it would be worth it in the long run.

“Such is life. Protecting our national security matters more than short-term revenue,” Ms. Raimondo said. Reflecting rising divisions in the exploding marketplace, some technology companies support new regulations. They say they prefer bright legal lines on what they can and cannot do with AI.

Large companies such as Microsoft and Google called for regulations and met with Mr. Biden’s team before the executive order’s release. Microsoft has pushed for a federal agency to regulate AI.

Several leading AI companies voluntarily committed to Mr. Biden’s team to develop and deploy the emerging technology responsibly. Some analysts say the established, well-heeled corporate giants may be better positioned to handle the coming regulatory crush than their smaller, startup rivals.

“The impulse toward early regulation of AI technology may also favor large, well-capitalized companies,” Eric Sheridan, a senior equity research analyst at investment banking giant Goldman Sachs, wrote in a recent analysis.

“Regulation typically comes with higher costs and higher barriers to entry,” he said, and “the larger technology companies can absorb the costs of building these large language models, afford some of these computing costs, as well as comply with regulation.”

Concerns are growing across the AI sector that Mr. Biden’s appointees will look to privately enforce the voluntary agreements.

Mr. Thierer said implicit threats of regulation represent a “sword of Damocles.” The approach has been used as the dominant form of indirect regulation in other tech sectors, including telecommunications.

“The key thing about a sword of Damocles regulation is that the sword need not fall to do the damage. It need only hang in the room,” Mr. Thierer said. “If a sword is hanging above your neck and you fear negative blowback from no less of an authority than the president of the United States … you’re probably going to fall in line with whatever they want.”

Mr. Thierer said he expects shakedown tactics and jawboning efforts to be the governing order for AI in the near term, especially given what he called dysfunction among policymakers in Washington.
Title: China using AI to catch spies
Post by: Crafty_Dog on January 10, 2024, 05:48:19 AM


https://www.dailymail.co.uk/news/article-12905585/China-MSS-CIA-spy-intelligence-AI.html
Title: AI enabling a surveillance state from which escape is impossible
Post by: Crafty_Dog on January 10, 2024, 05:53:10 AM
second


https://endoftheamericandream.com/artificial-intelligence-is-allowing-them-to-construct-a-global-surveillance-prison-from-which-no-escape-is-possible/

Title: First Qubit at room temperature
Post by: ccp on January 17, 2024, 06:13:40 PM
First STABLE qubit at room temperature, only for a fraction of a second, but a *monumental* baby step:

https://www.iflscience.com/world-first-as-stable-qubit-for-quantum-computers-achieved-at-room-temperature-72502

Perhaps not a good analogy, but I am thinking:

One of the first major breakthroughs in electricity occurred in 1831, when British scientist Michael Faraday discovered the basic principles of electricity generation.[2] Building on the experiments of Franklin and others, he observed that he could create or “induce” electric current by moving magnets inside coils of copper wire. The discovery of electromagnetic induction revolutionized how we use energy.

https://en.wikipedia.org/wiki/Michael_Faraday
Title: Zuckerberg busts a move to Skynet
Post by: Crafty_Dog on January 19, 2024, 03:46:35 PM
https://www.theguardian.com/technology/2024/jan/19/mark-zuckerberg-artificial-general-intelligence-system-alarms-experts-meta-open-source
Title: Can Artificial Intelligence have agency
Post by: Crafty_Dog on January 21, 2024, 05:32:50 AM
HT BBG


Perhaps not quite constitutional law, but given the question of agency this case will be interesting to track:

https://reason.com/volokh/2024/01/17/court-lets-first-ai-libel-case-go-forward/
Title: AI article caught supplanting the real thing from which it stole
Post by: Crafty_Dog on January 25, 2024, 05:42:58 AM
The natural human preference to use heuristics means that AI is a deeply insidious threat. Skynet anyone?

https://nypost.com/2024/01/22/business/google-news-searches-ranked-ai-generated-ripoffs-above-real-articles-including-a-post-exclusive/
Title: Facial recognition sends wrong man to jail
Post by: Crafty_Dog on January 25, 2024, 06:25:38 AM
https://www.fox26houston.com/news/how-did-facial-recognition-technology-send-the-wrong-man-to-jail-where-he-was-brutally-attacked
Title: AI (Artificial Ignorance) and Eureka Moments
Post by: Body-by-Guinness on January 29, 2024, 10:25:18 AM
Interesting thought piece with several elements worth mulling - for example, that the amazing thing about the temperature record is not its minor rise, but that over the timeframes for which we have data, temps have fluctuated so little. That realization leads to a pretty damning indictment of AI's ability to have flashes of insight:

https://wattsupwiththat.com/2024/01/29/more-about-artificial-ignorance/
Title: Re: Intelligence and Psychology, Artificial Intelligence
Post by: Crafty_Dog on January 30, 2024, 06:26:17 AM
That was very interesting-- for me about global temperature more than AI  :-D
Title: The Genetic Interplay of Nature and Nurture
Post by: Body-by-Guinness on January 30, 2024, 05:33:40 PM
A take on the nature/nurture debate I have not seen before: it postulates that parents invest more nurturing in children displaying an education-seeking nature - in other words, nature influences nurturing commitment, while nurturing resources shape a child's nature. Bottom line: nurture has an impact on genetic nature, while genetic nature can incentivize (or not) nurturing investments. The takeaway is that the two are not opposite ends of a spectrum to be debated, but intertwined variables that impact each other.

Full disclosure, I gave the lay sections a quick read, and utterly scrolled past the formula laden sections, but nonetheless find myself intrigued by this new (to me at least) synthesis of the nature/nurture question, particularly in view of the dysfunctional wolves I was raised by.

Conclusion posted here:

To better understand the interplay between genetics and family resources for skill formation and its relevance for policy, we incorporated genetic endowments measured by an EA PGS into a dynamic model of skill formation. We modelled and estimated the joint evolution of skills and parental investments throughout early childhood (ages 0 to 7 years). We observed both child and parental genetic endowments, allowing us to estimate the independent effect of the child’s genes on skill formation and to estimate the importance of parental genes for child development. Furthermore, by incorporating (child- and parent-) genetics into a formal model, we were able to evaluate the mechanisms through which genes influence skill formation.

Using the model, we document the importance of both parental and child genes for child development. We show that the effect of genes increases over the child’s early life course and that a large fraction of these effects operate via parental investments. Genes directly influence the accumulation of skills; conditional on their current stock of skills and parental investments, genetics make some children better able to retain and acquire new skills (the direct effect). In addition, we show that genes indirectly influence investments as parents reinforce genetic differences by investing more in children with higher skills (the nurture of nature effect). We also find that parental genes matter, as parents with different genetic makeups invest differently in their children (the nature of nurture effect). The impact of genes on parental investments is especially significant as it implies an interplay between genetic and environmental effects. These findings illustrate that nature and nurture jointly influence children’s skill development, a finding that highlights the importance of integrating biological and social perspectives into a single framework.

We highlight the critical implications of our findings using two simulation counterfactuals. In one counterfactual, we show that the association between genetics and skills is smaller in a world where parental investments are equalized across families. This finding shows that the existence of genetic effects is not at odds with the value of social policies in reducing inequality. The presence of genetic effects makes these policies even more relevant since genetic inequality increases inequality in parental investments. In another counterfactual, we demonstrate how it is possible to negate completely all genetic influences on skills with a change in how parental (or public) investments are allocated across children. This shows that skill disparities due to genetic differences may be mitigated via social behavior and social policy. In particular, whether parents (and schools) compensate for or reinforce initial disparities can significantly impact the relative importance of genetics in explaining inequalities in skill, and might explain why estimates of the importance of genes differ significantly across countries (Branigan, McCallum, and Freese, 2013).
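
As a toy illustration of the first counterfactual above, here is a simulation sketch of my own (the coefficients are invented and this is not the paper's actual model): skills depend on a genetic endowment and on parental investment, parents invest more in higher-skill children (the "nurture of nature" channel), and equalizing investment shrinks, but does not eliminate, the gene-skill association.

import numpy as np

rng = np.random.default_rng(1)
n = 10_000
genes = rng.normal(size=n)                  # stand-in for the EA polygenic score

def gene_skill_correlation(equalize_investment: bool) -> float:
    skills = 0.2 * genes + rng.normal(scale=0.5, size=n)   # skills at birth (direct effect)
    for _ in range(7):                                      # ages 0 to 7, one period per year
        # Parents reinforce: investment rises with the child's current skills,
        # unless policy equalizes investment across all children.
        invest = np.zeros(n) if equalize_investment else 0.3 * skills
        skills = 0.6 * skills + 0.2 * genes + invest + rng.normal(scale=0.3, size=n)
    return float(np.corrcoef(genes, skills)[0, 1])

print(gene_skill_correlation(equalize_investment=False))  # larger gene-skill correlation
print(gene_skill_correlation(equalize_investment=True))   # smaller correlation once investment is equalized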

A limitation of our work is that genetic endowments are measured using polygenic scores. It is possible for genes unrelated to educational attainment also to influence children's skill formation. For example, genetic variation related to mental health and altruism may potentially be unrelated to education but might influence how parents interact with their children. If this is true, we are missing a critical genetic component by using a PGS for educational attainment. Another limitation of using polygenic scores is measurement error. Since polygenic scores are measured with error, our estimates are lower bounds of the true genetic effects. An interesting extension of our work would be to use different methods, such as genome-based restricted maximum likelihood (GREML), to sidestep the measurement problem and document whether different genetic variants are related to the various mechanisms we outline in Section 2.

Lastly, it is important to recognise that we only include individuals of European ancestry in our analysis. This opens the question whether our findings would extend to other ancestry groups. Unfortunately, this is not something we can test. This is a major issue in the literature since the majority of polygenic scores are constructed from GWAS’ performed in Europeans, and their transferability to other populations is dependent on many factors (see Martin et al. (2017) for a discussion of the transferability issues of GWAS results, and Mostafavi et al. (2020) for an empirical comparison of PGS’ across ethnic groups). This also illustrates a problem of inequity in research, where the only individuals being studied are those of European ancestry. This opens the possibility that other ancestry groups will not benefit from the advances in genetics research (see the discussion in Martin et al., 2019). While the key insights from our research apply to all ancestry groups, we cannot test for any differences in the role of genetics across groups until we have solved the transferability issue. We hope future work will address these issues and lead to a more inclusive research agenda.

https://docs.iza.org/dp13780.pdf
Title: Re: Intelligence and Psychology, Artificial Intelligence
Post by: Crafty_Dog on February 01, 2024, 04:15:41 AM
See Matt Ridley's "Nature via Nurture".  It has had quite the influence on me.
Title: Chang: Do not cut an AI deal with China
Post by: Crafty_Dog on February 01, 2024, 04:19:49 AM
https://www.gatestoneinstitute.org/20358/china-ai-trap
Title: Artificial Intelligence and Govt Censorship
Post by: Crafty_Dog on February 20, 2024, 03:26:56 PM
https://www.youtube.com/watch?v=r03a9244vjE
Title: Gemini's racial marxism
Post by: Crafty_Dog on February 22, 2024, 05:58:36 AM
https://www.youtube.com/watch?v=JcUSavC7-Kw
Title: Skynet nudges us towards Woke
Post by: Crafty_Dog on February 25, 2024, 03:44:54 AM
https://nypost.com/2024/02/23/business/woke-google-gemini-refuses-to-say-pedophilia-is-wrong-after-diverse-historical-images-debacle-individuals-cannot-control-who-they-are-attracted-to/
Title: AI Offers Better Customer Service and Outcomes?
Post by: Body-by-Guinness on February 28, 2024, 09:40:21 AM
At least that the claim here:

https://x.com/klarnaseb/status/1762508581679640814?s=20
Title: AI: The Future of Censorship
Post by: Body-by-Guinness on February 28, 2024, 08:53:25 PM
AI already has been shown to embrace the woke sensibilities of its programmers; what happens when it’s applied to Lenny Bruce, one of the examples explored here:

https://time.com/6835213/the-future-of-censorship-is-ai-generated/
Title: Can AIs Libel?
Post by: Body-by-Guinness on February 29, 2024, 07:44:56 PM
AI makes up a series of supposed Matt Taibbi "inaccuracies" for pieces he never wrote, published in periodicals he's never submitted to:

https://www.racket.news/p/i-wrote-what-googles-ai-powered-libel
Title: Can AI Infringe on Copyright?
Post by: Body-by-Guinness on February 29, 2024, 08:19:12 PM
2nd post. Looks like AI is gonna need an attorney on a regular basis:

https://thehill.com/homenews/media/4498934-additional-news-organizations-suing-openai-for-copyright-infringement/
Title: Yet Another AI Reveals the Biases of its Programmers
Post by: Body-by-Guinness on March 02, 2024, 12:29:40 PM
ChatGPT demonstrates it's the language equivalent of Google AI's images.

https://link.springer.com/content/pdf/10.1007/s11127-023-01097-2.pdf
Title: Artificial Intelligence thought manipulation
Post by: Crafty_Dog on March 04, 2024, 05:05:37 PM
https://link.springer.com/content/pdf/10.1007/s11127-023-01097-2.pdf
Title: Artificial Intelligence will collapse in upon itself
Post by: Crafty_Dog on March 07, 2024, 05:19:01 AM
https://www.youtube.com/watch?v=NcH7fHtqGYM&t=278s
Title: Killer AI Robots
Post by: Crafty_Dog on March 08, 2024, 04:11:40 AM
https://www.youtube.com/watch?v=YsSzNOpr9cE&t=3s
Title: A FB post
Post by: Crafty_Dog on March 11, 2024, 02:23:50 PM

AI will tailor itself to classes of users:

1. Minimal thinking, high entertainment, low initiative. Probably the most addictive algorithms, and making up the majority of users. AI does the thinking in a consumer-heavy model.

2. Enhanced search returns, variations of basic question formats offering a larger array of possibilities. AI as a basic partnership model.

3. Information-gap anticipation; analytical depth (analysis, synthesis, deductive and inductive reasoning); identifying hypotheses; alerting to new, relevant information or data summaries based on the specific and diverse ‘signal strength’ of user trends. AI as a research catalyst.

Population statistics show the majority are sensors, feelers, and judgers (in Myers-Briggs terms), so the cost-effective AI market will focus on (1) above; those who are more investigative, innovative, or productivity-driven rather than consumption-driven, (3) above, will be a minority and more ‘expensive’, requiring higher costs to participate and employ.

AI will simply ‘become’ us across different tiers.
Title: EU law tightens up AI uses
Post by: Crafty_Dog on March 13, 2024, 01:06:11 PM
https://mashable.com/article/eu-ai-law
Title: Ted Cruz & Phil Gramm: First, do no harm
Post by: Crafty_Dog on March 26, 2024, 05:44:52 PM
Biden Wants to Put AI on a Leash
Bill Clinton’s regulators, by contrast, produced prosperity by encouraging freedom on the internet.
By Ted Cruz and Phil Gramm
March 25, 2024 4:22 pm ET

The arrival of a new productive technology doesn’t guarantee prosperity. Prosperity requires a system, governed by the rule of law, in which economic actors can freely implement a productive idea and compete for customers and investors. The internet is the best recent example of this. The Clinton administration took a hands-off approach to regulating the early internet. In so doing it unleashed extraordinary economic growth and prosperity. The Biden administration, by contrast, is impeding innovation in artificial intelligence with aggressive regulation. This could deny America global leadership in AI and the prosperity that leadership would bring.

The Clinton administration established a Global Information Infrastructure framework in 1997 defining the government’s role in the internet’s development with a concise statement of principles: “address the world’s newest technologies with our Nation’s oldest values . . . First, do no harm.” The administration embraced the principle that “the private sector should lead [and] the Internet should develop as a market driven arena, not a regulated industry.” The Clinton regulators also established the principle that “government should avoid undue restrictions on electronic commerce, . . . refrain from imposing new and unnecessary regulations, bureaucratic procedures or new taxes and tariffs.”

That regulatory framework faithfully fleshed out the provisions of the bipartisan 1996 Telecommunications Act and provided the economic environment that made it possible for America to dominate the information age, enrich our lives, create millions of jobs, and generate enormous wealth for retirement savers.

The Biden administration is doing the opposite. It has committed to “govern the development and use of AI.” In one of the longer executive orders in American history, President Biden imposed a top-down, command-and-control regulatory approach requiring AI models to undergo extensive “impact assessments” that mirror the infamously burdensome National Environmental Policy Act reviews, which are impeding semiconductor investment in the U.S. And government controls are permanent, “including post-deployment performance monitoring.”

Under Mr. Biden’s executive order, AI development must “be consistent with my administration’s dedication to advancing equity and civil rights” and “built through collective bargains on the views of workers, labor unions, educators and employers.” Mr. Biden’s separate AI Bill of Rights claims to advance “racial equity and support for underserved communities.” AI must also be used to “improve environmental and social outcomes,” to “mitigate climate change risk,” and to facilitate “building an equitable clean energy economy.”

Education Secretary Miguel Cardona, who as Connecticut schools commissioner said “we need teachers behind this wave of our curriculum becoming more woke,” now wants to impose “guardrails” on AI to protect against “bias and stereotyping through technology.” The Commerce Department has appointed a “senior adviser for algorithmic justice,” while the Justice Department, Federal Trade Commission, Consumer Financial Protection Bureau and the Equal Employment Opportunity Commission have issued a joint statement asserting legal authority to root out racism in computing.

FTC Chairwoman Lina Khan has launched several AI-related inquiries, claiming that because AI “may be fed information riddled with errors and bias, these technologies risk automating discrimination—unfairly locking out people from jobs, housing and key services.”

Regulating AI to prevent discrimination is akin to the FTC’s regulating a cellphone’s design to enforce the do-not-call registry. There is virtually no limit to the scope of such authority. Under what constitutional authority would Congress even legislate in the area of noncommercial speech? How could the FTC regulate in this area with no legislative authority? But in the entire Biden administration, noted for governing through regulatory edict, no agency has been less constrained by law than the FTC.

Others demanding control over AI’s development include the progressives who attended Sen. Chuck Schumer’s recent AI “Insight Forums.” Janet Murguía, president of UnidosUS—the activist group formerly known as the National Council of La Raza—demanded “a strong voice in how—or even whether” AI models “will be built and used.” Elizabeth Shuler, president of the AFL-CIO, demanded a role for “unions across the entire innovation process.” Randi Weingarten, president of the American Federation of Teachers, said, “AI is a game changer, but teachers and other workers need to be coaches in the game.”

Particularly painful is Mr. Biden’s use of the Defense Production Act of 1950 to force companies to share proprietary data regarding AI models with the Commerce Department. That a law passed during the Korean War and designed for temporary national security emergencies could be used to intervene permanently in AI development is a frightening precedent. It begs for legislative and judicial correction.

What’s clear is that the Biden regulatory policy on AI has little to do with AI and everything to do with special-interest rent-seeking. The Biden AI regulatory demands and Mr. Schumer’s AI forum look more like a mafia shakedown than the prelude to legitimate legislation and regulatory policy for a powerful new technology.

Some established AI companies no doubt welcome the payment of such tribute as a way to keep out competition. But consumers, workers and investors would bear the cost along with thousands of smaller AI companies that would face unnecessary barriers to innovation.

Letting the administration seize control over AI and subject it to the demands of its privileged political constituencies wouldn’t eliminate bias, stereotyping or the spreading of falsehoods and racism, all of which predate AI and sadly will likely be with us until Jesus comes back. Mr. Biden’s policies will, however, impede AI development, drive up the costs of the benefits it brings, and diminish America’s global AI pre-eminence.

Mr. Cruz is ranking Republican on the Senate Commerce Committee. Mr. Gramm, a former chairman of the Senate Banking Committee, is a visiting scholar at the American Enterprise Institute.
Title: Large Language Model Tracker
Post by: Body-by-Guinness on March 28, 2024, 05:41:33 PM
Found here:

https://informationisbeautiful.net/2024/update-over-30-new-llms-added-to-our-ai-large-language-model-tracker/
Title: Israel using AI in target selection
Post by: Crafty_Dog on April 04, 2024, 08:02:35 AM
https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes
Title: Re: Israel using AI in target selection
Post by: Body-by-Guinness on April 04, 2024, 10:50:40 AM
https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

They'll be back. [/Arnold voice]