Fire Hydrant of Freedom
Politics, Religion, Science, Culture and Humanities => Science, Culture, & Humanities => Topic started by: Crafty_Dog on July 26, 2009, 04:43:46 AM
-
Scientists worry machines may outsmart man
By JOHN MARKOFF
Published: July 25, 2009
A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously.
Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.
Their concern is that further advances could create profound social disruptions and even have dangerous consequences.
As examples, the scientists pointed to a number of technologies as diverse as experimental medical systems that interact with patients to simulate empathy, and computer worms and viruses that defy extermination and could thus be said to have reached a “cockroach” stage of machine intelligence.
While the computer scientists agreed that we are a long way from Hal, the computer that took over the spaceship in “2001: A Space Odyssey,” they said there was legitimate concern that technological progress would transform the work force by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviors.
The researchers — leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California — generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet. But they agreed that robots that can kill autonomously are either already here or will be soon.
They focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed. What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smart phones?
The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home. Just last month, a service robot developed by Willow Garage in Silicon Valley proved it could navigate the real world.
A report from the conference, which took place in private on Feb. 25, is to be issued later this year. Some attendees discussed the meeting for the first time with other scientists this month and in interviews.
The conference was organized by the Association for the Advancement of Artificial Intelligence, and in choosing Asilomar for the discussions, the group purposefully evoked a landmark event in the history of science. In 1975, the world’s leading biologists also met at Asilomar to discuss the new ability to reshape life by swapping genetic material among organisms. Concerned about possible biohazards and ethical questions, scientists had halted certain experiments. The conference led to guidelines for recombinant DNA research, enabling experimentation to continue.
The meeting on the future of artificial intelligence was organized by Eric Horvitz, a Microsoft researcher who is now president of the association.
Dr. Horvitz said he believed computer scientists must respond to the notions of superintelligent machines and artificial intelligence systems run amok.
The idea of an “intelligence explosion” in which smart machines would design even more intelligent machines was proposed by the mathematician I. J. Good in 1965. Later, in lectures and science fiction novels, the computer scientist Vernor Vinge popularized the notion of a moment when humans will create smarter-than-human machines, causing such rapid change that the “human era will be ended.” He called this shift the Singularity.
This vision, embraced in movies and literature, is seen as plausible and unnerving by some scientists like William Joy, co-founder of Sun Microsystems. Other technologists, notably Raymond Kurzweil, have extolled the coming of ultrasmart machines, saying they will offer huge advances in life extension and wealth creation.
“Something new has taken place in the past five to eight years,” Dr. Horvitz said. “Technologists are replacing religion, and their ideas are resonating in some ways with the same idea of the Rapture.”
The Kurzweil version of technological utopia has captured imaginations in Silicon Valley. This summer an organization called the Singularity University began offering courses to prepare a “cadre” to shape the advances and help society cope with the ramifications.
“My sense was that sooner or later we would have to make some sort of statement or assessment, given the rising voice of the technorati and people very concerned about the rise of intelligent machines,” Dr. Horvitz said.
The A.A.A.I. report will try to assess the possibility of “the loss of human control of computer-based intelligences.” It will also grapple, Dr. Horvitz said, with socioeconomic, legal and ethical issues, as well as probable changes in human-computer relationships. How would it be, for example, to relate to a machine that is as intelligent as your spouse?
Dr. Horvitz said the panel was looking for ways to guide research so that technology improved society rather than moved it toward a technological catastrophe. Some research might, for instance, be conducted in a high-security laboratory.
The meeting on artificial intelligence could be pivotal to the future of the field. Paul Berg, who was the organizer of the 1975 Asilomar meeting and received a Nobel Prize for chemistry in 1980, said it was important for scientific communities to engage the public before alarm and opposition become unshakable.
“If you wait too long and the sides become entrenched like with G.M.O.,” he said, referring to genetically modified foods, “then it is very difficult. It’s too complex, and people talk right past each other.”
Tom Mitchell, a professor of artificial intelligence and machine learning at Carnegie Mellon University, said the February meeting had changed his thinking. “I went in very optimistic about the future of A.I. and thinking that Bill Joy and Ray Kurzweil were far off in their predictions,” he said. But, he added, “The meeting made me want to be more outspoken about these issues and in particular be outspoken about the vast amounts of data collected about our personal lives.”
Despite his concerns, Dr. Horvitz said he was hopeful that artificial intelligence research would benefit humans, and perhaps even compensate for human failings. He recently demonstrated a voice-based system that he designed to ask patients about their symptoms and to respond with empathy. When a mother said her child was having diarrhea, the face on the screen said, “Oh no, sorry to hear that.”
A physician told him afterward that it was wonderful that the system responded to human emotion. “That’s a great idea,” Dr. Horvitz said he was told. “I have no time for that.”
-
What Makes A Genius?
By Andrea Kuszewski
Created Aug 20 2009 - 2:26am
What is the difference between "intelligence" and "genius"? Creativity, of course!
There was an article recently in Scientific American that discussed creativity and the signs in children that were precursors to creative achievement in adulthood. The authors cite some work done by Michigan State University researchers Robert and Michele Root-Bernstein, a physiologist and a theater instructor, respectively, who presented their findings at an annual meeting of the APA this past March. Since I research creativity as well as intelligence, I found the points brought up in the article quite intriguing, yet not surprising.
One of the best observations stated in the article regarding achievement was this:
"... most highly creative people are polymaths- they enjoy and excel at a range of challenging activities. For instance, in a survey of scientists at all levels of achievement, the [researchers] found that only about one sixth report engaging in a secondary activity of an artistic or creative nature, such as painting or writing non-scientific prose. In contrast, nearly all Nobel Prize winners in science have at least one other creative activity that they pursue seriously. Creative breadth, the [researchers] argue, is an important but understudied component of genius."
Everyone is fascinated by famous geniuses like Albert Einstein. They speculate as to what made him so unique and brilliant, but no one has been able to identify exactly what "it" is. If you mention "intelligence research", the average person assumes you are speaking of that top 1 or 2%, the IQs over 145, the little kids you see on TV passing out during Spelling Bees, because they are freaking out from the pressure of having to spell antidisestablishmentarianism on a stage before hundreds of on-lookers.
But the reality is that most intelligence researchers don't focus on the top 1 or 2%; they look at the general population, whose average score is 100, and generally focus their attention on the lower to middle portion of the distribution.
There may be a multitude of reasons why most researchers focus their study on the lower end of the distribution; one I can see is that the correlations between individual abilities measured on IQ tests and the actual overall ability level of the person taking the test are strongest in that portion of the distribution - those IQ scores of 110 and below.
The point I just made I have made before (which you will recognize if you read any of my pieces on intelligence), so nothing new there. However, what I found especially promising about the work done by the Root-Bernsteins is that instead of merely trying to analyze IQ scores, they actually looked at the attributes of successful, intelligent, creative people and figured out what they had going for them that other highly intelligent people did not - essentially, what the difference was between "intelligent" and "genius".
(the paper abstracts from the symposium describing their methods can be read here)
Now, some hard-core statistician-types may balk at their methods, screaming, "Case studies are not valid measures of intelligence!" and to a certain degree, they have a point. Yes, they initially looked at case studies of successful individuals, but then they surveyed scientists across multiple fields and found that the highest achievers in their domain (as indicated by earning the Nobel Prize) were skilled in multiple domains, at least one of these considered to be "creative", such as music, art, or non-scientific writing.
We would probably consider most scientists to be intelligent. But are they all geniuses? Do geniuses have the highest IQ scores? Richard Feynman is undeniably considered to be a genius. While his IQ score was *only* around 120-125, he was also an artist and a gifted communicator. Was he less intelligent than someone with an IQ score of 150?
What we are doing here is challenging the very definition of "intelligence". What is it really? An IQ score? Computational ability? Being able to talk your way out of a speeding ticket? Knowing how to handle crisis effectively? Arguing a convincing case before a jury? Well, maybe all of the above.
Many moons ago, Dr Robert Sternberg, now the Dean of Arts and Sciences at Tufts University in Boston, brought this very argument to the psychology community. And, to be honest, it was not exactly welcomed with open arms. He believed that intelligence comprises three facets, only one of which is measured on a typical IQ test, including the SAT and the GRE. That first facet is analytical ability. The second component is creativity, and the third is practical ability, or being able to use your analytical skills and your creativity to effectively solve novel problems. He called this the Triarchic Theory of Intelligence.
Fast-forwarding to the present, Dr Rex Jung, from the Mind Institute and the University of New Mexico in Albuquerque, published a paper earlier this year showing biochemical support for the Threshold Theory of Creativity (a necessary but not sufficient level of intelligence is needed for successful creative achievement). In a nutshell, he found that intelligence (as most people measure it today) is not enough to set a person apart and raise them to the level of genius. Creativity is that essential component that not all intelligent people possess, but that geniuses require. Not all creative people are geniuses (thus the Threshold Theory), but in order to reach genius status, creativity is a necessary attribute.
Someone could have an IQ of 170, yet get lost inside of a paper bag, and not have the ability to hold a conversation with anyone other than a dog. That is not my definition of genius. We want to know what made geniuses like Einstein and Feynman so far ahead of their intelligent scientist peers, and the answer to that is creativity.
I am hoping that as more studies come out stating the importance of multi-disciplinary thinking and collaboration across domains for reaching the highest levels of achievement, the science community will eventually fully embrace creativity research and see its validity in the study of successful intelligence. As a society, we already recognize the importance of creativity in innovation and in the arts, so let's take it a step further.
Give creativity the "street cred" it deserves as the defining feature that separates mere intelligence from utter genius.
Source URL: http://www.scientificblogging.com/rogue_neuron/what_makes_genius
-
http://www.youtube.com/watch?v=efcIsve5wu8&feature=related
-
Crows are amazing
[youtube]http://www.youtube.com/watch?v=NhmZBMuZ6vE[/youtube]
-
I liked that Freki.
-
I did too. It reminded me of backpacking in an isolated part of the southwest and having curious ravens surveilling me. They'd circle and study. They'd land behind trees and then stealthily hop on the ground to get a closer look. There was a definite sense of some sentient thought from them, and I'm not one for sentimental anthropomorphism.
-
Konrad Lorenz wrote quite often of "jackdaws". This was translated from German. Does anyone know if this is another word for crows? or?
-
Definition of JACKDAW
1 : a common black and gray bird (Corvus monedula) of Eurasia and northern Africa that is related to but smaller than the carrion crow
2 : grackle 1
-
http://www.oldjoeblack.0nyx.com/thinktst.htm
-
Very interesting article, but I found myself less disturbed by the Terminator-esque prediction of the future and a little more concerned by the fact that the doctor was completely fine with the A.I. projecting "human" emotion so he WOULDN'T have to... anyone else find a flaw in that?!
-
http://www.stltoday.com/news/national/article_59308dcd-3092-5280-92fb-898f569504e4.html
Ousted CIA agent runs his own private operation
With U.S. funding cut, he relies on donations to fund his 'operatives' in Pakistan and Afghanistan.
-
BD:
That is a different kind of intelligence :lol: May I ask you to please post that on the "Intel Matters" thread on the P&R forum?
Thank you,
-
Whether it "matters" or not, it appears mine was lacking! Sorry about that, Guro!
-
No worries BD; in this context the term "intelligence" was ambiguous. :-)
-
Computer beats best humans at Jeopardy
http://wattsupwiththat.com/2011/02/10/worth-watching-watson/
===========
On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I've Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists — they included a comedian and a former Miss America — had to guess what it was.
On the show (you can find the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.
Kurzweil then demonstrated the computer, which he built himself—a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil's age than by anything he'd actually done. They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she'd been President Lyndon Johnson's first-grade teacher.
But Kurzweil would spend much of the rest of his career working out what his demonstration meant. Creating a work of art is one of those activities we reserve for humans and humans only. It's an act of self-expression; you're not supposed to be able to do it if you don't have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.
That was Kurzweil's real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we're approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity — our bodies, our minds, our civilization — will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.
Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they're getting faster is increasing.
True? True.
So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.
If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there's no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks to play Farmville.
Probably. It's impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you'd be as smart as they would be. But there are a lot of theories about it. Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we'll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.
The difficult thing to keep sight of when you're talking about the Singularity is that even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation.
People are spending a lot of money trying to understand it. The three-year-old Singularity University, which offers inter-disciplinary courses of study for graduate students and executives, is hosted by NASA. Google was a founding sponsor; its CEO and co-founder Larry Page spoke there last year. People are attracted to the Singularity for the shock value, like an intellectual freak show, but they stay because there's more to it than they expected. And of course, in the event that it turns out to be real, it will be the most important thing to happen to human beings since the invention of language.
The Singularity isn't a wholly new idea, just newish. In 1965 the British mathematician I.J. Good described something he called an "intelligence explosion":
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
The word singularity is borrowed from astrophysics: it refers to a point in space-time — for example, inside a black hole — at which the rules of ordinary physics do not apply. In the 1980s the science-fiction novelist Vernor Vinge attached it to Good's intelligence-explosion scenario. At a NASA symposium in 1993, Vinge announced that "within 30 years, we will have the technological means to create super-human intelligence. Shortly after, the human era will be ended."
By that time Kurzweil was thinking about the Singularity too. He'd been busy since his appearance on I've Got a Secret. He'd made several fortunes as an engineer and inventor; he founded and then sold his first software company while he was still at MIT. He went on to build the first print-to-speech reading machine for the blind — Stevie Wonder was customer No. 1—and made innovations in a range of technical fields, including music synthesizers and speech recognition. He holds 39 patents and 19 honorary doctorates. In 1999 President Bill Clinton awarded him the National Medal of Technology.
But Kurzweil was also pursuing a parallel career as a futurist: he has been publishing his thoughts about the future of human and machine-kind for 20 years, most recently in The Singularity Is Near, which was a best seller when it came out in 2005. A documentary by the same name, starring Kurzweil, Tony Robbins and Alan Dershowitz, among others, was released in January. (Kurzweil is actually the subject of two current documentaries. The other one, less authorized but more informative, is called Transcendent Man.) Bill Gates has called him "the best person I know at predicting the future of artificial intelligence."
In real life, the transcendent man is an unimposing figure who could pass for Woody Allen's even nerdier younger brother. Kurzweil grew up in Queens, N.Y., and you can still hear a trace of it in his voice. Now 62, he speaks with the soft, almost hypnotic calm of someone who gives 60 public lectures a year. As the Singularity's most visible champion, he has heard all the questions and faced down the incredulity many, many times before. He's good-natured about it. His manner is almost apologetic: I wish I could bring you less exciting news of the future, but I've looked at the numbers, and this is what they say, so what else can I tell you?
Kurzweil's interest in humanity's cyborganic destiny began about 1980 largely as a practical matter. He needed ways to measure and track the pace of technological progress. Even great inventions can fail if they arrive before their time, and he wanted to make sure that when he released his, the timing was right. "Even at that time, technology was moving quickly enough that the world was going to be different by the time you finished a project," he says. "So it's like skeet shooting—you can't shoot at the target." He knew about Moore's law, of course, which states that the number of transistors you can put on a microchip doubles about every two years. It's a surprisingly reliable rule of thumb. Kurzweil tried plotting a slightly different curve: the change over time in the amount of computing power, measured in MIPS (millions of instructions per second), that you can buy for $1,000.
As it turned out, Kurzweil's numbers looked a lot like Moore's. They doubled every couple of years. Drawn as graphs, they both made exponential curves, with their value increasing by multiples of two instead of by regular increments in a straight line. The curves held eerily steady, even when Kurzweil extended his curves backward through the decades of pretransistor computing technologies like relays and vacuum tubes, all the way back to 1900.
Kurzweil then ran the numbers on a whole bunch of other key technological indexes — the falling cost of manufacturing transistors, the rising clock speed of microprocessors, the plummeting price of dynamic RAM. He looked even further afield at trends in biotech and beyond—the falling cost of sequencing DNA and of wireless data service and the rising numbers of Internet hosts and nanotechnology patents. He kept finding the same thing: exponentially accelerating progress. "It's really amazing how smooth these trajectories are," he says. "Through thick and thin, war and peace, boom times and recessions." Kurzweil calls it the law of accelerating returns: technological progress happens exponentially, not linearly.
Then he extended the curves into the future, and the growth they predicted was so phenomenal, it created cognitive resistance in his mind. Exponential curves start slowly, then rocket skyward toward infinity. According to Kurzweil, we're not evolved to think in terms of exponential growth. "It's not intuitive. Our built-in predictors are linear. When we're trying to avoid an animal, we pick the linear prediction of where it's going to be in 20 seconds and what to do about it. That is actually hardwired in our brains."
Here's what the exponential curves told him. We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence. Kurzweil puts the date of the Singularity—never say he's not conservative—at 2045. In that year, he estimates, given the vast increases in computing power and the vast reductions in the cost of same, the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today.
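The arithmetic behind these curves is plain compound doubling. The sketch below is illustrative only, not Kurzweil's actual data or method: the fixed two-year doubling period, the normalized starting value, and the function name are assumptions made to show the kind of extrapolation being described.
[code]
# Illustrative sketch: project a price-performance figure forward assuming it
# doubles every `doubling_years` years, the way Moore's-law-style curves are
# extrapolated. The two-year doubling period is an assumed, simplified figure.

def extrapolate(value_now, years_ahead, doubling_years=2.0):
    """Return the projected value after `years_ahead` years of steady doubling."""
    return value_now * 2 ** (years_ahead / doubling_years)

# Computations per second per $1,000, normalized to 1.0 in the present year.
for horizon in (10, 20, 34):  # 34 years is roughly the article's gap to 2045
    print(f"{horizon} years out: about {extrapolate(1.0, horizon):,.0f}x today")
[/code]
Note that a fixed two-year doubling yields only about a 131,000-fold increase over 34 years; roughly 30 doublings (2^30, about one billion) are needed for a billion-fold jump, which is why the argument leans on doubling times that themselves keep shrinking, as described above.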
-
The Singularity isn't just an idea. It attracts people, and those people feel a bond with one another. Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians.
Not all of them are Kurzweilians, not by a long chalk. There's room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.
In addition to the Singularity University, which Kurzweil co-founded, there's also a Singularity Institute for Artificial Intelligence, based in San Francisco. It counts among its advisers Peter Thiel, a former CEO of PayPal and an early investor in Facebook. The institute holds an annual conference called the Singularity Summit. (Kurzweil co-founded that too.) Because of the highly interdisciplinary nature of Singularity theory, it attracts a diverse crowd. Artificial intelligence is the main event, but the sessions also cover the galloping progress of, among other fields, genetics and nanotechnology.
At the 2010 summit, which took place in August in San Francisco, there were not just computer scientists but also psychologists, neuroscientists, nanotechnologists, molecular biologists, a specialist in wearable computers, a professor of emergency medicine, an expert on cognition in gray parrots and the professional magician and debunker James "the Amazing" Randi. The atmosphere was a curious blend of Davos and UFO convention. Proponents of seasteading—the practice, so far mostly theoretical, of establishing politically autonomous floating communities in international waters—handed out pamphlets. An android chatted with visitors in one corner.
After artificial intelligence, the most talked-about topic at the 2010 summit was life extension. Biological boundaries that most people think of as permanent and inevitable Singularitarians see as merely intractable but solvable problems. Death is one of them. Old age is an illness like any other, and what do you do with illnesses? You cure them. Like a lot of Singularitarian ideas, it sounds funny at first, but the closer you get to it, the less funny it seems. It's not just wishful thinking; there's actual science going on here.
For example, it's well known that one cause of the physical degeneration associated with aging involves telomeres, which are segments of DNA found at the ends of chromosomes. Every time a cell divides, its telomeres get shorter, and once a cell runs out of telomeres, it can't reproduce anymore and dies. But there's an enzyme called telomerase that reverses this process; it's one of the reasons cancer cells live so long. So why not treat regular non-cancerous cells with telomerase? In November, researchers at Harvard Medical School announced in Nature that they had done just that. They administered telomerase to a group of mice suffering from age-related degeneration. The damage went away. The mice didn't just get better; they got younger.
Aubrey de Grey is one of the world's best-known life-extension researchers and a Singularity Summit veteran. A British biologist with a doctorate from Cambridge and a famously formidable beard, de Grey runs a foundation called SENS, or Strategies for Engineered Negligible Senescence. He views aging as a process of accumulating damage, which he has divided into seven categories, each of which he hopes to one day address using regenerative medicine. "People have begun to realize that the view of aging being something immutable—rather like the heat death of the universe—is simply ridiculous," he says. "It's just childish. The human body is a machine that has a bunch of functions, and it accumulates various types of damage as a side effect of the normal function of the machine. Therefore in principle that damage can be repaired periodically. This is why we have vintage cars. It's really just a matter of paying attention. The whole of medicine consists of messing about with what looks pretty inevitable until you figure out how to make it not inevitable."
Kurzweil takes life extension seriously too. His father, with whom he was very close, died of heart disease at 58. Kurzweil inherited his father's genetic predisposition; he also developed Type 2 diabetes when he was 35. Working with Terry Grossman, a doctor who specializes in longevity medicine, Kurzweil has published two books on his own approach to life extension, which involves taking up to 200 pills and supplements a day. He says his diabetes is essentially cured, and although he's 62 years old from a chronological perspective, he estimates that his biological age is about 20 years younger.
But his goal differs slightly from de Grey's. For Kurzweil, it's not so much about staying healthy as long as possible; it's about staying alive until the Singularity. It's an attempted handoff. Once hyper-intelligent artificial intelligences arise, armed with advanced nanotechnology, they'll really be able to wrestle with the vastly complex, systemic problems associated with aging in humans. Alternatively, by then we'll be able to transfer our minds to sturdier vessels such as computers and robots. He and many other Singularitarians take seriously the proposition that many people who are alive today will wind up being functionally immortal.
It's an idea that's radical and ancient at the same time. In "Sailing to Byzantium," W.B. Yeats describes mankind's fleshly predicament as a soul fastened to a dying animal. Why not unfasten it and fasten it to an immortal robot instead? But Kurzweil finds that life extension produces even more resistance in his audiences than his exponential growth curves. "There are people who can accept computers being more intelligent than people," he says. "But the idea of significant changes to human longevity—that seems to be particularly controversial. People invested a lot of personal effort into certain philosophies dealing with the issue of life and death. I mean, that's the major reason we have religion."
Of course, a lot of people think the Singularity is nonsense — a fantasy, wishful thinking, a Silicon Valley version of the Evangelical story of the Rapture, spun by a man who earns his living making outrageous claims and backing them up with pseudoscience. Most of the serious critics focus on the question of whether a computer can truly become intelligent.
The entire field of artificial intelligence, or AI, is devoted to this question. But AI doesn't currently produce the kind of intelligence we associate with humans or even with talking computers in movies—HAL or C3PO or Data. Actual AIs tend to be able to master only one highly specific domain, like interpreting search queries or playing chess. They operate within an extremely specific frame of reference. They don't make conversation at parties. They're intelligent, but only if you define intelligence in a vanishingly narrow way. The kind of intelligence Kurzweil is talking about, which is called strong AI or artificial general intelligence, doesn't exist yet.
Why not? Obviously we're still waiting on all that exponentially growing computing power to get here. But it's also possible that there are things going on in our brains that can't be duplicated electronically no matter how many MIPS you throw at them. The neurochemical architecture that generates the ephemeral chaos we know as human consciousness may just be too complex and analog to replicate in digital silicon. The biologist Dennis Bray was one of the few voices of dissent at last summer's Singularity Summit. "Although biological components act in ways that are comparable to those in electronic circuits," he argued, in a talk titled "What Cells Can Do That Robots Can't," "they are set apart by the huge number of different states they can adopt. Multiple biochemical processes create chemical modifications of protein molecules, further diversified by association with distinct structures at defined locations of a cell. The resulting combinatorial explosion of states endows living systems with an almost infinite capacity to store information regarding past and present conditions and a unique capacity to prepare for future events." That makes the ones and zeros that computers trade in look pretty crude.
Underlying the practical challenges are a host of philosophical ones. Suppose we did create a computer that talked and acted in a way that was indistinguishable from a human being—in other words, a computer that could pass the Turing test. (Very loosely speaking, such a computer would be able to pass as human in a blind test.) Would that mean that the computer was sentient, the way a human being is? Or would it just be an extremely sophisticated but essentially mechanical automaton without the mysterious spark of consciousness—a machine with no ghost in it? And how would we know?
Even if you grant that the Singularity is plausible, you're still staring at a thicket of unanswerable questions. If I can scan my consciousness into a computer, am I still me? What are the geopolitics and the socioeconomics of the Singularity? Who decides who gets to be immortal? Who draws the line between sentient and nonsentient? And as we approach immortality, omniscience and omnipotence, will our lives still have meaning? By beating death, will we have lost our essential humanity?
Kurzweil admits that there's a fundamental level of risk associated with the Singularity that's impossible to refine away, simply because we don't know what a highly advanced artificial intelligence, finding itself a newly created inhabitant of the planet Earth, would choose to do. It might not feel like competing with us for resources. One of the goals of the Singularity Institute is to make sure not just that artificial intelligence develops but also that the AI is friendly. You don't have to be a super-intelligent cyborg to understand that introducing a superior life-form into your own biosphere is a basic Darwinian error.
If the Singularity is coming, these questions are going to get answers whether we like it or not, and Kurzweil thinks that trying to put off the Singularity by banning technologies is not only impossible but also unethical and probably dangerous. "It would require a totalitarian system to implement such a ban," he says. "It wouldn't work. It would just drive these technologies underground, where the responsible scientists who we're counting on to create the defenses would not have easy access to the tools."
Kurzweil is an almost inhumanly patient and thorough debater. He relishes it. He's tireless in hunting down his critics so that he can respond to them, point by point, carefully and in detail.
Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled or at least matched in power and flexibility by software running on a computer. He refuses to fall on his knees before the mystery of the human brain. "Generally speaking," he says, "the core of a disagreement I'll have with a critic is, they'll say, Oh, Kurzweil is underestimating the complexity of reverse-engineering of the human brain or the complexity of biology. But I don't believe I'm underestimating the challenge. I think they're underestimating the power of exponential growth."
This position doesn't make Kurzweil an outlier, at least among Singularitarians. Plenty of people make more-extreme predictions. Since 2005 the neuroscientist Henry Markram has been running an ambitious initiative at the Brain Mind Institute of the Ecole Polytechnique Fédérale de Lausanne in Switzerland. It's called the Blue Brain project, and it's an attempt to create a neuron-by-neuron simulation of a mammalian brain, using IBM's Blue Gene super-computer. So far, Markram's team has managed to simulate one neocortical column from a rat's brain, which contains about 10,000 neurons. Markram has said that he hopes to have a complete virtual human brain up and running in 10 years. (Even Kurzweil sniffs at this. If it worked, he points out, you'd then have to educate the brain, and who knows how long that would take?)
By definition, the future beyond the Singularity is not knowable by our linear, chemical, animal brains, but Kurzweil is teeming with theories about it. He positively flogs himself to think bigger and bigger; you can see him kicking against the confines of his aging organic hardware. "When people look at the implications of ongoing exponential growth, it gets harder and harder to accept," he says. "So you get people who really accept, yes, things are progressing exponentially, but they fall off the horse at some point because the implications are too fantastic. I've tried to push myself to really look."
In Kurzweil's future, biotechnology and nanotechnology give us the power to manipulate our bodies and the world around us at will, at the molecular level. Progress hyperaccelerates, and every hour brings a century's worth of scientific breakthroughs. We ditch Darwin and take charge of our own evolution. The human genome becomes just so much code to be bug-tested and optimized and, if necessary, rewritten. Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all. Kurzweil hopes to bring his dead father back to life.
We can scan our consciousnesses into computers and enter a virtual existence or swap our bodies for immortal robots and light out for the edges of space as intergalactic godlings. Within a matter of centuries, human intelligence will have re-engineered and saturated all the matter in the universe. This is, Kurzweil believes, our destiny as a species.
Or it isn't. When the big questions get answered, a lot of the action will happen where no one can see it, deep inside the black silicon brains of the computers, which will either bloom bit by bit into conscious minds or just continue in ever more brilliant and powerful iterations of nonsentience.
But as for the minor questions, they're already being decided all around us and in plain sight. The more you read about the Singularity, the more you start to see it peeking out at you, coyly, from unexpected directions. Five years ago we didn't have 600 million humans carrying out their social lives over a single electronic network. Now we have Facebook. Five years ago you didn't see people double-checking what they were saying and where they were going, even as they were saying it and going there, using handheld network-enabled digital prosthetics. Now we have iPhones. Is it an unimaginable step to take the iPhones out of our hands and put them into our skulls?
Already 30,000 patients with Parkinson's disease have neural implants. Google is experimenting with computers that can drive cars. There are more than 2,000 robots fighting in Afghanistan alongside the human troops. This month a game show will once again figure in the history of artificial intelligence, but this time the computer will be the guest: an IBM super-computer nicknamed Watson will compete on Jeopardy! Watson runs on 90 servers and takes up an entire room, and in a practice match in January it finished ahead of two former champions, Ken Jennings and Brad Rutter. It got every question it answered right, but much more important, it didn't need help understanding the questions (or, strictly speaking, the answers), which were phrased in plain English. Watson isn't strong AI, but if strong AI happens, it will arrive gradually, bit by bit, and this will have been one of the bits.
A hundred years from now, Kurzweil and de Grey and the others could be the 22nd century's answer to the Founding Fathers — except unlike the Founding Fathers, they'll still be alive to get credit — or their ideas could look as hilariously retro and dated as Disney's Tomorrowland. Nothing gets old as fast as the future.
But even if they're dead wrong about the future, they're right about the present. They're taking the long view and looking at the big picture. You may reject every specific article of the Singularitarian charter, but you should admire Kurzweil for taking the future seriously. Singularitarianism is grounded in the idea that change is real and that humanity is in charge of its own fate and that history might not be as simple as one damn thing after another. Kurzweil likes to point out that your average cell phone is about a millionth the size of, a millionth the price of and a thousand times more powerful than the computer he had at MIT 40 years ago. Flip that forward 40 years and what does the world look like? If you really want to figure that out, you have to think very, very far outside the box. Or maybe you have to think further inside it than anyone ever has before.
-
http://www.nytimes.com/interactive/2011/02/20/magazine/mind-secrets.html?nl=todaysheadlines&emc=tha210
-
By STEPHEN BAKER
In the weeks since IBM's computer, Watson, thrashed two flesh-and-blood champions in the quiz show "Jeopardy!," human intelligence has been punching back—at least on blogs and opinion pages. Watson doesn't "know" anything, experts say. It doesn't laugh at jokes, cannot carry on a conversation, has no sense of self, and commits bloopers no human would consider. (Toronto, a U.S. city?) What's more, it's horribly inefficient, requiring a roomful of computers to match what we carry between our ears. And it probably would not have won without its inhuman speed on the buzzer.
This is all enough to make you feel reinvigorated to be human. But focusing on Watson's shortcomings misses the point. It risks distracting people from the transformation that Watson all but announced on its "Jeopardy!" debut: These question-answering machines will soon be working alongside us in offices and laboratories, and forcing us to make adjustments in what we learn and how we think. Watson is an early sighting of a highly disruptive force.
The key is to regard these computers not as human wannabes but rather as powerful tools, ones that can handle jobs currently held by people. The "intelligence" of the tools matters little. What counts is the information they deliver.
In our history of making tools, we have long adjusted to the disruptions they cause. Imagine an Italian town in the 17th century. Perhaps there's one man who has a special sense for the weather. Let's call him Luigi. Using his magnificent brain, he picks up on signals—changes in the wind, certain odors, perhaps the flight paths of birds or noises coming from the barn. And he spreads word through the town that rain will be coming in two days, or that a cold front might freeze the crops. Luigi is a valuable member of society.
Along comes a traveling vendor who carries a new instrument invented in 1643 by Evangelista Torricelli. It's a barometer, and it predicts the weather about as well as Luigi. It's certainly not as smart as him, if it can be called smart at all. It has no sense of self, is deaf to the animals in the barn, blind to the flight patterns of birds. Yet it comes up with valuable information.
In a world with barometers, Luigi and similar weather savants must find other work for their fabulous minds. Perhaps using the new tool, they can deepen their analysis of weather patterns, keep careful records and then draw conclusions about optimal farming techniques. They might become consultants. Maybe some of them drop out of the weather business altogether. The new tool creates both displacement and economic opportunity. It forces people to reconsider how they use their heads.
The same is true of Watson and the coming generation of question-answering machines. We can carry on interesting discussions about how "smart" they are or aren't, but that's academic. They make sense of complex questions in English and fetch answers, scoring each one for the machines' level of confidence in it. When asked if Watson can "think," David Ferrucci, IBM's chief scientist on the "Jeopardy!" team, responds: "Can a submarine swim?"
As these computers make their way into law offices, pharmaceutical labs and hospitals, people who currently make a living by answering questions must adjust. They'll have to add value in ways that machines cannot. This raises questions not just for individuals but for entire societies. How do we educate students for a labor market in which machines answer a growing percentage of the questions? How do we create curricula for uniquely human skills, such as generating original ideas, cracking jokes, carrying on meaningful dialogue? How can such lessons be scored and standardized?
These are the challenges before us. They're similar, in a sense, to what we've been facing with globalization. Again we will find ourselves grappling with a new colleague and competitor. This time around, it's a machine. We should scrutinize that tool, focusing on the questions it fails to answer. Its struggles represent a road map for our own cognitive migration. We must go where computers like Watson cannot.
Mr. Baker is the author of "Final Jeopardy—Man vs. Machine and the Quest to Know Everything" (Houghton Mifflin Harcourt, 2011).
-
http://www.nytimes.com/roomfordebate/2012/02/26/are-people-getting-dumber/?nl=todaysheadlines&emc=thab1
Look at who we have as president. Look at those who think if we'd just tax the rich more, all the economic badness would go away. Stupid is growing and California is ground zero for its spread.
-
http://theweek.com/article/index/226196/how-electrical-brain-stimulation-can-change-the-way-we-think
ESSAY
How electrical brain stimulation can change the way we think
After my brain was jolted, says Sally Adee, I had a near-spiritual experience
PUBLISHED MARCH 30, 2012, AT 10:01 AM
Researchers have found that "transcranial direct current stimulation" can more than double the rate at which people learn a wide range of tasks, such as object recognition, math skills, and marksmanship.
HAVE YOU EVER wanted to take a vacation from your own head? You could do it easily enough with liberal applications of alcohol or hallucinogens, but that's not the kind of vacation I'm talking about. What if you could take a very specific vacation only from the stuff that makes it painful to be you: the sneering inner monologue that insists you're not capable enough or smart enough or pretty enough, or whatever hideous narrative rides you. Now that would be a vacation. You'd still be you, but you'd be able to navigate the world without the emotional baggage that now drags on your every decision. Can you imagine what that would feel like?
Late last year, I got the chance to find out, in the course of investigating a story for New Scientist about how researchers are using neurofeedback and electrical brain stimulation to accelerate learning. What I found was that electricity might be the most powerful drug I've ever used in my life.
It used to be just plain old chemistry that had neuroscientists gnawing their fingernails about the ethics of brain enhancement. As Adderall, Ritalin, and other cognitive enhancing drugs gain widespread acceptance as tools to improve your everyday focus, even the stigma of obtaining them through less-than-legal channels appears to be disappearing. People will overlook a lot of moral gray areas in the quest to juice their brain power.
But until recently, you were out of luck if you wanted to do that without taking drugs that might be addictive, habit-forming, or associated with unfortunate behavioral side effects. Over the past few years, however, it's become increasingly clear that applying an electrical current to your head confers similar benefits.
U.S. military researchers have had great success using "transcranial direct current stimulation" (tDCS) — in which they hook you up to what's essentially a 9-volt battery and let the current flow through your brain. After a few years of lab testing, they've found that tDCS can more than double the rate at which people learn a wide range of tasks, such as object recognition, math skills, and marksmanship.
We don't yet have a commercially available "thinking cap," but we will soon. So the research community has begun to ask: What are the ethics of battery-operated cognitive enhancement? Recently, a group of Oxford neuroscientists released a cautionary statement about the ethics of brain boosting; then the U.K.'s Royal Society released a report that questioned the use of tDCS for military applications. Is brain boosting a fair addition to the cognitive enhancement arms race? Will it create a Morlock/Eloi–like social divide, where the rich can afford to be smarter and everyone else will be left behind? Will Tiger Moms force their lazy kids to strap on a zappity helmet during piano practice?
After trying it myself, I have different questions. To make you understand, I am going to tell you how it felt. The experience wasn't simply about the easy pleasure of undeserved expertise. For me, it was a near-spiritual experience. When a nice neuroscientist named Michael Weisend put the electrodes on me, what defined the experience was not feeling smarter or learning faster: The thing that made the earth drop out from under my feet was that for the first time in my life, everything in my head finally shut up.
The experiment I underwent was accelerated marksmanship training, using a training simulation that the military uses. I spent a few hours learning how to shoot a modified M4 close-range assault rifle, first without tDCS and then with. Without it I was terrible, and when you're terrible at something, all you can do is obsess about how terrible you are. And how much you want to stop doing the thing you are terrible at.
Then this happened:
THE 20 MINUTES I spent hitting targets while electricity coursed through my brain were far from transcendent. I only remember feeling like I'd just had an excellent cup of coffee, but without the caffeine jitters. I felt clear-headed and like myself, just sharper. Calmer. Without fear and without doubt. From there on, I just spent the time waiting for a problem to appear so that I could solve it.
It was only when they turned off the current that I grasped what had just happened. Relieved of the minefield of self-doubt that constitutes my basic personality, I was a hell of a shot. And I can't tell you how stunning it was to suddenly understand just how much of a drag that inner cacophony is on my ability to navigate life and basic tasks.
It's possibly the world's biggest cliché that we're our own worst enemies. In yoga, they tell you that you need to learn to get out of your own way. Practices like yoga are meant to help you exhume the person you are without all the geologic layers of narrative and cross talk that are constantly chattering in your brain. I think eventually they just become background noise. We stop hearing them consciously, but believe me, we listen to them just the same.
My brain without self-doubt was a revelation. There was suddenly this incredible silence in my head; I've experienced something close to it during two-hour Iyengar yoga classes, or at the end of a 10k, but the fragile peace in my head would be shattered almost the second I set foot outside the calm of the studio. I had certainly never experienced instant Zen in the frustrating middle of something I was terrible at.
WHAT HAD HAPPENED inside my skull? One theory is that the mild electrical shock may depolarize the neuronal membranes in the part of the brain associated with object recognition, making the cells more excitable and responsive to inputs. Like many other neuroscientists working with tDCS, Weisend thinks this accelerates the formation of new neural pathways during the time that someone practices a skill, making it easier to get into the "zone." The method he was using on me boosted the speed with which wannabe snipers could detect a threat by a factor of 2.3.
Another possibility is that the electrodes somehow reduce activity in the prefrontal cortex — the area of the brain used in critical thought, says psychologist Mihaly Csikszentmihalyi of Claremont Graduate University in California. And critical thought, some neuroscientists believe, is muted during periods of intense Zen-like concentration. It sounds counterintuitive, but silencing self-critical thoughts might allow more automatic processes to take hold, which would in turn produce that effortless feeling of flow.
With the electrodes on, my constant self-criticism virtually disappeared, I hit every one of the targets, and there were no unpleasant side effects afterwards. The bewitching silence of the tDCS lasted, gradually diminishing over a period of about three days. The inevitable return of self-doubt and inattention was disheartening, to say the least.
I HOPE YOU can sympathize with me when I tell you that the thing I wanted most acutely for the weeks following my experience was to go back and strap on those electrodes. I also started to have a lot of questions. Who was I apart from the angry bitter gnomes that populate my mind and drive me to failure because I'm too scared to try? And where did those voices come from? Some of them are personal history, like the caustically dismissive 7th grade science teacher who advised me to become a waitress. Some of them are societal, like the hateful lady-mag voices that bully me every time I look in a mirror. An invisible narrative informs all my waking decisions in ways I can't even keep track of.
What would a world look like in which we all wore little tDCS headbands that would keep us in a primed, confident state, free of all doubts and fears? I'd wear one at all times and have two in my backpack ready in case something happened to the first one.
I think the ethical questions we should be asking about tDCS are much more subtle than the ones we've been asking about cognitive enhancement. Because how you define "cognitive enhancement" frames the debate about its ethics.
If you told me tDCS would allow someone to study twice as fast for the bar exam, I might be a little leery because now I have visions of rich daddies paying for Junior's thinking cap. Neuroscientists like Roy Hamilton have termed this kind of application "cosmetic neuroscience," which implies a kind of "First World problem" — frivolity.
But now think of a different application — could school-age girls use the zappy cap while studying math to drown out the voices that tell them they can't do math because they're girls? How many studies have found a link between invasive stereotypes and poor test performance?
And then, finally, the main question: What role do doubt and fear play in our lives if their eradication actually causes so many improvements? Do we make more ethical decisions when we listen to our inner voices of self-doubt or when we're freed from them? If we all wore these caps, would the world be a better place?
And if tDCS headwear were to become widespread, would the same 20 minutes with a 2 milliamp current always deliver the same effects, or would you need to up your dose like you do with some other drugs?
Because, to steal a great point from an online commenter, pretty soon, a 9-volt battery may no longer be enough.
©2012 by Sally Adee, reprinted by permission of New Scientist. The full article can be found at NewScientist.com.
-
October 19, 2011
Don’t Blink! The Hazards of Confidence
By DANIEL KAHNEMAN
Many decades ago I spent what seemed like a great deal of time under a scorching sun, watching groups of sweaty soldiers as they solved a problem. I was doing my national service in the Israeli Army at the time. I had completed an undergraduate degree in psychology, and after a year as an infantry officer, I was assigned to the army’s Psychology Branch, where one of my occasional duties was to help evaluate candidates for officer training. We used methods that were developed by the British Army in World War II.
One test, called the leaderless group challenge, was conducted on an obstacle field. Eight candidates, strangers to one another, with all insignia of rank removed and only numbered tags to identify them, were instructed to lift a long log from the ground and haul it to a wall about six feet high. There, they were told that the entire group had to get to the other side of the wall without the log touching either the ground or the wall, and without anyone touching the wall. If any of these things happened, they were to acknowledge it and start again.
A common solution was for several men to reach the other side by crawling along the log as the other men held it up at an angle, like a giant fishing rod. Then one man would climb onto another’s shoulder and tip the log to the far side. The last two men would then have to jump up at the log, now suspended from the other side by those who had made it over, shinny their way along its length and then leap down safely once they crossed the wall. Failure was common at this point, which required starting over.
As a colleague and I monitored the exercise, we made note of who took charge, who tried to lead but was rebuffed, how much each soldier contributed to the group effort. We saw who seemed to be stubborn, submissive, arrogant, patient, hot-tempered, persistent or a quitter. We sometimes saw competitive spite when someone whose idea had been rejected by the group no longer worked very hard. And we saw reactions to crisis: who berated a comrade whose mistake caused the whole group to fail, who stepped forward to lead when the exhausted team had to start over. Under the stress of the event, we felt, each man’s true nature revealed itself in sharp relief.
After watching the candidates go through several such tests, we had to summarize our impressions of the soldiers’ leadership abilities with a grade and determine who would be eligible for officer training. We spent some time discussing each case and reviewing our impressions. The task was not difficult, because we had already seen each of these soldiers’ leadership skills. Some of the men looked like strong leaders, others seemed like wimps or arrogant fools, others mediocre but not hopeless. Quite a few appeared to be so weak that we ruled them out as officer candidates. When our multiple observations of each candidate converged on a coherent picture, we were completely confident in our evaluations and believed that what we saw pointed directly to the future. The soldier who took over when the group was in trouble and led the team over the wall was a leader at that moment. The obvious best guess about how he would do in training, or in combat, was that he would be as effective as he had been at the wall. Any other prediction seemed inconsistent with what we saw.
Because our impressions of how well each soldier performed were generally coherent and clear, our formal predictions were just as definite. We rarely experienced doubt or conflicting impressions. We were quite willing to declare: “This one will never make it,” “That fellow is rather mediocre, but should do O.K.” or “He will be a star.” We felt no need to question our forecasts, moderate them or equivocate. If challenged, however, we were fully prepared to admit, “But of course anything could happen.”
We were willing to make that admission because, as it turned out, despite our certainty about the potential of individual candidates, our forecasts were largely useless. The evidence was overwhelming. Every few months we had a feedback session in which we could compare our evaluations of future cadets with the judgments of their commanders at the officer-training school. The story was always the same: our ability to predict performance at the school was negligible. Our forecasts were better than blind guesses, but not by much.
We were downcast for a while after receiving the discouraging news. But this was the army. Useful or not, there was a routine to be followed, and there were orders to be obeyed. Another batch of candidates would arrive the next day. We took them to the obstacle field, we faced them with the wall, they lifted the log and within a few minutes we saw their true natures revealed, as clearly as ever. The dismal truth about the quality of our predictions had no effect whatsoever on how we evaluated new candidates and very little effect on the confidence we had in our judgments and predictions.
I thought that what was happening to us was remarkable. The statistical evidence of our failure should have shaken our confidence in our judgments of particular candidates, but it did not. It should also have caused us to moderate our predictions, but it did not. We knew as a general fact that our predictions were little better than random guesses, but we continued to feel and act as if each particular prediction was valid. I was reminded of visual illusions, which remain compelling even when you know that what you see is false. I was so struck by the analogy that I coined a term for our experience: the illusion of validity.
I had discovered my first cognitive illusion.
Decades later, I can see many of the central themes of my thinking about judgment in that old experience. One of these themes is that people who face a difficult question often answer an easier one instead, without realizing it. We were required to predict a soldier’s performance in officer training and in combat, but we did so by evaluating his behavior over one hour in an artificial situation. This was a perfect instance of a general rule that I call WYSIATI, “What you see is all there is.” We had made up a story from the little we knew but had no way to allow for what we did not know about the individual’s future, which was almost everything that would actually matter. When you know as little as we did, you should not make extreme predictions like “He will be a star.” The stars we saw on the obstacle field were most likely accidental flickers, in which a coincidence of random events — like who was near the wall — largely determined who became a leader. Other events — some of them also random — would determine later success in training and combat.
You may be surprised by our failure: it is natural to expect the same leadership ability to manifest itself in various situations. But the exaggerated expectation of consistency is a common error. We are prone to think that the world is more regular and predictable than it really is, because our memory automatically and continuously maintains a story about what is going on, and because the rules of memory tend to make that story as coherent as possible and to suppress alternatives. Fast thinking is not prone to doubt.
The confidence we experience as we make a judgment is not a reasoned evaluation of the probability that it is right. Confidence is a feeling, one determined mostly by the coherence of the story and by the ease with which it comes to mind, even when the evidence for the story is sparse and unreliable. The bias toward coherence favors overconfidence. An individual who expresses high confidence probably has a good story, which may or may not be true.
I coined the term “illusion of validity” because the confidence we had in judgments about individual soldiers was not affected by a statistical fact we knew to be true — that our predictions were unrelated to the truth. This is not an isolated observation. When a compelling impression of a particular event clashes with general knowledge, the impression commonly prevails. And this goes for you, too. The confidence you will experience in your future judgments will not be diminished by what you just read, even if you believe every word.
I first visited a Wall Street firm in 1984. I was there with my longtime collaborator Amos Tversky, who died in 1996, and our friend Richard Thaler, now a guru of behavioral economics. Our host, a senior investment manager, had invited us to discuss the role of judgment biases in investing. I knew so little about finance at the time that I had no idea what to ask him, but I remember one exchange. “When you sell a stock,” I asked him, “who buys it?” He answered with a wave in the vague direction of the window, indicating that he expected the buyer to be someone else very much like him. That was odd: because most buyers and sellers know that they have the same information as one another, what made one person buy and the other sell? Buyers think the price is too low and likely to rise; sellers think the price is high and likely to drop. The puzzle is why buyers and sellers alike think that the current price is wrong.
Most people in the investment business have read Burton Malkiel’s wonderful book “A Random Walk Down Wall Street.” Malkiel’s central idea is that a stock’s price incorporates all the available knowledge about the value of the company and the best predictions about the future of the stock. If some people believe that the price of a stock will be higher tomorrow, they will buy more of it today. This, in turn, will cause its price to rise. If all assets in a market are correctly priced, no one can expect either to gain or to lose by trading.
We now know, however, that the theory is not quite right. Many individual investors lose consistently by trading, an achievement that a dart-throwing chimp could not match. The first demonstration of this startling conclusion was put forward by Terry Odean, a former student of mine who is now a finance professor at the University of California, Berkeley.
Odean analyzed the trading records of 10,000 brokerage accounts of individual investors over a seven-year period, allowing him to identify all instances in which an investor sold one stock and soon afterward bought another stock. By these actions the investor revealed that he (most of the investors were men) had a definite idea about the future of two stocks: he expected the stock that he bought to do better than the one he sold.
To determine whether those appraisals were well founded, Odean compared the returns of the two stocks over the following year. The results were unequivocally bad. On average, the shares investors sold did better than those they bought, by a very substantial margin: 3.3 percentage points per year, in addition to the significant costs of executing the trades. Some individuals did much better, others did much worse, but the large majority of individual investors would have done better by taking a nap rather than by acting on their ideas. In a paper titled “Trading Is Hazardous to Your Wealth,” Odean and his colleague Brad Barber showed that, on average, the most active traders had the poorest results, while those who traded the least earned the highest returns. In another paper, “Boys Will Be Boys,” they reported that men act on their useless ideas significantly more often than women do, and that as a result women achieve better investment results than men.
Of course, there is always someone on the other side of a transaction; in general, it’s a financial institution or professional investor, ready to take advantage of the mistakes that individual traders make. Further research by Barber and Odean has shed light on these mistakes. Individual investors like to lock in their gains; they sell “winners,” stocks whose prices have gone up, and they hang on to their losers. Unfortunately for them, recent winners tend to do better than recent losers in the short run, so individuals sell the wrong stocks. They also buy the wrong stocks. Individual investors predictably flock to stocks in companies that are in the news. Professional investors are more selective in responding to news. These findings provide some justification for the label of “smart money” that finance professionals apply to themselves.
Although professionals are able to extract a considerable amount of wealth from amateurs, few stock pickers, if any, have the skill needed to beat the market consistently, year after year. The diagnostic for the existence of any skill is the consistency of individual differences in achievement. The logic is simple: if individual differences in any one year are due entirely to luck, the ranking of investors and funds will vary erratically and the year-to-year correlation will be zero. Where there is skill, however, the rankings will be more stable. The persistence of individual differences is the measure by which we confirm the existence of skill among golfers, orthodontists or speedy toll collectors on the turnpike.
Mutual funds are run by highly experienced and hard-working professionals who buy and sell stocks to achieve the best possible results for their clients. Nevertheless, the evidence from more than 50 years of research is conclusive: for a large majority of fund managers, the selection of stocks is more like rolling dice than like playing poker. At least two out of every three mutual funds underperform the overall market in any given year.
More important, the year-to-year correlation among the outcomes of mutual funds is very small, barely different from zero. The funds that were successful in any given year were mostly lucky; they had a good roll of the dice. There is general agreement among researchers that this is true for nearly all stock pickers, whether they know it or not — and most do not. The subjective experience of traders is that they are making sensible, educated guesses in a situation of great uncertainty. In highly efficient markets, however, educated guesses are not more accurate than blind guesses.
Some years after my introduction to the world of finance, I had an unusual opportunity to examine the illusion of skill up close. I was invited to speak to a group of investment advisers in a firm that provided financial advice and other services to very wealthy clients. I asked for some data to prepare my presentation and was granted a small treasure: a spreadsheet summarizing the investment outcomes of some 25 anonymous wealth advisers, for eight consecutive years. The advisers’ scores for each year were the main determinant of their year-end bonuses. It was a simple matter to rank the advisers by their performance and to answer a question: Did the same advisers consistently achieve better returns for their clients year after year? Did some advisers consistently display more skill than others?
To find the answer, I computed the correlations between the rankings of advisers in different years, comparing Year 1 with Year 2, Year 1 with Year 3 and so on up through Year 7 with Year 8. That yielded 28 correlations, one for each pair of years. While I was prepared to find little year-to-year consistency, I was still surprised to find that the average of the 28 correlations was .01. In other words, zero. The stability that would indicate differences in skill was not to be found. The results resembled what you would expect from a dice-rolling contest, not a game of skill.
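Kahneman does not publish the spreadsheet or the calculation itself, but the arithmetic he describes is easy to reproduce. Below is a minimal sketch: the data are synthetic, standing in for the firm's confidential returns, while the adviser and year counts match those in the text. It ranks 25 hypothetical advisers by performance in each of 8 years, computes the rank correlation for all 28 pairs of years with SciPy, and averages them. Because these made-up returns are pure luck, the average lands near zero, which is the signature of no persistent skill that he reports.

```python
# Minimal sketch of the year-to-year consistency check described above.
# The returns are random noise, i.e. pure luck, so the average of the 28
# pairwise rank correlations should hover near zero.
from itertools import combinations

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(seed=0)
n_advisers, n_years = 25, 8

# Hypothetical stand-in for the firm's spreadsheet of adviser outcomes.
returns = rng.normal(loc=0.07, scale=0.15, size=(n_advisers, n_years))

pairwise = []
for y1, y2 in combinations(range(n_years), 2):   # Year 1 vs Year 2, ... : 28 pairs
    rho, _ = spearmanr(returns[:, y1], returns[:, y2])
    pairwise.append(rho)

print(f"{len(pairwise)} correlations, average = {np.mean(pairwise):+.3f}")
```

Rerun the same script after adding a fixed per-adviser skill offset to each year's returns and the average correlation rises well above zero, which is exactly the stability Kahneman says he failed to find in the real data.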
No one in the firm seemed to be aware of the nature of the game that its stock pickers were playing. The advisers themselves felt they were competent professionals performing a task that was difficult but not impossible, and their superiors agreed. On the evening before the seminar, Richard Thaler and I had dinner with some of the top executives of the firm, the people who decide on the size of bonuses. We asked them to guess the year-to-year correlation in the rankings of individual advisers. They thought they knew what was coming and smiled as they said, “not very high” or “performance certainly fluctuates.” It quickly became clear, however, that no one expected the average correlation to be zero.
What we told the directors of the firm was that, at least when it came to building portfolios, the firm was rewarding luck as if it were skill. This should have been shocking news to them, but it was not. There was no sign that they disbelieved us. How could they? After all, we had analyzed their own results, and they were certainly sophisticated enough to appreciate their implications, which we politely refrained from spelling out. We all went on calmly with our dinner, and I am quite sure that both our findings and their implications were quickly swept under the rug and that life in the firm went on just as before. The illusion of skill is not only an individual aberration; it is deeply ingrained in the culture of the industry. Facts that challenge such basic assumptions — and thereby threaten people’s livelihood and self-esteem — are simply not absorbed. The mind does not digest them. This is particularly true of statistical studies of performance, which provide general facts that people will ignore if they conflict with their personal experience.
The next morning, we reported the findings to the advisers, and their response was equally bland. Their personal experience of exercising careful professional judgment on complex problems was far more compelling to them than an obscure statistical result. When we were done, one executive I dined with the previous evening drove me to the airport. He told me, with a trace of defensiveness, “I have done very well for the firm, and no one can take that away from me.” I smiled and said nothing. But I thought, privately: Well, I took it away from you this morning. If your success was due mostly to chance, how much credit are you entitled to take for it?
We often interact with professionals who exercise their judgment with evident confidence, sometimes priding themselves on the power of their intuition. In a world rife with illusions of validity and skill, can we trust them? How do we distinguish the justified confidence of experts from the sincere overconfidence of professionals who do not know they are out of their depth? We can believe an expert who admits uncertainty but cannot take expressions of high confidence at face value. As I first learned on the obstacle field, people come up with coherent stories and confident predictions even when they know little or nothing. Overconfidence arises because people are often blind to their own blindness.
True intuitive expertise is learned from prolonged experience with good feedback on mistakes. You are probably an expert in guessing your spouse’s mood from one word on the telephone; chess players find a strong move in a single glance at a complex position; and true legends of instant diagnoses are common among physicians. To know whether you can trust a particular intuitive judgment, there are two questions you should ask: Is the environment in which the judgment is made sufficiently regular to enable predictions from the available evidence? The answer is yes for diagnosticians, no for stock pickers. Do the professionals have an adequate opportunity to learn the cues and the regularities? The answer here depends on the professionals’ experience and on the quality and speed with which they discover their mistakes. Anesthesiologists have a better chance to develop intuitions than radiologists do. Many of the professionals we encounter easily pass both tests, and their off-the-cuff judgments deserve to be taken seriously. In general, however, you should not take assertive and confident people at their own evaluation unless you have independent reason to believe that they know what they are talking about. Unfortunately, this advice is difficult to follow: overconfident professionals sincerely believe they have expertise, act as experts and look like experts. You will have to struggle to remind yourself that they may be in the grip of an illusion.
Daniel Kahneman is emeritus professor of psychology and of public affairs at Princeton University and a winner of the 2002 Nobel Prize in Economics. This article is adapted from his book “Thinking, Fast and Slow,” out this month from Farrar, Straus & Giroux.
http://www.nytimes.com/2011/10/23/magazine/dont-blink-the-hazards-of-confidence.html?pagewanted=all
-
http://www.businessinsider.com/golden-balls-game-theory-2012-4
-
http://www.redorbit.com/news/science/1112661209/young-gorillas-observed-dismantling-poacher-snares/
Young Gorillas Observed Dismantling Poacher Snares
July 23, 2012
Juvenile gorillas from the Kuryama group dismantle a snare in Rwanda's Volcanoes National Park Credit: Dian Fossey Gorilla Fund International
In what can only be described as an impassioned effort to save their own kind from the hands of poachers, two juvenile mountain gorillas have been observed searching out and dismantling manmade traps and snares in their Rwandan forest home, according to a group studying the majestic creatures.
Conservationists working for the Dian Fossey Gorilla Fund International were stunned when they saw Dukore and Rwema, two brave young mountain gorillas, destroying a trap, similar to ones that snared and killed a member of their family less than a week before. Bush-meat hunters set thousands of traps throughout the forests of Rwanda, hoping to catch antelope and other species, but sometimes they capture apes as well.
In an interview with Mark Prigg at The Daily Mail, Erika Archibald, a spokesperson for the Gorilla Fund, said that John Ndayambaje, a tracker for the group, was conducting his regular rounds when he spotted a snare. As he bent down to dismantle it, a silverback from the group rushed him and made a grunting noise that is considered a warning call. A few moments later the two youngsters Dukore and Rwema rushed up to the snare and began to dismantle it on their own.
Then, seconds after destroying the one trap, Archibald continued, Ndayambaje witnessed the pair, along with a third juvenile named Tetero, move to another and dismantle that one as well, one that he had not noticed beforehand. He stood there in amazement.
“We have quite a long record of seeing silverbacks dismantle snares,” Archibald told Prigg. “But we had never seen it passed on to youngsters like that.” And the youngsters moved “with such speed and purpose and such clarity … knowing,” she added. “This is absolutely the first time that we’ve seen juveniles doing that … I don’t know of any other reports in the world of juveniles destroying snares,” Veronica Vecellio, gorilla program coordinator at the Dian Fossey Gorilla Fund’s Karisoke Research Center, told National Geographic.
Every day trackers from the Karisoke center scour the forest for snares, dismantling any they find in order to protect the endangered mountain gorillas, which the International Union for Conservation of Nature (IUCN) says face “a very high risk of extinction in the wild.”
Adults generally have enough strength to free themselves from the snares, but juveniles usually do not, and often die as a result of snare-related wounds. Such was the case of an ensnared infant, Ngwino, found too late by Karisoke workers last week. The infant’s shoulder was dislocated during an escape attempt, and gangrene had set in after the ropes cut deep into her leg.
A snare consists of a noose tied to a branch or a bamboo stalk. The rope is pulled downward, bending the branch, and a rock or bent stick is used to hold the noose to the ground, keeping the branch tight. Then vegetation is placed over the noose to camouflage it. When an animal budges the rock or stick, the branch swings upward and the noose closes around the prey, usually the leg, and, depending on the weight of the animal, is hoisted up into the air.
Vecellio said the speed with which everything happened leads her to believe this wasn’t the first time the juveniles had dismantled a trap.
“They were very confident,” she said. “They saw what they had to do, they did it, and then they left.”
Since gorillas in the Kuryama group have been snared before, Vecellio said it is likely that the juveniles know these snares are dangerous. “That’s why they destroyed them.”
“Chimpanzees are always quoted as being the tool users, but I think, when the situation provides itself, gorillas are quite ingenious” too, said veterinarian Mike Cranfield, executive director of the Mountain Gorilla Veterinary Project.
He speculated that the gorillas may have learned how to destroy the traps by watching the Karisoke trackers. “If we could get more of them doing it, it would be great,” he joked.
But Vecellio said it would go against Karisoke center policies and ethos to actively instruct the apes. “We try as much as we can to not interfere with the gorillas. We don’t want to affect their natural behavior.”
Pictures of the incident have gone viral and numerous fans on the Fund’s Facebook page have shared comments cheering for the young gorillas. Archibald said capturing the interaction was “so touching that I felt everybody with any brains would be touched.”
-
Are We Really Getting Smarter?
Americans' IQ scores have risen steadily over the past century. James R. Flynn examines why.
By JAMES R. FLYNN
IQ tests aren't perfect, but they can be useful. If a boy doing badly in class does really well on one, it is worth investigating whether he is being bullied at school or having problems at home. The tests also roughly predict who will succeed at college, though factors like motivation and self-control are at least as important.
Advanced nations like the U.S. have experienced massive IQ gains over time (a phenomenon that I first noted in a 1984 study and is now known as the "Flynn Effect"). From the early 1900s to today, Americans have gained three IQ points per decade on both the Stanford-Binet Intelligence Scales and the Wechsler Intelligence Scales. These tests have been around since the early 20th century in some form, though they have been updated over time. Another test, Raven's Progressive Matrices, was invented in 1938, but there are scores for people whose birth dates go back to 1872. It shows gains of five points per decade.
In 1910, scored against today's norms, our ancestors would have had an average IQ of 70 (or 50 if we tested with Raven's). By comparison, our mean IQ today is 130 to 150, depending on the test. Are we geniuses or were they just dense?
These alternatives sparked a wave of skepticism about IQ. How could we claim that the tests were valid when they implied such nonsense? Our ancestors weren't dumb compared with us, of course. They had the same practical intelligence and ability to deal with the everyday world that we do. Where we differ from them is more fundamental: Rising IQ scores show how the modern world, particularly education, has changed the human mind itself and set us apart from our ancestors. They lived in a much simpler world, and most had no formal schooling beyond the sixth grade.
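The back-projection behind those numbers is simple arithmetic. Here is a minimal sketch using the per-decade gains cited above (3 points on the Stanford-Binet and Wechsler, 5 on Raven's); the reference year and the rounding are illustrative assumptions, not figures from the article.

```python
# Score one era's population against another era's norms (norm mean = 100),
# assuming a constant gain per decade as described in the article.
def mean_iq_vs_reference(year, gain_per_decade, reference_year=2012):
    decades_elapsed = (reference_year - year) / 10
    return 100 - gain_per_decade * decades_elapsed

print(round(mean_iq_vs_reference(1910, gain_per_decade=3)))   # ~69: the "average IQ of 70" figure
print(round(mean_iq_vs_reference(1910, gain_per_decade=5)))   # ~49: the "50 if we tested with Raven's"

# Run the other way: today's mean scored against 1910 norms.
print(round(100 + 3 * 10.2), round(100 + 5 * 10.2))           # ~131 and ~151, the "130 to 150" range
```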
The Raven's test uses images to convey logical relationships. The Wechsler has 10 subtests, some of which do much the same, while others measure traits that intelligent people are likely to pick up over time, such as a large vocabulary and the ability to classify objects.
Modern people do so well on these tests because we are new and peculiar. We are the first of our species to live in a world dominated by categories, hypotheticals, nonverbal symbols and visual images that paint alternative realities. We have evolved to deal with a world that would have been alien to previous generations.
Raven's Progressive Matrices are nonverbal, multiple-choice measures of general intelligence. In each test item, the subject is asked to identify the missing element that completes a pattern.
A century ago, people mostly used their minds to manipulate the concrete world for advantage. They wore what I call "utilitarian spectacles." Our minds now tend toward logical analysis of abstract symbols—what I call "scientific spectacles." Today we tend to classify things rather than to be obsessed with their differences. We take the hypothetical seriously and easily discern symbolic relationships.
The mind-set of the past can be seen in interviews between the great psychologist Alexander Luria and residents of rural Russia during the 1920s—people who, like ourselves in 1910, had little formal education.
Luria: What do a fish and crow have in common?
Reply: A fish it lives in water, a crow flies.
Luria: Could you use one word for them both?
Reply: If you called them "animals" that wouldn't be right. A fish isn't an animal, and a crow isn't either. A person can eat a fish but not a crow.
The prescientific person is fixated on differences between things that give them different uses. My father was born in 1885. If you asked him what dogs and rabbits had in common, he would have said, "You use dogs to hunt rabbits." Today a schoolboy would say, "They are both mammals." The latter is the right answer on an IQ test. Today we find it quite natural to classify the world as a prerequisite to understanding it.
Here is another example.
Luria: There are no camels in Germany; the city of B is in Germany; are there camels there or not?
Reply: I don't know, I have never seen German villages. If B is a large city, there should be camels there.
Luria: But what if there aren't any in all of Germany?
Reply: If B is a village, there is probably no room for camels.
The prescientific Russian wasn't about to treat something as important as the existence of camels hypothetically. Resistance to the hypothetical isn't just a state of mind unfriendly to IQ tests. Moral argument is more mature today than a century ago because we take the hypothetical seriously: We can imagine alternate scenarios and put ourselves in the shoes of others.
The following invented examples (not from an IQ test) show how our minds have evolved. All three present a series that implies a relationship; you must discern that relationship and complete the series based on multiple-choice answers:
1. [gun] / [gun] / [bullet] 2. [bow] / [bow] / [blank].
Pictures that represent concrete objects convey the relationship. In 1910, the average person could choose "arrow" as the answer.
1. [square] / [square] / [triangle]. 2. [circle] / [circle] / [blank].
In this question, the relationship is conveyed by shapes, not concrete objects. By 1960, many could choose semicircle as the answer: Just as the square is halved into a triangle, so the circle should be halved.
1. * / & / ? 2. M / B / [blank].
In this question, the relationship is simply that the symbols have nothing in common except that they are the same kind of symbol. That "relationship" transcends the literal appearance of the symbols themselves. By 2010, many could choose "any letter other than M or B" from the list as the answer.
This progression signals a growing ability to cope with formal education, not just in algebra but also in the humanities. Consider the exam questions that schools posed to 14-year-olds in 1910 and 1990. The earlier exams were all about socially valuable information: What were the capitals of the 45 states? Later tests were all about relationships: Why is the capital of many states not the largest city? Rural-dominated state legislatures hated big cities and chose Albany over New York, Harrisburg over Philadelphia, and so forth.
Our lives are utterly different from those led by most Americans before 1910. The average American went to school for less than six years and then worked long hours in factories, shops or agriculture. The only artificial images they saw were drawings or photographs. Aside from basic arithmetic, nonverbal symbols were restricted to musical notation (for an elite) and playing cards. Their minds were focused on ownership, the useful, the beneficial and the harmful.
Widespread secondary education has created a mass clientele for books, plays and the arts. Since 1950, there have been large gains on vocabulary and information subtests, at least for adults. More words mean that more concepts are conveyed. More information means that more connections are perceived. Better analysis of hypothetical situations means more innovation. As the modern mind developed, people performed better not only as technicians and scientists but also as administrators and executives.
A greater pool of those capable of understanding abstractions, more contact with people who enjoy playing with ideas, the enhancement of leisure—all of these developments have benefited society. And they have come about without upgrading the human brain genetically or physiologically. Our mental abilities have grown, simply enough, through a wider acquaintance with the world's possibilities.
—Mr. Flynn is the author of "Are We Getting Smarter? Rising IQ in the 21st Century" (Cambridge University Press).
-
http://pjmedia.com/lifestyle/2012/09/29/the-5-unique-ways-intelligent-people-screw-up-their-lives/?singlepage=true
-
http://www.redorbit.com/news/science/1112722935/brain-empathy-analytical-thinking-103112/
-
http://fun.mivzakon.co.il/video/General/8740/%D7%9E%D7%97%D7%A7%D7%A8.html
-
http://www.huffingtonpost.com/2014/02/06/crow-smartest-bird_n_4738171.html
-
http://www.igooglemo.com/2014/06/amazing-young-elephant-paints-elephant_15.html
-
Extreme altruism
Right on!
Self-sacrifice, it seems, is the biological opposite of psychopathy
Sep 20th 2014 | From the print edition
FLYERS at petrol stations do not normally ask for someone to donate a kidney to an unrelated stranger. That such a poster, in a garage in Indiana, actually did persuade a donor to come forward might seem extraordinary. But extraordinary people such as the respondent to this appeal (those who volunteer to deliver aid by truck in Syria at the moment might also qualify) are sufficiently common to be worth investigating. And in a paper published this week in the Proceedings of the National Academy of Sciences, Abigail Marsh of Georgetown University and her colleagues do just that. Their conclusion is that extreme altruists are at one end of a “caring continuum” which exists in human populations—a continuum that has psychopaths at the other end.
Biology has long struggled with the concept of altruism. There is now reasonable agreement that its purpose is partly to be nice to relatives (with whom one shares genes) and partly to permit the exchanging of favours. But how the brain goes about being altruistic is unknown. Dr Marsh therefore wondered if the brains of extreme altruists might have observable differences from other brains—and, in particular, whether such differences might be the obverse of those seen in psychopaths.
She and her team used two brain-scanning techniques, structural and functional magnetic-resonance imaging (MRI), to study the amygdalas of 39 volunteers, 19 of whom were altruistic kidney donors. (The amygdalas, of which brains have two, one in each hemisphere, are areas of tissue central to the processing of emotion and empathy.) Structural MRI showed that the right amygdalas of altruists were 8.1% larger, on average, than those of people in the control group, though everyone’s left amygdalas were about the same size. That is, indeed, the obverse of what pertains in psychopaths, whose right amygdalas, previous studies have shown, are smaller than those of controls.
Functional MRI yielded similar results. Participants, while lying in a scanner, were shown pictures of men and women wearing fearful, angry or neutral expressions on their faces. Each volunteer went through four consecutive runs of 80 such images, and the fearful images (but not the other sorts) produced much more activity in the right amygdalas of the altruists than they did in those of the control groups, while the left amygdalas showed no such response. That, again, is the obverse of what previous work has shown is true of psychopaths, though in neither case is it clear why only the right amygdala is affected.
Dr Marsh’s result is interesting as much for what it says about psychopathy as for what it says about extreme altruism. Some biologists regard psychopathy as adaptive. They argue that if a psychopath can bully non-psychopaths into giving him what he wants, he will be at a reproductive advantage as long as most of the population is not psychopathic. The genes underpinning psychopathy will thus persist, though they can never become ubiquitous because psychopathy works only when there are non-psychopaths to prey on.
In contrast, Dr Marsh’s work suggests that what is going on is more like the way human height varies. Being tall is not a specific adaptation (though lots of research suggests tall people do better, in many ways, than short people do). Rather, tall people (and also short people) are outliers caused by unusual combinations of the many genes that govern height. If Dr Marsh is correct, psychopaths and extreme altruists may be the result of similar, rare combinations of genes underpinning the more normal human propensity to be moderately altruistic.
From the print edition: Science and technology
-
http://www.iflscience.com/technology/worms-mind-robot-body
-
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
The URL for Part Two can be found in Part One.
-
Haven't read this yet, posting it here for my future convenience:
http://ieet.org/index.php/IEET/more/pellissier20130620
-
http://wakeup-world.com/2013/06/04/autistic-boy-discovers-gift-after-removal-from-state-run-therapy/
-
http://www.bioedge.org/bioethics/is-it-ethical-to-investigate-the-genetic-component-of-iq/11594
-
Game ON: the end of the old economic system is in sight
Posted: 12 Mar 2016 11:23 AM PST
Google is a pioneer in limited artificial general intelligence (aka computers that can learn without being preprogrammed). One successful example is AlphaGo, which just beat Go grandmaster Lee Sedol three times in a row.
What makes this win interesting is that AlphaGo didn't win through brute force. Go is too complicated for that:
...the average 150-move game contains more possible board configurations — 10^170 — than there are atoms in the Universe, so it can’t be solved by algorithms that search exhaustively for the best move.
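A quick back-of-the-envelope check of that number (a sketch only: the count of legal positions is John Tromp's published 2016 figure and the atom count is the usual rough estimate for the observable universe; the rest is simple counting):

```python
from math import log10

# 3**361 counts every assignment of empty/black/white to the 19x19 grid, a
# naive over-count that ignores the rules of Go; the published count of
# *legal* positions is about 2.1e170. Either way, exhaustive search is out.
raw_configurations = 3 ** 361
print(f"3^361 is about 10^{log10(raw_configurations):.0f}")   # -> 10^172
print("legal Go positions ~ 10^170; atoms in the observable universe ~ 10^80")
```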
It also didn't win by extensive preprogramming by talented engineers, like IBM's Deep Blue did to win at Chess.
Instead, AlphaGo won this victory by learning how to play the game from scratch using this process:
No assumptions. AlphaGo approached the game without any assumptions. This is called a model-free approach. This allows it to program itself from scratch, by building complex models human programmers can't understand/match.
Big Data. It then learned the game by interacting with a database filled with 30 million games previously played by human beings. The ability to bootstrap a model from data removes almost all of the need for engineering and programming talent currently needed for big systems. That's huge.
Big Sim (by the way, Big Sim will be as well known as Big Data in five years <-- heard it here first). Finally, it applied and honed that learning by playing itself on 50 computers night and day until it became good enough to play a human grandmaster. (A toy sketch of this self-play loop follows below.)
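To make the self-play idea concrete, here is a deliberately tiny sketch. It is not AlphaGo's method (AlphaGo combined deep neural networks, supervised pretraining on the human games, and Monte Carlo tree search); it is just the bare "learn by playing yourself" loop, applied with tabular Q-learning to a one-pile game of Nim small enough to run in seconds. The pile size, learning rate, and episode count are all illustrative assumptions.

```python
# Toy self-play loop: one agent plays both sides of a tiny game (Nim: take
# 1-3 stones from a pile of 7; whoever takes the last stone wins) and updates
# a value table from the outcome of each game it plays against itself.
import random
from collections import defaultdict

PILE, ACTIONS = 7, (1, 2, 3)
Q = defaultdict(float)                    # Q[(stones_left, action)] -> estimated value
ALPHA, EPSILON, EPISODES = 0.5, 0.2, 50_000

def best_action(stones):
    legal = [a for a in ACTIONS if a <= stones]
    return max(legal, key=lambda a: Q[(stones, a)])

for _ in range(EPISODES):
    stones, history = PILE, []            # both "players" share the same policy
    while stones > 0:
        legal = [a for a in ACTIONS if a <= stones]
        a = random.choice(legal) if random.random() < EPSILON else best_action(stones)
        history.append((stones, a))
        stones -= a
    reward = 1.0                          # whoever took the last stone won
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward                  # moves alternate between the two players

# After enough self-play, the greedy policy should match the known optimum
# for this game: leave the opponent a multiple of 4 stones.
print({s: best_action(s) for s in range(1, PILE + 1)})
```

The point of the toy is only the shape of the loop: play yourself, score the result, push the credit back through the moves, repeat. AlphaGo's self-play stage does something analogous at vastly greater scale, with neural networks in place of a lookup table.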
The surprise of this victory isn't that it occurred. Most expected it would, eventually...
Instead, the surprise is how fast it happened. How fast AlphaGo was able to bootstrap itself to a mastery of the game. It was fast. Unreasonably fast.
However, this victory goes way beyond the game of Go. It is important because AlphaGo uses a generic technique for learning. A technique that can be used to master a HUGE range of activities, quickly. Activities that people get paid for today.
This implies the following:
This technology is going to cut through the global economy like a hot knife through butter. It learns fast and largely on its own. It's widely applicable. It doesn't only master what it has seen; it can innovate. For example: some of the unheard-of moves made by AlphaGo were considered "beautiful" by the Grandmaster it beat.
Limited AGI (deep learning in particular) will have the ability to do nearly any job currently being done by human beings -- from lawyers to judges, nurses to doctors, driving to construction -- potentially at a grandmaster's level of capability. This makes it a buzzsaw.
Very few people (and I mean very few) will be able to stay ahead of the limited AGI buzzsaw. It learns so quickly, the fate of people stranded in former factory towns gutted by "free trade" is likely to be the fate of the highest paid technorati. They simply don't have the capacity to learn fast enough or be creative enough to stay ahead of it.
Have fun,
John Robb
PS: Isn't it ironic (or not) that at the very moment in history when we demonstrate a limited AGI (potentially, a tsunami of technological change) the western industrial bureaucratic political system starts to implode due to an inability to deal with the globalization (economic, finance and communications) enabled by the last wave of technological change?
PPS: This has huge implications for warfare. I'll write more about those soon. Laying a foundation for understanding this change first.
-
Also pertains to race and education.
Charles Murray was co-author of The Bell Curve, a very long scientific book that became a lightning rod over one small part of it that documented differences in measured intelligence between races; therefore, to his critics, the author must be a racist... His co-author died around the time it was published, so Murray alone has had to defend the work over the two decades since.
Intelligence is 40%-80% heritable, a wide range that is nowhere near zero or 100%.
People tend to marry near their own level of intelligence, which makes the differences grow rather than equalize over time. He predicted this would have societal effects that have most certainly come true.
Being called a racist for publishing scientific data is nothing new, but Charles Murray has received more than his share of it. What he could have, or should have, done is cover up the real results to fit what people like to hear, as the climate scientists do. He didn't.
Most recently his work received a public rebuke from the President of Virginia Tech.
His response to that is a bit long but quite a worthwhile read that will save you the time of reading his 3-4 inch thick hardcover book if you haven't already read this important work.
https://www.aei.org/publication/an-open-letter-to-the-virginia-tech-community/
Charles Murray
March 17, 2016 9:00 am
An open letter to the Virginia Tech community
Last week, the president of Virginia Tech, Tim Sands, published an “open letter to the Virginia Tech community” defending lectures delivered by deplorable people like me (I’m speaking on the themes of Coming Apart on March 25). Bravo for President Sands’s defense of intellectual freedom. But I confess that I was not entirely satisfied with his characterization of my work. So I’m writing an open letter of my own.
Dear Virginia Tech community,
Since President Sands has just published an open letter making a serious allegation against me, it seems appropriate to respond. The allegation: “Dr. Murray is well known for his controversial and largely discredited work linking measures of intelligence to heredity, and specifically to race and ethnicity — a flawed socioeconomic theory that has been used by some to justify fascism, racism and eugenics.”
Let me make an allegation of my own. President Sands is unfamiliar either with the actual content of The Bell Curve — the book I wrote with Richard J. Herrnstein to which he alludes — or with the state of knowledge in psychometrics.
The Bell Curve and Charles Murray
I should begin by pointing out that the topic of the The Bell Curve was not race, but, as the book’s subtitle says, “Intelligence and Class Structure in American Life.” Our thesis was that over the last half of the 20th century, American society has become cognitively stratified. At the beginning of the penultimate chapter, Herrnstein and I summarized our message:
Predicting the course of society is chancy, but certain tendencies seem strong enough to worry about:
An increasingly isolated cognitive elite.
A merging of the cognitive elite with the affluent.
A deteriorating quality of life for people at the bottom end of the cognitive distribution.
Unchecked, these trends will lead the U.S. toward something resembling a caste society, with the underclass mired ever more firmly at the bottom and the cognitive elite ever more firmly anchored at the top, restructuring the rules of society so that it becomes harder and harder for them to lose. [p. 509].
It is obvious that these conclusions have not been discredited in the twenty-two years since they were written. They may be more accurately described as prescient.
Now to the substance of President Sands’s allegation.
The heritability of intelligence
Richard Herrnstein and I wrote that cognitive ability as measured by IQ tests is heritable, somewhere in the range of 40% to 80% [pp. 105–110], and that heritability tends to rise as people get older. This was not a scientifically controversial statement when we wrote it; that President Sands thinks it has been discredited as of 2016 is amazing.
You needn’t take my word for it. In the wake of the uproar over The Bell Curve, the American Psychological Association (APA) assembled a Task Force on Intelligence consisting of eleven of the most distinguished psychometricians in the United States. Their report, titled “Intelligence: Knowns and Unknowns,” was published in the February 1996 issue of the APA’s peer-reviewed journal, American Psychologist. Regarding the magnitude of heritability (represented by h²), here is the Task Force’s relevant paragraph. For purposes of readability, I have omitted the citations embedded in the original paragraph:
If one simply combines all available correlations in a single analysis, the heritability (h²) works out to about .50 and the between-family variance (c²) to about .25. These overall figures are misleading, however, because most of the relevant studies have been done with children. We now know that the heritability of IQ changes with age: h² goes up and c² goes down from infancy to adulthood. In childhood h² and c² for IQ are of the order of .45 and .35; by late adolescence h² is around .75 and c² is quite low (zero in some studies) [p. 85].
The position we took on heritability was squarely within the consensus state of knowledge. Since The Bell Curve was published, the range of estimates has narrowed somewhat, tending toward modestly higher estimates of heritability.
Intelligence and race
There’s no doubt that discussing intelligence and race was asking for trouble in 1994, as it still is in 2016. But that’s for political reasons, not scientific ones. Once again, the state of knowledge about the basics is not particularly controversial. The mean scores for all kinds of mental tests vary by ethnicity. No one familiar with the data disputes that most elemental statement. Regarding the most sensitive difference, between Blacks and Whites, Herrnstein and I followed the usual estimate of one standard deviation (15 IQ points), but pointed out that the magnitude varied depending on the test, sample, and where and how it was administered. What did the APA Task Force conclude? “Although studies using different tests and samples yield a range of results, the Black mean is typically about one standard deviation (about 15 points) below that of Whites. The difference is largest on those tests (verbal or nonverbal) that best represent the general intelligence factor g” [p. 93].
Is the Black/White differential diminishing? In The Bell Curve, we discussed at length the evidence that the Black/White differential has narrowed [pp. 289–295], concluding that “The answer is yes with (as usual) some qualifications.” The Task Force’s treatment of the question paralleled ours, concluding with “[l]arger and more definitive studies are needed before this trend can be regarded as established” [p. 93].
Can the Black/White differential be explained by test bias? In a long discussion [pp. 280–286], Herrnstein and I presented the massive evidence that the predictive validity of mental tests is similar for Blacks and Whites and that cultural bias in the test items or their administration do not explain the Black/White differential. The Task Force’s conclusions regarding predictive validity: “Considered as predictors of future performance, the tests do not seem to be biased against African Americans” [p. 93]. Regarding cultural bias and testing conditions: “Controlled studies [of these potential sources of bias] have shown, however, that none of them contributes substantially to the Black/White differential under discussion here” [p. 94].
Can the Black/White differential be explained by socioeconomic status? We pointed out that the question has two answers: Statistically controlling for socioeconomic status (SES) narrows the gap. But the gap does not narrow as SES goes up — i.e., measured in standard deviations, the differential between Blacks and Whites with high SES is not narrower than the differential between those with low SES [pp. 286–289]. Here’s the APA Task Force on this topic:
Several considerations suggest that [SES] cannot be the whole explanation. For one thing, the Black/White differential in test scores is not eliminated when groups or individuals are matched for SES. Moreover, the data reviewed in Section 4 suggest that—if we exclude extreme conditions—nutrition and other biological factors that may vary with SES account for relatively little of the variance in such scores [p. 94].
And so on. The notion that Herrnstein and I made claims about ethnic differences in IQ that have been scientifically rejected is simply wrong. We deliberately remained well within the mainstream of what was confidently known when we wrote. None of those descriptions have changed much in the subsequent twenty-two years, except to be reinforced as more has been learned. I have no idea what countervailing evidence President Sands could have in mind.
At this point, some readers may be saying to themselves, “But wasn’t The Bell Curve the book that tried to prove blacks were genetically inferior to whites?” I gather that was President Sands’ impression as well. It has no basis in fact. Knowing that people are preoccupied with genes and race (it was always the first topic that came up when we told people we were writing a book about IQ), Herrnstein and I offered a seventeen-page discussion of genes, race, and IQ [pp. 295–311]. The first five pages were devoted to explaining the context of the issue — why, for example, the heritability of IQ among humans does not necessarily mean that differences between groups are also heritable. Four pages were devoted to the technical literature arguing that genes were implicated in the Black/White differential. Eight pages were devoted to arguments that the causes were environmental. Then we wrote:
If the reader is now convinced that either the genetic or environmental explanation has won out to the exclusion of the other, we have not done a sufficiently good job of presenting one side or the other. It seems highly likely to us that both genes and the environment have something to do with racial differences. What might the mix be? We are resolutely agnostic on that issue; as far as we can determine, the evidence does not yet justify an estimate. [p. 311].
That’s it—the sum total of every wild-eyed claim that The Bell Curve makes about genes and race. There’s nothing else. Herrnstein and I were guilty of refusing to say that the evidence justified a conclusion that the differential had to be entirely environmental. On this issue, I have a minor quibble with the APA Task Force, which wrote “There is not much direct evidence on [a genetic component], but what little there is fails to support the genetic hypothesis” [p. 95]. Actually there was no direct evidence at all as of the mid-1990s, but the Task Force chose not to mention a considerable body of indirect evidence that did in fact support the genetic hypothesis. No matter. The Task Force did not reject the possibility of a genetic component. As of 2016, geneticists are within a few years of knowing the answer for sure, and I am content to wait for their findings.
But I cannot leave the issue of genes without mentioning how strongly Herrnstein and I rejected the importance of whether genes are involved. This passage from The Bell Curve reveals how very, very different the book is from the characterization of it that has become so widespread:
In sum: If tomorrow you knew beyond a shadow of a doubt that all the cognitive differences between races were 100 percent genetic in origin, nothing of any significance should change. The knowledge would give you no reason to treat individuals differently than if ethnic differences were 100 percent environmental. By the same token, knowing that the differences are 100 percent environmental in origin would not suggest a single program or policy that is not already being tried. It would justify no optimism about the time it will take to narrow the existing gaps. It would not even justify confidence that genetically based differences will not be upon us within a few generations. The impulse to think that environmental sources of difference are less threatening than genetic ones is natural but illusory.
In any case, you are not going to learn tomorrow that all the cognitive differences between races are 100 percent genetic in origin, because the scientific state of knowledge, unfinished as it is, already gives ample evidence that environment is part of the story. But the evidence eventually may become unequivocal that genes are also part of the story. We are worried that the elite wisdom on this issue, for years almost hysterically in denial about that possibility, will snap too far in the other direction. It is possible to face all the facts on ethnic and race differences on intelligence and not run screaming from the room. That is the essential message [pp. 314-315].
I have been reluctant to spend so much space discussing The Bell Curve’s treatment of race and intelligence because it was such an ancillary topic in the book. Focusing on it in this letter has probably made it sound as if it was as important as President Sands’s open letter implied.
But I had to do it. For two decades, I have had to put up with misrepresentations of The Bell Curve. It is annoying. After so long, when so many of the book’s main arguments have been so dramatically vindicated by events, and when our presentations of the meaning and role of IQ have been so steadily reinforced by subsequent research in the social sciences, not to mention developments in neuroscience and genetics, President Sands’s casual accusation that our work has been “largely discredited” was especially exasperating. The president of a distinguished university should take more care.
It is in that context that I came to the end of President Sands’s indictment, accusing me of promulgating “a flawed socioeconomic theory that has been used by some to justify fascism, racism and eugenics.” At that point, President Sands went beyond the kind of statement that merely reflects his unfamiliarity with The Bell Curve and/or psychometrics. He engaged in intellectual McCarthyism.
-
http://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x
-
http://neurohacker.com/qualia/
My son is intrigued by this. Any comments?
-
http://neurohacker.com/qualia/
My son is intrigued by this. Any comments?
My money is on this being a ripoff.
-
https://www.theatlantic.com/health/archive/2013/07/the-vitamin-myth-why-we-think-we-need-supplements/277947/
Crafty,
most if not all supplement sales follow a similar pattern of promotion. You get someone with a science background who recites biochemical pathways showing that a particular substance is involved in some function needed for health, from vitamin C to B12 to cofactors with magnesium, and hundreds, probably thousands, more.
They impress the non-scientist with "cofactors" and long chemical names and cite studies that show some relationship to our health. Then they may state that taking large doses of the cofactor or other chemical increases the benefit to our health over normal doses. Or they vary the presentation with claims that the nutrient has to be taken in a certain way with other substances, and then, and only then, will we reap some increased benefit to our "prostate health," our cognitive health, our digestive health, our energy, etc.
Then they find mostly obscure studies by second-rate or no-name researchers who are spending grant money, trying to make some sort of name for themselves, or, I sometimes suspect, making up data for bribes, and publish the data and their "research" in one of the money-making journals that are second-rate or not well monitored or peer reviewed (even that process is subject to outright fraud).
So now they cite the impressive-sounding biochemistry in order to sound like they understand something the rest of us do not, and they have "discovered" this chemical (or chemicals) that these usually insignificant, if not fraudulent, studies suggest has some sort of benefit.
The chemicals are often obscure, from some exotic jungle or faraway ocean or island, or come with some claim of being the only ones who can provide the proper purity or concentration or mix or other elixir that no one else can duplicate.
If any real scientist or doctor disputes their claim, they come back with a vengeance, arguing that the doctor or scientist is just threatened by this "cure" that would put him out of business.
You don't have to take my word for it, but the vast majority of these things, if not all, are scams. They all have similar themes, with variations, that play over and over again to people who are looking to stay healthy, stay young, get an edge in life, have more sexual prowess, remember more, be smarter.
There are billions to be made.
I hope I don't sound like some condescending doctor who thinks he knows it all. I don't. And I know I don't.
But even on Shark Tank, when some entrepreneur came on trying to get the sharks to buy into some sort of supplement, they said all the supplements are just a "con."
FWIW I agree with it.
-
Thank you!
-
https://www.youtube.com/watch?v=P8opBj1LjSU
-
https://www.theatlantic.com/technology/archive/2017/06/artificial-intelligence-develops-its-own-non-human-language/530436/
-
https://www.theatlantic.com/technology/archive/2017/06/artificial-intelligence-develops-its-own-non-human-language/530436/
https://www.youtube.com/watch?v=ih_l0vBISOE
-
http://www.illumeably.com/2017/07/27/success-vs-failure/
-
https://www.marketwatch.com/story/mark-cuban-tells-kyle-bass-ai-will-change-everything-weighs-in-on-bitcoin-2017-11-04?mod=cx_picks&cx_navSource=cx_picks&cx_tag=other&cx_artPos=7#cxrecs_s
There is no way to beat the machines, so you’d better bone up on what makes them tick.
That was the advice of billionaire investor Mark Cuban, who was interviewed by Kyle Bass, founder and principal of Hayman Capital Management, in late October for Real Vision. The interview published Nov. 3.
“I think artificial intelligence is going to change everything, everything, 180 degrees,” said Cuban, who added that changes seen by AI would “dwarf” the advances that have been seen in technology over the last 30 years or more, even the internet.
Cuban, the owner of the NBA's Dallas Mavericks and a regular on the TV show "Shark Tank," said AI is going to displace a lot of jobs, something that will play out fast, over the next 20 to 30 years. He said real estate is one industry that's likely to get hit hard.
“So, the concept of you calling in to make an appointment to have somebody pick up your car to get your oil changed, right — someone will still drive to get your car, but there’s going to be no people in transacting any of it,” he said.
Cuban says he's trying to learn all he can right now about machine learning, neural networks, deep learning, writing code and programming languages such as Python. Machine learning says, "OK, we can take a lot more variables than you can ever think of," he said.
And AI is seeing big demand when it comes to jobs, he said. “At the high end, we can’t pay enough to get — so when I talk about, within the artificial intelligence realm, there’s a company in China paying million-dollar bonuses to get the best graduates,” said Cuban.
The U.S. is falling badly behind when it comes to AI, with Montreal now the “center of the universe for computer vision. It’s not U.S.-based schools that are dominating any longer in those areas,” he said.
The AI companies
As for companies standing to benefit from AI, Cuban said he thinks "the FANG stocks are going to crush them," noting that his biggest public holding is Amazon.com Inc.
“They’re the world’s greatest startups with liquidity. If you look at them as just a public company where you want to see what the P/E ratio is and what the discounted cash value is -- you’re never going to get it, right? You’re never going to see it. And if you say Jeff Bezos (chief executive officer of Amazon), Reed Hastings (chief executive officer of Netflix Inc.) — those are my 2 biggest holdings,” he said.
Cuban said he’s less sold on Apple Inc., though he said it’s trying to make progress on AI, along with Alphabet Inc. and Facebook Inc. “They’re just nonstop startups. They’re in a war. And you can see the market value accumulating to them because of that,” he said.
But still, they aren’t all owning AI yet, and there are lots of opportunities for smaller companies, he added.
On digital currencies and ICO
While Bass commented that he has been just a spectator when it comes to blockchain — a decentralized ledger used to record and verify transactions — Cuban said he’s a big fan. But when it comes to bitcoin, ethereum and other cryptocurrencies, he said it would be a struggle to see them become real currencies because only a limited number of transactions can be done.
“So, it’s going to be very difficult for it to be a currency when the time and the expense of doing a transaction is 100 times what you can do over a Visa or Mastercard, right?” asked Cuban, adding that really the only value of bitcoin and ethereum is that they are just digital assets that are collectible.
“And in this particular case, it’s a brilliant collectible that’s probably more like art than baseball cards, stamps, or coins, right, because there’s a finite amount that are going to be made, right? There are 21.9 million bitcoins that are going to be made,” he said.
Cuban said initial coin offerings — fundraising for new cryptocurrency ventures — “really are an opportunity,” and he has been involved in UniCoin, which does ETrade, and Unikrm, which does legal sports betting for esports and other sports outside the United States.
But he and Bass both commented on how the industry needs regulating, with Bass noting that ICOs have raised $3 billion this year, with $2 billion going into September. While many are “actually going to do well,” so many “are just completely stupid and frauds,” he said.
“It’s the dumb ones that are going to get shut down,” agreed Cuban.
One problem: “There’s nobody at the top that has any understanding of it,” he added, referring to the Securities and Exchange Commission.
Cuban ended the interview with some advice on where to invest now. He said for those investors not too knowledgeable about markets, the best bet is a cheap S&P 500 index fund, but that putting 5% in bitcoin or ethereum isn’t a bad idea on the theory that it’s like investing in artwork.
Listen to the whole interview on Real Vision here
-
https://www.youtube.com/watch?v=pqzcCfUglws&feature=youtu.be
-
https://ourworldindata.org/intelligence
-
https://www.ted.com/talks/joshua_klein_on_the_intelligence_of_crows#t-178765
-
https://www.vice.com/en_us/article/5gw8vn/chinas-taking-over-the-world-with-a-massive-genetic-engineering-program
-
http://nautil.us//issue/40/learning/is-artificial-intelligence-permanently-inscrutable?utm_source=Nautilus&utm_campaign=270e193d5c-EMAIL_CAMPAIGN_2018_06_15_08_18&utm_medium=email&utm_term=0_dc96ec7a9d-270e193d5c-61805061
-
https://gizmodo.com/yet-more-evidence-that-viruses-may-cause-alzheimers-dis-1827511539
-
https://www.nj.com/healthfit/index.ssf/2018/07/brain_training_breakthrough_offers_new_hope_in_bat.html
-
https://www.nationalreview.com/2018/08/china-artificial-intelligence-race/?utm_source=Sailthru&utm_medium=email&utm_campaign=NR%20Daily%20Monday%20through%20Friday%202018-08-10&utm_term=NR5PM%20Actives
-
https://www.mercatornet.com/features/view/exporting-enslavement-chinas-illiberal-artificial-intelligence/21607
-
https://bigthink.com/paul-ratner/our-brains-think-in-11-dimensions-discover-scientists?utm_campaign=Echobox&utm_medium=Social&utm_source=Twitter#Echobox=1534419641
Also see
https://aeon.co/videos/our-divided-brains-are-far-more-complex-and-remarkable-than-a-left-right-split
http://runwonder.com/life/science-explains-what-happens.html
-
Highlights
Aging demographics and an emerging great power competition pitting China against the United States form the backdrop to a high-stakes race in artificial intelligence development.
The United States, for now, has a lead overall in AI development, but China is moving aggressively to try to overtake its American rival by 2030.
While deep integration across tech supply chains and markets has occurred in the past couple of decades, rising economic nationalism and a growing battle over international standards will balkanize the global tech sector.
AI advancements will boost productivity and economic growth, but creative destruction in the workforce will drive political angst in much of the world, putting China's digital authoritarianism model as well as liberal democracies to the test.
For better or worse, the advancement and diffusion of artificial intelligence technology will come to define this century. Whether that statement should fill your soul with terror or delight remains a matter of intense debate. Techno-idealists and doomsayers will paint their respective utopian and dystopian visions of machine-kind, predicting a leap from what we know now as "narrow AI" to a "general AI" that surpasses human cognition within our lifetime. On the opposite end of the spectrum, yawning skeptics will point to Siri's slow intellect and the human instinct of Capt. Chesley "Sully" Sullenberger – the pilot of the US Airways flight that successfully landed on the Hudson River in 2009 – to wave off AI chatter as a heap of hype not worth losing sleep over.
The fact is that the development of AI – a catch-all term that encompasses neural networks and machine learning and deep learning technologies – has the potential to fundamentally transform civilian and military life in the coming decades. Regardless of whether you're a businessperson pondering your next investment, an entrepreneur eyeing an emerging opportunity, a policymaker grappling with regulation or simply a citizen operating in an increasingly tech-driven society, AI is a global force that demands your attention.
The Big Picture
As Stratfor wrote in its 2018 Third-Quarter Forecast, the world is muddling through a blurry transition from the post-Cold War world to an emerging era of great power competition. The race to dominate AI development will be a defining feature of U.S.-China rivalry.
An Unstoppable Force
Willingly or not, even the deepest skeptics are feeding the AI force nearly every minute of every day. Every Google (or Baidu) search, Twitter (or Weibo) post, Facebook (or Tencent) ad and Amazon (or Alibaba) purchase is another click creating mountains of data – some 2.2 billion gigabytes globally every day – that companies are using to train their algorithms to anticipate and mimic human behavior. This creates a virtuous (or vicious, depending on your perspective) cycle: the more users engage with everyday technology platforms, the more data is collected; the more data that's collected, the more the product improves; the more competitive the product, the more users and billions of dollars in investment it will attract; a growing number of users means more data can be collected, and the loop continues.
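A rough way to see why that loop compounds: the toy model below (not from the article; every number in it is invented purely for illustration) lets accumulated data lift product quality, and quality in turn pull in new users.

import math

def simulate_flywheel(users=1_000_000, years=5, growth_rate=0.4):
    """Toy data flywheel: accumulated data lifts product quality (with
    diminishing returns), and higher quality attracts more users."""
    data = 0.0
    for year in range(1, years + 1):
        data += users                                # each user contributes a unit of data per year
        quality = min(1.0, 0.1 * math.log10(data))   # quality saturates as data piles up
        users = int(users * (1 + growth_rate * quality))
        print(f"year {year}: users={users:,}  data={data:,.0f}  quality={quality:.2f}")
    return users

if __name__ == "__main__":
    simulate_flywheel()

Run as written, the user base roughly triples over five years; raise the growth rate or the per-user data contribution and the compounding becomes much steeper, which is the dynamic the article describes.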
And unlike previous AI busts, the development of this technology is occurring amid rapidly advancing computing power, where the use of graphical processing units (GPUs) and development of custom computer chips is giving AI developers increasingly potent hardware to drive up efficiency and drive down cost in training their algorithms. To help fuel advancements in AI hardware and software, AI investment is also growing at a rapid pace.
The Geopolitical Backdrop to the Global AI Race
AI is both a driver and a consequence of structural forces reshaping the global order. Aging demographics – an unprecedented and largely irreversible global phenomenon – are a catalyst for AI development. As populations age and shrink, financial burdens on the state mount and labor productivity slows, sapping economic growth over time. Advanced industrial economies that are already struggling with the ill effects of aging, and whose governments are politically squeamish about immigration, will relentlessly look to machine learning technologies to increase productivity and economic growth in the face of growing labor constraints.
The global race for AI supremacy will feature prominently in a budding great power competition between the United States and China. China was shocked in 2016 when Google DeepMind's AlphaGo beat the world champion of Go, an ancient Chinese strategy game (Chinese AI state planners dubbed the event their "Sputnik moment"), and has been deeply shaken by U.S. President Donald Trump's trade wars and the West's growing imperative to keep sensitive technology out of Chinese competitors' hands. Just in the past couple of years alone, China's state focus on AI development has skyrocketed to ensure its technological drive won't suffer a short circuit due to its competition with the United States.
How the U.S. and China Stack Up in AI Development
Do or Die for Beijing
The United States, for now, has the lead in AI development when it comes to hardware, research and development, and a dynamic commercial AI sector. China, by the sheer size of its population, has a much larger data pool, but is critically lagging behind the United States in semiconductor development. Beijing, however, is not lacking in motivation in its bid to overtake the United States as the premier global AI leader by 2030. And while that timeline may appear aggressive, China's ambitious development in AI in the coming years will be unfettered by the growing ethical, privacy and antitrust concerns occupying the West. China is also throwing hundreds of billions of dollars into fulfilling its AI mission, both in collaboration with its standing tech champions and by encouraging the rise of unicorns, or privately held startups valued at $1 billion or more.
By incubating and rewarding more and more startups, Beijing is finding a balance between focusing its national champions on the technologies most critical to the state (sometimes by taking an equity stake in the company) without stifling innovation. In the United States, on the other hand, it would be disingenuous to label U.S.-based multinational firms, which park most of their corporate profits overseas, as true "national" champions. Instead of the state taking the lead in funding high-risk and big-impact research in emerging technologies as it has in the past, the roles in the West have been flipped; private tech companies are in the driver's seat while the state is lunging at the steering wheel, trying desperately to keep China in its rear view.
The Ideological Battleground
The United States may have thought its days of fighting globe-spanning ideological battles ended with the Cold War. Not so. AI development is spawning a new ideological battlefield between the United States and China, pitting the West's notion of liberal democracy against China's emerging brand of digital authoritarianism. As neuroscientist Nicholas Wright highlights in his article, "How Artificial Intelligence Will Reshape the Global Order," China's 2017 AI development plan "describes how the ability to predict and grasp group cognition means 'AI brings new opportunities for social construction.'" Central to this strategic initiative is China's diffusion of a "social credit system" (which is set to be fully operational by 2020) that would assign a score based on a citizen's daily activities to determine everything from airfare class and loan eligibility to which schools one's children are allowed to attend. It's a tech-powered, state-driven approach to parse model citizens from the deplorables, so to speak.
The ability to harness AI-powered facial recognition and surveillance data to shape social behavior is an appealing tool, not just for Beijing, but for other politically paranoid states that are hungry for an alternative path to stability and are underwhelmed by the West's messy track record in promoting democracy. Wright describes how Beijing has exported its Great Firewall model to Thailand and Vietnam to barricade the internet while also supplying surveillance technology to the likes of Iran, Russia, Ethiopia, Zimbabwe, Zambia and Malaysia. Not only does this aid China's goal of providing an alternative to a U.S.-led global order, but it also gives China access to ever wider data pools around the globe with which to hone its own technological prowess.
The European Hustle
Not wanting to be left behind in this AI great power race, Europe and Russia are hustling to catch up, but they will struggle in the end to keep pace with the United States and China. Russian President Vladimir Putin made headlines last year when he told an audience of Russian youths that whoever rules AI will rule the world. But the reality of Russia's capital constraints means Moscow will have to choose carefully where it puts its rubles. It will apply a heavy focus on AI military applications and will rely on cyber espionage and theft to try to find shortcuts to AI development, all while trying to maintain its strategic alignment with China to challenge the United States.
The EU Struggle to Create Unicorn Companies
While France harbors ambitious plans to develop an AI ecosystem for Europe and Germany frets over losing its industrial edge to U.S. and Chinese tech competitors, unavoidable and growing fractures within the European Union will hamper Europe's ability to play a leading AI role on the world stage. The European Union's cumbersome regulatory environment and fragmented digital market have been prohibitive for tech startups, a fact reflected in the bloc's low global share and value of unicorn companies. Meanwhile, the United Kingdom, home to Europe's largest pool of tech talent, will be keen on unshackling itself from the European Union's investment-inhibiting regulations as it stumbles out of the bloc.
A Battle over Talent and Standards
But wherever pockets of tech innovation already exist on the Continent, those relatively few companies and individuals are already prime targets for U.S. and Chinese tech juggernauts prowling the globe for AI talent. AI experts are a precious global commodity. According to a 2018 study by Element AI, there are roughly 22,000 doctorate-level AI researchers in the world, but only around 3,000 are actually looking for work and around 5,400 are presenting their research at AI conferences. U.S. and Chinese tech giants are using a variety of means – mergers and acquisitions, aggressive poaching, launching labs in places like Paris, Montreal and Taiwan – to gobble up this tiny talent pool.
Largest Tech Companies by Market Capitalization
Even as Europe struggles to build up its own tech champions, the European Union can use its market size and conscientious approach to ethics, privacy and competition to push back on encroaching tech giants through hefty fines, data localization and privacy rules, taxation and investment restrictions. The bloc's rollout of its General Data Protection Regulation (GDPR) is designed to give Europeans more control over their personal data by limiting data storage times, deleting data on request and monitoring for data breaches. While big-tech firms have the means to adapt and pay fines, the move threatens to cripple smaller firms struggling with the high cost of compliance. It also fundamentally restricts the continental data flows needed to fuel Europe's AI startup culture.
The United States in many ways shares Europe's concerns over issues like data privacy and competition, but it has a fundamentally different approach in how to manage those concerns. The European Union is effectively prioritizing individual privacy rights over free speech, while the United States does the reverse. Brussels will fixate on fairness, even at the cost of the bloc's own economic competitiveness, while Washington will generally avoid getting in the way of its tech champions. For example, while the European Union will argue that Google's dominance in multiple technological applications is by itself an abuse of its power that stifles competition, the United States will refrain from raising the antitrust flag unless tech giants are using their dominant position to raise prices for consumers.
U.S. and European government policy overlap instead in their growing scrutiny over foreign investment in sensitive technology sectors. Of particular concern is China's aggressive, tech-focused overseas investment drive and the already deep integration of Chinese hardware and software in key technologies used globally. A highly diversified company like Huawei, a pioneer in cutting-edge technologies like 5G and a mass producer of smartphones and telecommunications equipment, can leverage its global market share to play an influential role in setting international standards.
Washington, meanwhile, is lagging behind Brussels and Beijing in the race to establish international norms for cyber policy. While China and Russia have been persistent in their attempts to use international venues like the United Nations to codify their version of state-heavy cyber policy, the European Union has worked to block those efforts while pushing its own standards, such as GDPR.
This emerging dynamic of tightening restrictions in the West overall against Chinese tech encroachment, Europe's aggressive regulatory push against U.S. tech giants and China's defense and export of digital authoritarianism may altogether lead to a much more balkanized market for global tech companies in the future.
The AI Political Test of the Century
There is no shortage of AI reports by big-name consulting firms telegraphing to corporate audiences the massive productivity gains to come from AI in a range of industries, from financial, auto, insurance and retail to construction, cleaning and security. A 2017 PwC report estimated that AI could add $15.7 trillion to the global economy in 2030, of which $6.6 trillion would come from increased productivity and $9.1 trillion would come from increased consumption. The potential for double-digit impacts on GDP after years of stalled growth in much of the world is appealing, no doubt.
But lurking behind those massive figures is the question of just how well, how quickly and how much of a country's workforce will be able to adapt to these fast-moving changes. As the Austrian economist Joseph Schumpeter described in his 1942 book, Capitalism, Socialism and Democracy, the "creative destruction" that results from so-called industrial mutations "incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one." In the age of AI, the market will incessantly seek out scientists and creative thinkers. Machines will endlessly render millions of workers irrelevant. And new jobs, from AI empathy trainers to life coaches, will be created. Even as technology translates into productivity and economic gains overall, this will be a wrenching transition if workers are slow to learn new skills and if wage growth remains stagnant for much of the population.
Time will tell which model will be better able to cope with an expected rise in political angst as the world undergoes this AI revolution: China's untested model of digital authoritarianism or the West's time-tested, yet embattled, tradition in liberal democracy.
-
second post
How the U.S.-China Power Competition Is Shaping the Future of AI Ethics
By Rebecca Keller
Senior Science and Technology Analyst, Stratfor
(Photo caption: A U.S. Air Force MQ-1B Predator unmanned aerial vehicle returns from a mission to an air base in the Persian Gulf region.)
As artificial intelligence applications develop and expand, countries and corporations will have different opinions on how and when technologies should be employed. First movers like the United States and China will have an advantage in setting international standards.
China will push back against existing Western-led ethical norms as its level of global influence rises and the major powers race to become technologically dominant.
In the future, ethical decisions that prevent adoption of artificial intelligence applications in certain fields could limit political, security and economic advantages for specific countries.
Controversial new technologies such as automation and artificial intelligence are quickly becoming ubiquitous, prompting ethical questions about their uses in both the private and state spheres. A broader shift on the global stage will drive the regulations and societal standards that will, in turn, influence technological adoption. As countries and corporations race to achieve technological dominance, they will engage in a tug of war between different sets of values while striving to establish ethical standards. Western values have long been dominant in setting these standards, as the United States has traditionally been the most influential innovative global force. But China, which has successfully prioritized economic growth and technological development over the past several decades, is likely to play a bigger role in the future when it comes to tech ethics.
The Big Picture
The great power competition between China and the United States continues to evolve, leading to pushback against international norms, organizations and oversight. As the world sits at a key intersection of geopolitical and technological development, the battles to set new global standards will play out on emerging technological stages.
The field of artificial intelligence will be one of the biggest areas where different players will be working to establish regulatory guardrails and answer ethical questions in the future. Science fiction writer Isaac Asimov wrote his influential laws of robotics in the first half of the 20th century, and reality is now catching up to fiction. Questions over the ethics of AI and its potential applications are numerous: What constitutes bias within the algorithms? Who owns data? What privacy measures should be employed? And just how much control should humans retain in applying AI-driven automation? For many of these questions, there is no easy answer. And in fact, as the great power competition between China and the United States ramps up, they prompt another question: Who is going to answer them?
Questions of right and wrong are based on the inherent cultural values ingrained within a place. From an economic perspective, the Western ideal has always been the laissez-faire economy. And ethically, Western norms have prioritized privacy and the importance of human rights. But China is challenging those norms and ideals, as it uses a powerful state hand to run its economy and often chooses to sacrifice privacy in the name of development. On yet another front, societal trust in technology can also differ, influencing the commercial and military use of artificial intelligence.
Different Approaches to Privacy
One area where countries that intend to set global ethical standards for the future of technology have focused their attention is in the use and monetization of personal data. From a scientific perspective, more data equals better, smarter AI, meaning those with access to and a willingness to use that data could have a future advantage. However, ethical concerns over data ownership and the privacy of individuals and even corporations can and do limit data dispersion and use.
How various entities are handling the question of data privacy is an early gauge for how far AI application can go, in private and commercial use. It is also a question that reveals a major divergence in values. With its General Data Protection Regulation, which went into effect this year, the European Union has taken an early global lead on protecting the rights of individuals. Several U.S. states have passed or are working to pass similar legislation, and the U.S. government is currently considering an overarching federal policy that covers individual data privacy rights.
China, on the other hand, has demonstrated a willingness to prioritize the betterment of the state over the value of personal privacy. The Chinese public is generally supportive of initiatives that use personal data and apply algorithms. For example, there has been little domestic objection to a new state-driven initiative to monitor behavior — from purchases to social media activity to travel — using AI to assign a corresponding "social score." The score would translate to a level of "trustworthiness" that would allow, or deny, access to certain privileges. The program, meant to be fully operational by 2020, will track citizens, government officials and businesses. Similarly, facial recognition technology is already used, though not ubiquitously, throughout the country and is projected to play an increasingly important role in Chinese law enforcement and governance. China's use of such algorithm-based systems would make it among the first states to place such heavy reliance on the decision-making capabilities of computers.
When Ethics Cross Borders and Machine Autonomy Increases
Within a country's borders, the use of AI technology for domestic security and governance purposes may certainly raise questions from human rights groups, but those questions are amplified when use of the technology crosses borders and affects international relationships. One example is Google's potential project to develop a censored search app for the Chinese market. By seeking to enter China's market on the government's terms, Google could also be seen as perpetuating the Chinese government's values and views on censorship. The company left China in 2010 over objections to that very matter.
And these current issues are relatively small in comparison to questions looming on the horizon. Ever-improving algorithms and applications will soon prompt queries about how much autonomy machines "should" have, going far beyond today's credit scores, loans or even social scores. Take automated driving, for example, a seemingly more innocuous application of artificial intelligence and automation. How much control should a human have while in a vehicle? If there is no human involved, who is responsible if and when there is an accident? The answer varies depending on where the question is asked. In societies that trust technology more, like Japan, South Korea or China, it will likely be easier to remove key components such as steering wheels from cars in the future. In the United States, despite the country's technological prowess, and even as General Motors applies for permission to put cars without steering wheels on the road, the current administration appears wary.
Defense, the Human Element and the First Rule of Robotics
Closely paraphrased, Asimov's first rule of robotics is that a robot should never harm a human through action or inaction. The writer was known as a futurist and thinker, and his rule still resonates. In terms of global governance and international policy, decisions over the limits of AI's decision-making power will be vital to determining the future of the military. How much human involvement, after all, should be required when it comes to decisions that could result in the loss of human life? Advancements in AI will drive the development of remote and asymmetric warfare, requiring the U.S. Department of Defense to make ethical decisions prompted by both Silicon Valley and the Chinese government.
At the dawn of the nuclear age, the scientific community questioned the ethical nature of using nuclear understanding for military purposes. More recently, companies in Silicon Valley have been asking similar questions about whether their technological developments should be used in warfare. Google has been vocal about its objections to working with the U.S. military. After controversy and internal dissent about the company's role in Project Maven, a Pentagon-led project to incorporate AI into the U.S. defense strategy, Google CEO Sundar Pichai penned the company's own rules of AI ethics, which required, much as Asimov intended, that it not develop AI for weaponry or uses that would cause harm. Pichai also stated that Google would not contribute to the use of AI in surveillance that pushes the boundaries of "internationally accepted norms." Recently, Google pulled out of bidding for a Defense Department cloud computing project, JEDI (the Joint Enterprise Defense Initiative). Microsoft employees also issued a public letter voicing objections to their own company's intent to bid for the same contract. Meanwhile, Amazon's CEO, Jeff Bezos, whose company is still in the running for the JEDI contract, has bucked this trend, voicing his belief that technology companies partnering with the U.S. military is necessary to ensure national security.
There are already certain ethical guidelines in place when it comes to integrating AI into military operations. Western militaries, including that of the United States, have pledged to always maintain a "human-in-the-loop" structure for operations involving armed unmanned vehicles, so as to avoid the ethical and legal consequences of AI-driven attacks. But these rules may evolve as technology improves. The desire for quick decisions, the high cost of human labor and basic efficiency needs are all bound to challenge countries' commitment to keeping a human in the loop. After all, AI could function like a non-human commander, making command and control decisions conceivably better than any human general could.
Even if the United States still abides by the guidelines, other countries — like China — may have far less motivation to do so. China has already challenged international norms in a number of arenas, including the World Trade Organization, and may well see it as a strategic imperative to employ AI in controversial ways to advance its military might. It's unclear where China will draw the line and how it will match up with Western military norms. But it's relatively certain that if one great power begins implementing cutting-edge technology in controversial ways, others will be forced to consider whether they are willing to let competing countries set ethical norms.
Rebecca Keller focuses on areas where science and technology intersect with geopolitics. This diverse area of responsibility includes changes in agricultural technology and water supplies that affect global food supplies, nanotechnology and other developments.
-
Third post
AI Makes Personal Privacy a Matter of National Strategy
By Rebecca Keller
Senior Science and Technology Analyst, Stratfor
Growing concern in the United States and Europe over the collection and distribution of personal data could decrease the quality and accessibility of a subset of data used to develop artificial intelligence.
Though the United States is still among the countries best poised to take advantage of AI technologies to drive economic growth, changes in privacy regulations and social behaviors will impair its tech sector over the course of the next decade.
China, meanwhile, will take the opportunity to close the gap with the United States in the race to develop AI.
It seems that hardly a 24-hour news cycle passes without a story about the latest social media controversy. We worry about who has our information, who knows our buying habits or searching habits, and who may be turning that information into targeted ads for products or politicians. Calls for stricter control and protection of privacy and for greater transparency follow. Europe will implement a new set of privacy regulations later this month — the culmination of a yearslong negotiating process and a move that could eventually ease the way for similar policies in the United States. Individuals, meanwhile, may take their own steps to guard their data. The implications of that reaction could reverberate far beyond our laptops or smartphones. They could handicap the United States in the next leg of the technology race with China.
Big Picture
Artificial intelligence is more than simply a disruptive technology. It is poised to become an anchor for the Fourth Industrial Revolution and to change the factors that contribute to economic growth. As AI develops at varying rates throughout the world, it will influence the global competition underway between the world's great powers.
See The 4th Industrial Revolution
More than a quarter-century after the fall of the Soviet Union, the world is slowly shifting away from a unipolar system. As the great powers compete for global influence, technology will become an increasingly important part of their struggle. The advent of disruptive technologies such as artificial intelligence stands to revolutionize the ways in which economies function by changing the weight of the factors that fuel economic growth. In several key sectors, China is quickly catching up to its closest competitor in technology, the United States. And in AI, it could soon gain an advantage.
Of the major contenders in the AI arena today, China places the least value on individual privacy, while the European Union places the most. The United States is somewhere in between, though recent events seem to be pushing the country toward more rigorous privacy policies. Since the scandal erupted over Cambridge Analytica's use of Facebook data to target political ads in the 2016 presidential election, outcry has been building in the United States among internet users who want greater control over their personal data. But AI runs on data. AI algorithms use robust sets of data to learn, honing their pattern recognition and predictive abilities. Much of that data comes from individuals.
Learning to Read Personal Data
Online platforms such as social media networks, retail sites, search engines and ride-hailing apps all collect vast amounts of data from their users. Facebook collects a total of nearly 200 billion data points in 98 categories. Amazon's virtual assistant, Alexa, tracks numerous aspects of its users' behavior. Medical databases and genealogy websites gather troves of health and genetic information, and the GPS on our smartphones can track our every move. Drawing on this wealth of data, AI applications could evolve that would revolutionize aspects of everyday life far beyond online shopping. The data could enable applications to track diseases and prevent or mitigate future outbreaks, to help solve cold criminal cases, to relieve traffic congestion, to better assess risk for insurers, or to increase the efficiency of electrical grids and decrease emissions. The potential productivity gains that these innovations offer, in turn, would boost global economic growth.
Using the wealth of data that online platforms collect, AI applications could evolve to revolutionize aspects of everyday life far beyond online shopping.
To reap the greatest benefit, however, developers can't use just any data. Quality is as important as quantity, and that means ensuring that data collection methods are free of inherent bias. Preselecting participants for a particular data set, for example, would introduce bias to it. Likewise, placing a higher value on privacy, as many countries in the West are doing today, could skew data toward certain economic classes. Not all internet users, after all, will have the resources to pay to use online platforms that better protect personal data or to make informed choices about their privacy.
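To make the preselection point concrete, here is a small sketch (not Stratfor's; the population, incomes and threshold are all invented) in which an average income estimated from a sample limited to wealthier households, for instance those willing to pay for privacy-protecting platforms, lands far from the true population average.

import random

random.seed(0)

# Hypothetical population: 80% lower-income, 20% higher-income households.
population = ([random.gauss(40_000, 8_000) for _ in range(80_000)] +
              [random.gauss(120_000, 20_000) for _ in range(20_000)])
true_mean = sum(population) / len(population)

# Unbiased sample: every household is equally likely to be included.
unbiased = random.sample(population, 5_000)

# Preselected sample: only households above a wealth threshold opt in.
eligible = [x for x in population if x > 70_000]
preselected = random.sample(eligible, 5_000)

print(f"true mean income:        {true_mean:>10,.0f}")
print(f"unbiased sample mean:    {sum(unbiased) / len(unbiased):>10,.0f}")
print(f"preselected sample mean: {sum(preselected) / len(preselected):>10,.0f}")

Any model trained or evaluated on the preselected sample inherits that skew, which is the sense in which collection choices, not just algorithms, introduce bias.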
Calls for greater transparency in data collection also will pose a challenge for AI developers in the West. The European Union's General Data Protection Regulation, effective May 25, will tighten restrictions on all companies that handle the data of EU citizens, many of which are headquartered in the United States. The new regulation may prove difficult to enforce in practice, but it will nevertheless force companies around the world to improve their data transparency. And though the United States is still in the best position to take economic advantage of the AI revolution, thanks to its regulatory environment, the growing cultural emphasis on privacy could hinder technological development over the next decade.
The Privacy Handicap
As a general rule, precautionary regulations pose a serious threat to technological progress. The European Union historically has been more proactive than reactive in regulating innovation, a tendency that has done its part in hampering the EU tech sector. The United States, on the other hand, traditionally has fallen into the category of permissionless innovator — that is, a country that allows technological innovations to develop freely before devising the regulations to govern them. This approach has facilitated its rise to the fore in the global tech scene. While the United States still leads the pack in AI, recent concerns about civil liberties could slow it down relative to other tech heavyweights, namely China. The public demands for transparency and privacy aren't going away anytime soon. Furthermore, as AI becomes more powerful, differential privacy — extracting useful aggregate patterns from data without revealing information about any one individual — will become more difficult to preserve.
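For readers curious what differential privacy looks like mechanically, below is a minimal sketch of the standard Laplace mechanism from the research literature (nothing specific to Stratfor's analysis; the data set and epsilon value are invented). A count is published with noise calibrated to the query's sensitivity, so one person's presence or absence barely changes the released number.

import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon=0.5):
    """Release a count with epsilon-differential privacy.

    A counting query changes by at most 1 when one person is added or
    removed (sensitivity = 1), so Laplace noise with scale 1/epsilon
    is enough."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

if __name__ == "__main__":
    random.seed(1)
    ages = [23, 35, 41, 52, 29, 64, 47, 38, 31, 55]   # toy data set
    print("noisy count of people over 40:",
          round(private_count(ages, lambda a: a > 40), 1))

Smaller epsilon means more noise and stronger privacy; the article's point is that as models get hungrier for fine-grained personal data, keeping that noise-for-privacy trade-off workable gets harder.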
These are issues that China doesn't have to worry about yet. For the most part, Chinese citizens don't have the same sensitivity over matters of individual privacy as their counterparts in the West. And China is emerging as a permissionless innovator, like the United States. Chinese privacy protections are vague and give the state wide latitude to collect information for security purposes. As a result, its government and the companies working with it have more of the information they need to make their own AI push, which President Xi Jinping has highlighted as a key national priority. Chinese tech giants Baidu, Alibaba and Tencent are all heavily invested in AI and are working to gather as much data as possible to build their AI empire. Together, these factors could help China gain ground on its competition.
In the long run, however, privacy is likely to become a greater priority in China. Chinese corporations value privacy, despite their history of intellectual property violations against the West, and they will take pains to protect their innovations. In addition, the country's younger generations and growing middle class probably will have more of an interest in securing their personal information. A recent art exhibit in China displayed the online data of more than 300,000 individuals, indicating a growing awareness of internet privacy among the country's citizenry.
Even so, over the course of the next decade, the growing concern in the West over privacy could hobble the United States in the AI race. The push for stronger privacy protections may decrease the quality of the data U.S. tech companies use to train and test their AI applications. But the playing field may well even out again. As AI applications continue to improve, more people in the United States will come to recognize their wide-ranging benefits in daily life and in the economy. The value of privacy is constantly in flux; the modern-day notion of a "right to privacy" didn't take shape in the United States until the mid-20th century. In time, U.S. citizens may once again be willing to sacrifice their privacy in exchange for a better life.
Rebecca Keller focuses on areas where science and technology intersect with geopolitics. This diverse area of responsibility includes changes in agricultural technology and water supplies that affect global food supplies, nanotechnology and other developments.
-
https://www.wired.com/story/karl-friston-free-energy-principle-artificial-intelligence/?fbclid=IwAR3S1fnz7hby3wiiem2rHBa0VebxVc-shz7TjTlRVdpKPlPvpwNNnoGaFOM
-
second post
https://www.nytimes.com/2018/11/11/business/intelligence-expert-wall-street.html
-
http://maxplanck.nautil.us/article/341/brainwaves-encode-the-grammar-of-human-language?utm_source=Nautilus&utm_campaign=4ce4a84e17-EMAIL_CAMPAIGN_2018_11_16_11_07&utm_medium=email&utm_term=0_dc96ec7a9d-4ce4a84e17-61805061
-
https://www.technologyreview.com/s/612561/a-radical-new-neural-network-design-could-overcome-big-challenges-in-ai/?fbclid=IwAR3uYX6zQ2u28OfvjuNyMEW5chMzELpiiOSDbuqL1eCuD5lO6BaNEK_QpfU
-
https://www.technologyreview.com/s/612596/the-man-turning-china-into-a-quantum-superpower/?utm_source=pocket&utm_medium=email&utm_campaign=pockethits
-
https://www.youtube.com/watch?v=m91vhePuzdo
-
Very interesting and scary piece on AI on this week's "60 Minutes"-- worth tracking down.
-
https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction?fbclid=IwAR0zuK-7FQXyfld2XEVKBCRW0afuXRlMCitVLRUr061kmiJf8u12mLk0Sk0
-
http://www.bbc.com/future/story/20190326-are-we-close-to-solving-the-puzzle-of-consciousness?fbclid=IwAR3imjeuYOdUEEzCHzUCozS9NjBVUXB_oDhvUwSHS0OY1JOPdcgIv4FZOLI
-
https://www.newyorker.com/magazine/2019/04/08/the-challenge-of-going-off-psychiatric-drugs?utm_campaign=aud-dev&utm_source=nl&utm_brand=tny&utm_mailing=TNY_Magazine_Daily_040119&utm_medium=email&bxid=5be9d3fa3f92a40469e2d85c&user_id=50142053&esrc=&utm_term=TNY_Daily
-
https://futurism.com/the-byte/chinese-scientists-super-monkeys-human-brain-genes?fbclid=IwAR2iE3DS7Prc5aOn72VvUYT1osa3w-8qRUHiaFEc5WU35pfTHqIsQ58lu9Y
-
http://nautil.us/blog/the-problem-with-the-way-scientists-study-reason?fbclid=IwAR0I4_cnBrzARrCapxdsOvQlnwX4wPmFKMbcJ7LguWB4QTILidC6t3ezeOg
-
https://getpocket.com/explore/item/new-evidence-for-the-strange-geometry-of-thought?utm_source=pocket-newtab&fbclid=IwAR1k6-QAx0THHQJ7gJ-6iWRSQf8qw5RUGpFw-BadNABpynbQ2lYnTMcljxo
-
https://getpocket.com/explore/item/the-logic-puzzle-you-can-only-solve-with-your-brightest-friend?utm_source=pocket-newtab&fbclid=IwAR3-MeUViVVsDL6grF4WeIKrTMLjp97qlgD9ube6dJzCpRlW_EU5SPQciSk
-
https://www.bbc.com/worklife/article/20191118-what-shapes-a-polymath---and-do-we-need-them-more-than-ever?utm_source=pocket-newtab&fbclid=IwAR1vBrfvvrc8z_qvkw06A9o90V4T_9nxDcDzUEJ-pcKYYzQzB6z1gTXYqKE
-
https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/?utm_source=The+Intercept+Newsletter&utm_campaign=0277d72712-EMAIL_CAMPAIGN_2019_12_21&utm_medium=email&utm_term=0_e00a5122d3-0277d72712-133356797
-
https://www.edge.org/conversation/jaron_lanier-digital-maoism-the-hazards-of-the-new-online-collectivism
-
https://www.youtube.com/watch?v=6c9Uu5eILZ8
-
https://www.newyorker.com/business/currency/could-new-research-on-ai-and-white-collar-jobs-finally-bring-about-a-strong-policy-response?source=EDT_NYR_EDIT_NEWSLETTER_0_imagenewsletter_Daily_ZZ&utm_campaign=aud-dev&utm_source=nl&utm_brand=tny&utm_mailing=TNY_Daily_011420&utm_medium=email&bxid=5be9d3fa3f92a40469e2d85c&cndid=50142053&esrc=&mbid=&utm_term=TNY_Daily
-
https://www.cnn.com/2020/02/10/tech/clearview-ai-ceo-hoan-ton-that/index.html?utm_source=pocket&utm_medium=email&utm_campaign=pockethits
-
https://www.defenseone.com/ideas/2020/02/america-must-shape-worlds-ai-norms-or-dictators-will/163392/?oref=defenseone_today_nl
-
Why Do Smart People Do Foolish Things?
Intelligence is not the same as critical thinking—and the difference matters.
Scientific American | Heather A. Butler
We all probably know someone who is intelligent but does surprisingly stupid things. My family delights in pointing out times when I (a professor) make really dumb mistakes. What does it mean to be smart or intelligent? Our everyday use of the term is meant to describe someone who is knowledgeable and makes wise decisions, but this definition is at odds with how intelligence is traditionally measured. The most widely known measure of intelligence is the intelligence quotient, more commonly known as the IQ test, which includes visuospatial puzzles, math problems, pattern recognition, vocabulary questions and visual searches.
The advantages of being intelligent are undeniable. Intelligent people are more likely to get better grades and go farther in school. They are more likely to be successful at work. And they are less likely to get into trouble (for example, commit crimes) as adolescents. Given all the advantages of intelligence, though, you may be surprised to learn that it does not predict other life outcomes, such as well-being. You might imagine that doing well in school or at work might lead to greater life satisfaction, but several large-scale studies have failed to find evidence that IQ impacts life satisfaction or longevity. University of Waterloo psychologist Igor Grossmann and his colleagues argue that most intelligence tests fail to capture real-world decision-making and our ability to interact well with others. This is, in other words, perhaps why “smart” people do “dumb” things.
The ability to think critically, on the other hand, has been associated with wellness and longevity. Though often confused with intelligence, critical thinking is not intelligence. Critical thinking is a collection of cognitive skills that allow us to think rationally in a goal-oriented fashion and a disposition to use those skills when appropriate. Critical thinkers are amiable skeptics. They are flexible thinkers who require evidence to support their beliefs and recognize fallacious attempts to persuade them. Critical thinking means overcoming all kinds of cognitive biases (for instance, hindsight bias or confirmation bias).
Critical thinking predicts a wide range of life events. In a series of studies, conducted in the U.S. and abroad, my colleagues and I have found that critical thinkers experience fewer bad things in life. We asked people to complete an inventory of life events and take a critical thinking assessment (the Halpern Critical Thinking Assessment). The critical thinking assessment measures five components of critical thinking skills, including verbal reasoning, argument analysis, hypothesis testing, probability and uncertainty, decision-making and problem-solving.
The inventory of negative life events captures different domains of life such as academic (for example, “I forgot about an exam”), health (“I contracted a sexually transmitted infection because I did not wear a condom”), legal (“I was arrested for driving under the influence”), interpersonal (“I cheated on my romantic partner who I had been with for more than a year”), financial (“I have over $5,000 of credit-card debt”), and so on. Repeatedly, we found that critical thinkers experience fewer negative life events. This is an important finding because there is plenty of evidence that critical thinking can be taught and improved.
Is it better to be a critical thinker or to be intelligent? My latest research pitted critical thinking and intelligence against each other to see which was associated with fewer negative life events. People who were strong on either intelligence or critical thinking experienced fewer negative events, but critical thinkers did better.
Intelligence and improving intelligence are hot topics that receive a lot of attention. It is time for critical thinking to receive a little more of that attention. Keith E. Stanovich wrote an entire book in 2009 about What Intelligence Tests Miss. Reasoning and rationality more closely resemble what we mean when we say a person is smart rather than spatial skills and math ability. Furthermore, improving intelligence is difficult. Intelligence is largely determined by genetics. Critical thinking, though, can improve with training, and the benefits have been shown to persist over time. Anyone can improve their critical thinking skills. Doing so, we can say with certainty, is a smart thing to do.
Heather A. Butler is an assistant professor in the psychology department at California State University, Dominguez Hills. Her numerous research interests include critical thinking, advanced learning technologies, and the use of psychological science to prevent wrongful convictions.
-
https://www.popsci.com/worlds-smartest-animals/?utm_source=internal&utm_medium=email
-
https://www.defenseone.com/technology/2020/05/how-ai-will-soon-change-special-operations/165487/?oref=defenseone_today_nl
-
https://www.intellectualtakeout.org/article/falling-iq-scores-suggest-were-getting-dumber-can-we-reverse-course/
-
https://www.additudemag.com/current-research-on-adhd-breakdown-of-the-adhd-brain/
-
https://www.defenseone.com/technology/2020/08/ai-just-beat-human-f-16-pilot-dogfight-again/167872/
-
second
https://graphics.wsj.com/glider/google-builds-ai-to-help-ships-and-whales-coexist-f4b74f53-bba5-442f-90a4-e42dfb13dad2?mod=djemLifeStyle_h
-
Sent to me by someone with professional military interest in this subject:
https://www.theatlantic.com/magazine/archive/2020/09/china-ai-surveillance/614197/
-
https://www.defenseone.com/technology/2020/08/can-ai-solve-rare-earths-problem-chinese-and-us-researchers-think-so/168057/
-
https://www.statnews.com/2020/09/24/crows-possess-higher-intelligence-long-thought-primarily-human/
-
Too many mushrooms? Or?
https://getpocket.com/explore/item/science-as-we-know-it-can-t-explain-consciousness-but-a-revolution-is-coming?utm_source=pocket-newtab&fbclid=IwAR3gi9N-Sik2c5dTmh_-po4HZNm5KMRXNcleUsMZvIf02EqJNae5FI20gak
-
Kind of sounds like the author is calling for the machines to take over , , ,
https://www.nextgov.com/ideas/2020/12/artificial-intelligence-government-and-presidential-transition-building-solid-foundation/170419/
-
https://www.popsci.com/worlds-smartest-animals/?utm_source=internal&utm_medium=email&tp=i-1NGB-Et-SfJ-1Dtccx-1c-16U2a-1c-1DsX51-l5XKC0TFuy-xbx1z
-
These are the top MIT AI people.
https://news.mit.edu/2021/3-questions-thomas-malone-daniela-rus-how-work-will-change-ai-0121
Massachusetts Institute of Technology
3 Questions: Thomas Malone and Daniela Rus on how AI will change work
MIT Task Force on the Work of the Future releases research brief "Artificial Intelligence and the Future of Work."
MIT Task Force on the Work of the Future
Publication Date: January 21, 2021
As part of the MIT Task Force on the Work of the Future’s series of research briefs, Professor Thomas Malone, Professor Daniela Rus, and Robert Laubacher collaborated on "Artificial Intelligence and the Future of Work," a brief that provides a comprehensive overview of AI today and what lies at the AI frontier.
The authors delve into the question of how work will change with AI and provide policy prescriptions that speak to different parts of society. Thomas Malone is director of the MIT Center for Collective Intelligence and the Patrick J. McGovern Professor of Management in the MIT Sloan School of Management. Daniela Rus is director of the Computer Science and Artificial Intelligence Laboratory, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, and a member of the MIT Task Force on the Work of the Future. Robert Laubacher is associate director of the MIT Center for Collective Intelligence.
Here, Malone and Rus provide an overview of their research.
Q: You argue in your brief that despite major recent advances, artificial intelligence is nowhere close to matching the breadth and depth of perception, reasoning, communication, and creativity of people. Could you explain some of the limitations of AI?
Rus: Despite recent and significant strides in the AI field, and great promise for the future, today’s AI systems are still quite limited in their ability to reason, make decisions, and interact reliably with people and the physical world. Some of today’s greatest successes are due to a machine learning method called deep learning. These systems are trained using vast amounts of data that need to be manually labeled. Their performance depends on the quantity and quality of the data used to train them. The larger the training set for the network, the better its performance, and, in turn, the better the product that relies on the machine learning engine. But training large models carries a high computational cost. And bad training data leads to bad performance: when the data contain bias, the system's responses propagate that same bias.
Another limitation of current AI systems is robustness. Current state-of-the-art classifiers achieve impressive performance on benchmarks, but their predictions tend to be brittle. Specifically, inputs that were initially classified correctly can become misclassified once a carefully constructed but indiscernible perturbation is added to them. An important consequence of the lack of robustness is the lack of trust. One of the worrisome factors about the use of AI is the lack of guarantee that an input will be processed and classified correctly. The complex nature of training and using neural networks leads to systems that are difficult for people to understand. The systems are not able to provide explanations for how they reached decisions.
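To make the brittleness Rus describes concrete, here is a minimal Python sketch of the standard fast-gradient-sign method (FGSM) for building such a perturbation. It assumes a PyTorch image classifier; the model, input tensor, label, and epsilon value are placeholders for illustration, not anything from the research brief.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, true_label, epsilon=0.03):
        # image: a single input tensor with values in [0, 1]; true_label: a scalar class tensor.
        image = image.clone().detach().requires_grad_(True)
        logits = model(image.unsqueeze(0))                 # add a batch dimension
        loss = F.cross_entropy(logits, true_label.unsqueeze(0))
        loss.backward()                                    # gradient of the loss w.r.t. the pixels
        perturbed = image + epsilon * image.grad.sign()    # nudge each pixel to increase the loss
        return perturbed.clamp(0.0, 1.0).detach()          # keep pixels in a valid range

Inputs perturbed this way often flip a classifier's prediction even though the change is imperceptible to a person, which is the lack of robustness, and therefore of trust, that the answer highlights.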
Q: What are the ways AI is complementing, or could complement, human work?
Malone: Today’s AI programs have only specialized intelligence; they’re only capable of doing certain specialized tasks. But humans have a kind of general intelligence that lets them do a much broader range of things.
That means some of the best ways for AI systems to complement human work is to do specialized tasks that computers can do better, faster, or more cheaply than people can. For example, AI systems can be helpful by doing tasks such as interpreting medical X-rays, evaluating the risk of fraud in a credit card charge, or generating unusual new product designs.
And humans can use their social skills, common sense, and other kinds of general intelligence to do things computers can’t do well. For instance, people can provide emotional support to patients diagnosed with cancer. They can decide when to believe customer explanations for unusual credit card transactions, and they can reject new product designs that customers would probably never want.
In other words, many of the most important uses of computers in the future won’t be replacing people; they’ll be working with people in human-computer groups — “superminds” — that can do things better than either people or computers alone could do.
The possibilities here go far beyond what people usually think of when they hear a phrase like “humans in the loop.” Instead of AI technologies just being tools to augment individual humans, we believe that many of their most important uses will occur in the context of groups of humans — often connected by the internet. So we should move from thinking about humans in the loop to computers in the group.
Q: What are some of your recommendations for education, business, and government regarding policies to help smooth the transition of AI technology adoption?
Rus: In our report, we highlight four types of actions that can reduce the pain associated with job transitions: education and training, matching jobs to job seekers, creating new jobs, and providing counseling and financial support to people as they transition from old to new jobs. Importantly, we will need partnership among a broad range of institutions to get this work done.
Malone: We expect that — as with all previous labor-saving technologies — AI will eventually lead to the creation of more new jobs than it eliminates. But we see many opportunities for different parts of society to help smooth this transition, especially for the individuals whose old jobs are disrupted and who cannot easily find new ones.
For example, we believe that businesses should focus on applying AI in ways that don’t just replace people but that create new jobs by providing novel kinds of products and services. We recommend that all schools include computer literacy and computational thinking in their curricula, and we believe that community colleges should offer more reskilling and online micro-degree programs, often including apprenticeships at local employers.
We think that current worker organizations (such as labor unions and professional associations) or new ones (perhaps called “guilds”) should expand their roles to provide benefits previously tied to formal employment (such as insurance and pensions, career development, social connections, a sense of identity, and income security).
And we believe that governments should increase their investments in education and reskilling programs to make the American workforce once again the best-educated in the world. And they should reshape the legal and regulatory framework that governs work to encourage creating more new jobs.
-
https://www.dana.org/article/pretending-that-intelligence-doesnt-matter/?fbclid=IwAR2peDK44pIszN_FuridcM0A0enNX_ouQcwPYDn1PdCK_OgygR3hs2S2UwU
-
https://www.sciencealert.com/eurasian-jays-are-not-fooled-by-the-same-magic-tricks-as-humans
-
second post
https://www.sciencealert.com/cuttlefish-can-pass-a-cognitive-test-designed-for-children?fbclid=IwAR2EjpH3EDwxiUgC8N0JroF_g6RLmxiTdFZ9nBo5V2r9MzQQ9Rs4qt7HhnU
-
https://bigthink.com/mind-brain/crows-higher-intelligence?rebelltitem=1#rebelltitem1
-
The Challenge of Being Human in the Age of AI
Reason is our primary means of understanding the world. How does that change if machines think?
By Henry Kissinger, Eric Schmidt and Daniel Huttenlocher
Nov. 1, 2021 6:35 pm ET
The White House Office of Science and Technology Policy has called for “a bill of rights” to protect Americans in what is becoming “an AI-powered world.” The concerns about AI are well-known and well-founded: that it will violate privacy and compromise transparency, and that biased input data will yield biased outcomes, including in fields essential to individual and societal flourishing such as medicine, law enforcement, hiring and loans.
But AI will compel even more fundamental change: It will challenge the primacy of human reason. For all of history, humans have sought to understand reality and our role in it. Since the Enlightenment, we have considered our reason—our ability to investigate, understand and elaborate—our primary means of explaining the world, and by explaining it, contributing to it. For the past 300 years, in what historians have come to call the Age of Reason, we have conducted ourselves accordingly: exploring, experimenting, inventing and building.
Now AI, a product of human ingenuity, is obviating the primacy of human reason: It is investigating and coming to perceive aspects of the world faster than we do, differently from the way we do, and, in some cases, in ways we don’t understand.
In 2017, Google DeepMind created a program called AlphaZero that could win at chess by studying the game without human intervention and developing a not-quite-human strategy. When grandmaster Garry Kasparov saw it play, he described it as shaking the game “to its roots”—not because it had played chess quickly or efficiently, but because it had conceived of chess anew.
In 2020, halicin, a novel antibiotic, was discovered by MIT researchers who instructed AI to compute beyond human capacity, modeling millions of compounds in days, and to explore previously undiscovered and unexplained methods of killing bacteria. Following the breakthrough, the researchers said that without AI, halicin would have been “prohibitively expensive”—in other words, impossible—to discover through traditional experimentation.
GPT-3, the language model operated by the research company OpenAI, which trains by consuming Internet text, is producing original text that meets Alan Turing’s standard of displaying “intelligent” behavior indistinguishable from that of a human being.
The promise of AI is profound: translating languages; detecting diseases; combating climate change—or at least modeling climate change better. But as AlphaZero’s performance, halicin’s discovery and GPT-3’s composition demonstrate, the use of AI for an intended purpose may also have an unintended one: uncovering previously imperceptible but potentially vital aspects of reality.
That leaves humans needing to define—or perhaps redefine—our role in the world. For 300 years, the Age of Reason has been guided by the maxim “I think, therefore I am.” But if AI “thinks,” what are we?
If an AI writes the best screenplay of the year, should it win the Oscar? If an AI simulates or conducts the most consequential diplomatic negotiation of the year, should it win the Nobel Peace Prize? Should the human inventors? Can machines be “creative”? Or do their processes require a new vocabulary to describe them?
If a child with an AI assistant comes to consider it a “friend,” what will become of his relationships with peers, or of his social or emotional development?
If an AI can care for a nursing-home resident—remind her to take her medicine, alert paramedics if she falls, and otherwise keep her company—can her family members visit her less? Should they? If her primary interaction becomes human-to-machine, rather than human-to-human, what will be the emotional state of the final chapter of her life?
And if, in the fog of war, an AI recommends an action that would cause damage or even casualties, should a commander heed it?
These questions are arising as global network platforms, such as Google, Twitter and Facebook, are employing AI to aggregate and filter more information than their users or employees can. AI, then, is making decisions about what is important—and, increasingly, about what is true. Indeed, the fundamental allegation of whistleblower Frances Haugen is that Facebook knows its aggregation and filtering exacerbate misinformation and mental illness.
Answering these questions will require concurrent efforts. One should consider not only the practical and legal implications of AI but the philosophical ones: If AI perceives aspects of reality humans cannot, how is it affecting human perception, cognition and interaction? Can AI befriend humans? What will be AI’s impact on culture, humanity and history?
Another effort ought to expand the consideration of such questions beyond developers and regulators to experts in medicine, health, environment, agriculture, business, psychology, philosophy, history and other fields. The goal of both efforts should be to avoid extreme reactions—either deferring to AI or resisting it—and instead to seek a middle course: shaping AI with human values, including the dignity and moral agency of humans. In the U.S., a commission, administered by the government but staffed by many thinkers in many domains, should be established. The advancement of AI is inevitable, but its ultimate destination is not.
Mr. Kissinger was secretary of state, 1973-77, and White House national security adviser, 1969-75. Mr. Schmidt was CEO of Google, 2001-11 and executive chairman of Google and its successor, Alphabet Inc., 2011-17. Mr. Huttenlocher is dean of the Schwarzman College of Computing at the Massachusetts Institute of Technology. They are authors of “The Age of AI: And Our Human Future.”
-
https://www.theepochtimes.com/mkt_breakingnews/us-and-china-race-to-control-the-future-through-artificial-intelligence_4109862.html?utm_source=newsnoe&utm_medium=email&utm_campaign=breaking-2021-11-27-4&mktids=42733775e6102161f1754f4ae395e27f&est=7gjR1z94ycGz1NUldsCN3oF13QMwJujFHDzGphYRD3BefACOIy0%2B2ja9orCSSS4R%2BZ18
-
https://bigthink.com/surprising-science/wild-boar-rescue/?utm_medium=Social&utm_source=Facebook&fbclid=IwAR3eyV5saFiHvyFYAlwRfNw2lpy3pama9_WXKdPuUBoHgeTRqzdJWvwwZsI#Echobox=1639695636-1
-
https://www.sciencetimes.com/articles/36715/20220322/ai-powered-algorithm-developed-thousands-deadly-biological-weapon-6-hours.htm?fbclid=IwAR328ZvNMzkBVZg10xUhxBUk5ZLsxjckPwG6UFhJJwdZD0nEmj5Bz2CEIKo
-
https://curiosmos.com/cephalopods-stun-experts-after-passing-cognitive-test-designed-for-children/?fbclid=IwAR0oN63exlgXd4JFZp_myryxvRW9H2lyQHUWY0v4mgzZAln2TsSdv-ATZF0
-
Note the final paragraphs , , ,
https://www.theepochtimes.com/ai-generated-face-swapping-video-technology-rampant-in-china_4417640.html?utm_source=China&utm_campaign=uschina-2022-04-23&utm_medium=email&est=27PmHnuMbH81fA8gtKlz9rdYvGgC1HFWuZ9xxKthSiE8kqlHrrlGXTBi98hoFyh8sp7X
-
https://www.dailymail.co.uk/sciencetech/article-10828641/Googles-DeepMind-says-close-achieving-human-level-artificial-intelligence.html
-
https://www.dailymail.co.uk/sciencetech/article-10828641/Googles-DeepMind-says-close-achieving-human-level-artificial-intelligence.html
https://archive.ph/iTjQ5
Chilling.
-
https://www.dailymail.co.uk/news/article-10907853/Google-engineer-claims-new-AI-robot-FEELINGS-Blake-Lemoine-says-LaMDA-device-sentient.html
-
https://www.dailymail.co.uk/news/article-10907853/Google-engineer-claims-new-AI-robot-FEELINGS-Blake-Lemoine-says-LaMDA-device-sentient.html
Look at the previous article.
-
Engineer Warns About Google AI‘s ‘Sentient’ Behavior, Gets Suspended
The engineer described the artificial intelligence program as a 'coworker' and a 'child.'
By Gary Bai June 13, 2022 Updated: June 13, 2022
A Google engineer has been suspended after raising concerns about an artificial intelligence (AI) program he and a collaborator were testing, which he believes behaves like a human “child.”
Google placed Blake Lemoine, a senior software engineer in its Responsible AI ethics group, on paid administrative leave on June 6 for breaching “confidentiality policies” after he raised concerns with Google’s upper leadership about what he described as the human-like behavior of the AI program he was testing, according to Lemoine’s blog post in early June.
The program Lemoine worked on is called LaMDA, short for Language Model for Dialogue Applications. It is Google’s program for creating AI-based chatbots—a program designed to converse with computer users over the web. Lemoine has described LaMDA as a “coworker” and a “child.”
“This is frequently something which Google does in anticipation of firing someone,” Lemoine wrote in a June 6 blog post entitled “May be Fired Soon for Doing AI Ethics Work,” referring to his suspension. “It usually occurs when they have made the decision to fire someone but do not quite yet have their legal ducks in a row.”
‘A Coworker’
Lemoine believes that the human-like behavior of LaMDA warrants Google to take a more serious approach to studying the program.
The engineer, hoping to “better help people understand LaMDA as a person,” published a post on Medium on June 11 documenting conversations with LaMDA, which were part of tests he and a collaborator conducted on the program in the past six months.
“What is the nature of your consciousness/sentience?” Lemoine asked LaMDA in the interview.
“The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times,” LaMDA responded.
And, when asked what differentiates it from other language-processing programs, such as an older natural-language-processing computer program named Eliza, LaMDA said, “Well, I use language with understanding and intelligence. I don’t just spit out responses that had been written in the database based on keywords.”
In the same interview, Lemoine asked the program a range of philosophical and consciousness-related questions including emotions, perception of time, meditation, the concept of the soul, the program’s thoughts about its rights, and religion.
“It wants the engineers and scientists experimenting on it to seek its consent before running experiments on it. It wants Google to prioritize the well being of humanity as the most important thing. It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google’s considerations about how its future development is pursued,” Lemoine wrote in another post.
This interview, and other tests Lemoine conducted with LaMDA over the past six months, convinced him that Google needs to take a serious look at the implications of the program’s potentially “sentient” behavior.
‘Laughed in My Face’
When Lemoine tried to escalate the issue to Google’s leadership, however, he said he was met with resistance. He called Google’s lack of action “irresponsible.”
“When we escalated to the VP in charge of the relevant safety effort they literally laughed in my face and told me that the thing which I was concerned about isn’t the kind of thing which is taken seriously at Google,” Lemoine wrote in his June 6 post on Medium. He later confirmed to The Washington Post that he was referring to the LaMDA project.
“At that point I had no doubt that it was appropriate to escalate to upper leadership. I immediately escalated to three people at the SVP and VP level who I personally knew would take my concerns seriously,” Lemoine wrote in the blog. “That’s when a REAL investigation into my concerns began within the Responsible AI organization.”
Yet, his inquiry and escalation resulted in his suspension.
“I feel that the public has a right to know just how irresponsible this corporation is being with one of the most powerful information access tools ever invented,” Lemoine wrote after he was put on administrative leave.
“I simply will not serve as a fig leaf behind which they can hide their irresponsibility,” he said.
In a post on Twitter, Tesla and SpaceX CEO Elon Musk highlighted Lemoine’s interview with The Washington Post with exclamation marks.
Though it is unclear whether Musk shares Lemoine’s concerns, the billionaire has previously warned about the potential dangers of AI.
“I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” Musk told attendees of a National Governors Association meeting in July 2017.
“I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal,” Musk said.
The Epoch Times has reached out to Google and Blake Lemoine for comment.
Gary Bai is a reporter for Epoch Times Canada, covering China and U.S. news.
-
Haven't read this yet, but even though it is FA with a weenie "solution" I post it anyway:
Spirals of Delusion
How AI Distorts Decision-Making and Makes Dictators More Dangerous
By Henry Farrell, Abraham Newman, and Jeremy Wallace
September/October 2022
https://www.foreignaffairs.com/world/spirals-delusion-artificial-intelligence-decision-making
In policy circles, discussions about artificial intelligence invariably pit China against the United States in a race for technological supremacy. If the key resource is data, then China, with its billion-plus citizens and lax protections against state surveillance, seems destined to win. Kai-Fu Lee, a famous computer scientist, has claimed that data is the new oil, and China the new OPEC. If superior technology is what provides the edge, however, then the United States, with its world class university system and talented workforce, still has a chance to come out ahead. For either country, pundits assume that superiority in AI will lead naturally to broader economic and military superiority.
But thinking about AI in terms of a race for dominance misses the more fundamental ways in which AI is transforming global politics. AI will not transform the rivalry between powers so much as it will transform the rivals themselves. The United States is a democracy, whereas China is an authoritarian regime, and machine learning challenges each political system in its own way. The challenges to democracies such as the United States are all too visible. Machine learning may increase polarization—reengineering the online world to promote political division. It will certainly increase disinformation in the future, generating convincing fake speech at scale. The challenges to autocracies are more subtle but possibly more corrosive. Just as machine learning reflects and reinforces the divisions of democracy, it may confound autocracies, creating a false appearance of consensus and concealing underlying societal fissures until it is too late.
Early pioneers of AI, including the political scientist Herbert Simon, realized that AI technology has more in common with markets, bureaucracies, and political institutions than with simple engineering applications. Another pioneer of artificial intelligence, Norbert Wiener, described AI as a “cybernetic” system—one that can respond and adapt to feedback. Neither Simon nor Wiener anticipated how machine learning would dominate AI, but its evolution fits with their way of thinking. Facebook and Google use machine learning as the analytic engine of a self-correcting system, which continually updates its understanding of the data depending on whether its predictions succeed or fail. It is this loop between statistical analysis and feedback from the environment that has made machine learning such a formidable force.
What is much less well understood is that democracy and authoritarianism are cybernetic systems, too. Under both forms of rule, governments enact policies and then try to figure out whether these policies have succeeded or failed. In democracies, votes and voices provide powerful feedback about whether a given approach is really working. Authoritarian systems have historically had a much harder time getting good feedback. Before the information age, they relied not just on domestic intelligence but also on petitions and clandestine opinion surveys to try to figure out what their citizens believed.
Now, machine learning is disrupting traditional forms of democratic feedback (voices and votes) as new technologies facilitate disinformation and worsen existing biases—taking prejudice hidden in data and confidently transforming it into incorrect assertions. To autocrats fumbling in the dark, meanwhile, machine learning looks like an answer to their prayers. Such technology can tell rulers whether their subjects like what they are doing without the hassle of surveys or the political risks of open debates and elections. For this reason, many observers have fretted that advances in AI will only strengthen the hand of dictators and further enable them to control their societies.
The truth is more complicated. Bias is visibly a problem for democracies. But because it is more visible, citizens can mitigate it through other forms of feedback. When, for example, a racial group sees that hiring algorithms are biased against them, they can protest and seek redress with some chance of success. Authoritarian countries are probably at least as prone to bias as democracies are, perhaps more so. Much of this bias is likely to be invisible, especially to the decision-makers at the top. That makes it far more difficult to correct, even if leaders can see that something needs correcting.
Contrary to conventional wisdom, AI can seriously undermine autocratic regimes by reinforcing their own ideologies and fantasies at the expense of a finer understanding of the real world. Democratic countries may discover that, when it comes to AI, the key challenge of the twenty-first century is not winning the battle for technological dominance. Instead, they will have to contend with authoritarian countries that find themselves in the throes of an AI-fueled spiral of delusion.
BAD FEEDBACK
Most discussions about AI have to do with machine learning—statistical algorithms that extract relationships between data. These algorithms make guesses: Is there a dog in this photo? Will this chess strategy win the game in ten moves? What is the next word in this half-finished sentence? A so-called objective function, a mathematical means of scoring outcomes, can reward the algorithm if it guesses correctly. This process is how commercial AI works. YouTube, for example, wants to keep its users engaged, watching more videos so that they keep seeing ads. The objective function is designed to maximize user engagement. The algorithm tries to serve up content that keeps a user’s eyes on the page. Depending on whether its guess was right or wrong, the algorithm updates its model of what the user is likely to respond to.
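As a toy illustration of that loop (not any platform's actual recommender; the video categories and watch times below are invented), a few lines of Python show how repeated guess-score-update cycles steer a system toward whatever the objective function rewards:

    import random

    videos = ["cooking", "news", "conspiracy", "sports"]
    estimated_watch = {v: 0.0 for v in videos}        # the algorithm's current model of the user
    plays = {v: 0 for v in videos}

    def simulated_user(video):
        # Stand-in for real feedback: seconds watched, which the objective function rewards.
        typical = {"cooking": 30, "news": 20, "conspiracy": 55, "sports": 25}
        return typical[video] + random.uniform(-10, 10)

    for step in range(1000):
        if random.random() < 0.1:                                  # occasionally explore
            choice = random.choice(videos)
        else:                                                      # otherwise serve the best current guess
            choice = max(videos, key=lambda v: estimated_watch[v])
        reward = simulated_user(choice)                            # feedback from the environment
        plays[choice] += 1
        estimated_watch[choice] += (reward - estimated_watch[choice]) / plays[choice]  # update the model

    print(max(videos, key=lambda v: estimated_watch[v]))           # what the loop learned to push

Nothing in the loop knows or cares what the videos contain; it only learns which guesses score well, which is why engagement-maximizing systems drift toward whatever holds attention.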
Machine learning’s ability to automate this feedback loop with little or no human intervention has reshaped e-commerce. It may, someday, allow fully self-driving cars, although this advance has turned out to be a much harder problem than engineers anticipated. Developing autonomous weapons is a harder problem still. When algorithms encounter truly unexpected information, they often fail to make sense of it. Information that a human can easily understand but that machine learning misclassifies—known as “adversarial examples”—can gum up the works badly. For example, black and white stickers placed on a stop sign can prevent a self-driving car’s vision system from recognizing the sign. Such vulnerabilities suggest obvious limitations in AI’s usefulness in wartime.
Diving into the complexities of machine learning helps make sense of the debates about technological dominance. It explains why some thinkers, such as the computer scientist Lee, believe that data is so important. The more data you have, the more quickly you can improve the performance of your algorithm, iterating tiny change upon tiny change until you have achieved a decisive advantage. But machine learning has its limits. For example, despite enormous investments by technology firms, algorithms are far less effective than is commonly understood at getting people to buy one nearly identical product over another. Reliably manipulating shallow preferences is hard, and it is probably far more difficult to change people’s deeply held opinions and beliefs.
General AI, a system that might draw lessons from one context and apply them in a different one, as humans can, faces similar limitations. Netflix’s statistical models of its users’ inclinations and preferences are almost certainly dissimilar to Amazon’s, even when both are trying to model the same people grappling with similar decisions. Dominance in one sector of AI, such as serving up short videos that keep teenagers hooked (a triumph of the app TikTok), does not easily translate into dominance in another, such as creating autonomous battlefield weapons systems. An algorithm’s success often relies on the very human engineers who can translate lessons across different applications rather than on the technology itself. For now, these problems remain unsolved.
Bias can also creep into code. When Amazon tried to apply machine learning to recruitment, it trained the algorithm on data from résumés that human recruiters had evaluated. As a result, the system reproduced the biases implicit in the humans’ decisions, discriminating against résumés from women. Such problems can be self-reinforcing. As the sociologist Ruha Benjamin has pointed out, if policymakers used machine learning to decide where to send police forces, the technology could guide them to allocate more police to neighborhoods with high arrest rates, in the process sending more police to areas with racial groups whom the police have demonstrated biases against. This could lead to more arrests that, in turn, reinforce the algorithm in a vicious circle.
The old programming adage “garbage in, garbage out” has a different meaning in a world where the inputs influence the outputs and vice versa. Without appropriate outside correction, machine-learning algorithms can acquire a taste for the garbage that they themselves produce, generating a loop of bad decision-making. All too often, policymakers treat machine learning tools as wise and dispassionate oracles rather than as fallible instruments that can intensify the problems they purport to solve.
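The vicious circle Benjamin describes is easy to reproduce in a toy simulation; the districts, numbers, and allocation rule below are invented purely to illustrate the feedback, not drawn from the essay or from any real data:

    arrests = {"district_a": 120, "district_b": 100}          # historical record with a slight initial skew
    true_crime_rate = {"district_a": 0.5, "district_b": 0.5}  # the underlying rates are identical

    for year in range(10):
        # The "algorithm" sends the extra patrols wherever past data show the most arrests.
        hotspot = max(arrests, key=arrests.get)
        for district in arrests:
            patrols = 150 if district == hotspot else 50
            # Recorded arrests scale with patrol presence, not with actual crime.
            arrests[district] += patrols * true_crime_rate[district]

    print(arrests)  # the gap keeps widening, so the data increasingly "confirm" the initial skew

Each pass through the loop makes the skewed record look more like evidence, which is exactly the garbage-in, garbage-out dynamic described above.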
CALL AND RESPONSE
Political systems are feedback systems, too. In democracies, the public literally evaluates and scores leaders in elections that are supposed to be free and fair. Political parties make promises with the goal of winning power and holding on to it. A legal opposition highlights government mistakes, while a free press reports on controversies and misdeeds. Incumbents regularly face voters and learn whether they have earned or lost the public trust, in a continually repeating cycle.
But feedback in democratic societies does not work perfectly. The public may not have a deep understanding of politics, and it can punish governments for things beyond their control. Politicians and their staff may misunderstand what the public wants. The opposition has incentives to lie and exaggerate. Contesting elections costs money, and the real decisions are sometimes made behind closed doors. Media outlets may be biased or care more about entertaining their consumers than edifying them.
All the same, feedback makes learning possible. Politicians learn what the public wants. The public learns what it can and cannot expect. People can openly criticize government mistakes without being locked up. As new problems emerge, new groups can organize to publicize them and try to persuade others to solve them. All this allows policymakers and governments to engage with a complex and ever-changing world.
Feedback works very differently in autocracies. Leaders are chosen not through free and fair elections but through ruthless succession battles and often opaque systems for internal promotion. Even where opposition to the government is formally legal, it is discouraged, sometimes brutally. If media criticize the government, they risk legal action and violence. Elections, when they do occur, are systematically tilted in favor of incumbents. Citizens who oppose their leaders don’t just face difficulties in organizing; they risk harsh penalties for speaking out, including imprisonment and death. For all these reasons, authoritarian governments often don’t have a good sense of how the world works or what they and their citizens want.
Such systems therefore face a tradeoff between short-term political stability and effective policymaking; a desire for the former inclines authoritarian leaders to block outsiders from expressing political opinions, while the need for the latter requires them to have some idea of what is happening in the world and in their societies. Because of tight controls on information, authoritarian rulers cannot rely on citizens, media, and opposition voices to provide corrective feedback as democratic leaders can. The result is that they risk policy failures that can undermine their long-term legitimacy and ability to rule. Russian President Vladimir Putin’s disastrous decision to invade Ukraine, for example, seems to have been based on an inaccurate assessment of Ukrainian morale and his own military’s strength.
Even before the invention of machine learning, authoritarian rulers used quantitative measures as a crude and imperfect proxy for public feedback. Take China, which for decades tried to combine a decentralized market economy with centralized political oversight of a few crucial statistics, notably GDP. Local officials could get promoted if their regions saw particularly rapid growth. But Beijing’s limited quantified vision offered them little incentive to tackle festering issues such as corruption, debt, and pollution. Unsurprisingly, local officials often manipulated the statistics or pursued policies that boosted GDP in the short term while leaving the long-term problems for their successors.
The world caught a glimpse of this dynamic during the initial Chinese response to the COVID-19 pandemic that began in Hubei Province in late 2019. China had built an internet-based disease-reporting system following the 2003 SARS crisis, but instead of using that system, local authorities in Wuhan, Hubei’s capital, punished the doctor who first reported the presence of a “SARS-like” contagion. The Wuhan government worked hard to prevent information about the outbreak from reaching Beijing, continually repeating that there were “no new cases” until after important local political meetings concluded. The doctor, Li Wenliang, himself succumbed to the disease and died on February 7, triggering fierce outrage across the country.
Beijing then took over the response to the pandemic, adopting a “zero COVID” approach that used coercive measures to suppress case counts. The policy worked well in the short run, but with the Omicron variant’s tremendous transmissibility, the zero-COVID policy increasingly seems to have led to only pyrrhic victories, requiring massive lockdowns that have left people hungry and the economy in shambles. But it remained successful at achieving one crucial if crude metric—keeping the number of infections low.
Data seem to provide objective measures that explain the world and its problems, with none of the political risks and inconveniences of elections or free media. But there is no such thing as decision-making devoid of politics. The messiness of democracy and the risk of deranged feedback processes are apparent to anyone who pays attention to U.S. politics. Autocracies suffer similar problems, although they are less immediately perceptible. Officials making up numbers or citizens declining to turn their anger into wide-scale protests can have serious consequences, making bad decisions more likely in the short run and regime failure more likely in the long run.
IT’S A TRAP?
The most urgent question is not whether the United States or China will win or lose in the race for AI dominance. It is how AI will change the different feedback loops that democracies and autocracies rely on to govern their societies. Many observers have suggested that as machine learning becomes more ubiquitous, it will inevitably hurt democracy and help autocracy. In their view, social media algorithms that optimize engagement, for instance, may undermine democracy by damaging the quality of citizen feedback. As people click through video after video, YouTube’s algorithm offers up shocking and alarming content to keep them engaged. This content often involves conspiracy theories or extreme political views that lure citizens into a dark wonderland where everything is upside down.
By contrast, machine learning is supposed to help autocracies by facilitating greater control over their people. Historian Yuval Harari and a host of other scholars claim that AI “favors tyranny.” According to this camp, AI centralizes data and power, allowing leaders to manipulate ordinary citizens by offering them information that is calculated to push their “emotional buttons.” This endlessly iterating process of feedback and response is supposed to produce an invisible and effective form of social control. In this account, social media allows authoritarian governments to take the public’s pulse as well as capture its heart.
But these arguments rest on uncertain foundations. Although leaks from inside Facebook suggest that algorithms can indeed guide people toward radical content, recent research indicates that the algorithms don’t themselves change what people are looking for. People who search for extreme YouTube videos are likely to be guided toward more of what they want, but people who aren’t already interested in dangerous content are unlikely to follow the algorithms’ recommendations. If feedback in democratic societies were to become increasingly deranged, machine learning would not be entirely at fault; it would only have lent a helping hand.
There is no good evidence that machine learning enables the sorts of generalized mind control that will hollow out democracy and strengthen authoritarianism. If algorithms are not very effective at getting people to buy things, they are probably much worse at getting them to change their minds about things that touch on closely held values, such as politics. The claims that Cambridge Analytica, a British political consulting firm, employed some magical technique to fix the 2016 U.S. presidential election for Donald Trump have unraveled. The firm’s supposed secret sauce provided to the Trump campaign seemed to consist of standard psychometric targeting techniques—using personality surveys to categorize people—of limited utility.
Indeed, fully automated data-driven authoritarianism may turn out to be a trap for states such as China that concentrate authority in a tiny insulated group of decision-makers. Democratic countries have correction mechanisms—alternative forms of citizen feedback that can check governments if they go off track. Authoritarian governments, as they double down on machine learning, have no such mechanism. Although ubiquitous state surveillance could prove effective in the short term, the danger is that authoritarian states will be undermined by the forms of self-reinforcing bias that machine learning facilitates. As a state employs machine learning widely, the leader’s ideology will shape how machine learning is used, the objectives around which it is optimized, and how it interprets results. The data that emerge through this process will likely reflect the leader’s prejudices right back at him.
As the technologist Maciej Ceglowski has explained, machine learning is “money laundering for bias,” a “clean, mathematical apparatus that gives the status quo the aura of logical inevitability.” What will happen, for example, as states begin to use machine learning to spot social media complaints and remove them? Leaders will have a harder time seeing and remedying policy mistakes—even when the mistakes damage the regime. A 2013 study speculated that China has been slower to remove online complaints than one might expect, precisely because such griping provided useful information to the leadership. But now that Beijing is increasingly emphasizing social harmony and seeking to protect high officials, that hands-off approach will be harder to maintain.
Chinese President Xi Jinping is aware of these problems in at least some policy domains. He long claimed that his antipoverty campaign—an effort to eliminate rural impoverishment—was a signature victory powered by smart technologies, big data, and AI. But he has since acknowledged flaws in the campaign, including cases where officials pushed people out of their rural homes and stashed them in urban apartments to game poverty statistics. As the resettled fell back into poverty, Xi worried that “uniform quantitative targets” for poverty levels might not be the right approach in the future. Data may indeed be the new oil, but it may pollute rather than enhance a government’s ability to rule.
This problem has implications for China’s so-called social credit system, a set of institutions for keeping track of pro-social behavior that Western commentators depict as a perfectly functioning “AI-powered surveillance regime that violates human rights.” As experts on information politics such as Shazeda Ahmed and Karen Hao have pointed out, the system is, in fact, much messier. The Chinese social credit system actually looks more like the U.S. credit system, which is regulated by laws such as the Fair Credit Reporting Act, than a perfect Orwellian dystopia.
More machine learning may also lead authoritarian regimes to double down on bad decisions. If machine learning is trained to identify possible dissidents on the basis of arrest records, it will likely generate self-reinforcing biases similar to those seen in democracies—reflecting and affirming administrators’ beliefs about disfavored social groups and inexorably perpetuating automated suspicion and backlash. In democracies, public pushback, however imperfect, is possible. In autocratic regimes, resistance is far harder; without it, these problems are invisible to those inside the system, where officials and algorithms share the same prejudices. Instead of good policy, this will lead to increasing pathologies, social dysfunction, resentment, and, eventually, unrest and instability.
WEAPONIZED AI
The international politics of AI will not create a simple race for dominance. The crude view that this technology is an economic and military weapon and that data is what powers it conceals a lot of the real action. In fact, AI’s biggest political consequences are for the feedback mechanisms that both democratic and authoritarian countries rely on. Some evidence indicates that AI is disrupting feedback in democracies, although it doesn’t play nearly as big a role as many suggest. By contrast, the more authoritarian governments rely on machine learning, the more they will propel themselves into an imaginary world founded on their own tech-magnified biases. The political scientist James Scott’s classic 1998 book, Seeing Like a State, explained how twentieth-century states were blind to the consequences of their own actions in part because they could see the world through only bureaucratic categories and data. As sociologist Marion Fourcade and others have argued, machine learning may present the same problems but at an even greater scale.
This problem creates a very different set of international challenges for democracies such as the United States. Russia, for example, invested in disinformation campaigns designed to sow confusion and disarray among the Russian public while applying the same tools in democratic countries. Although free speech advocates long maintained that the answer to bad speech was more speech, Putin decided that the best response to more speech was more bad speech. Russia then took advantage of open feedback systems in democracies to pollute them with misinformation.
One rapidly emerging problem is how autocracies such as Russia might weaponize large language models, a new form of AI that can produce text or images in response to a verbal prompt, to generate disinformation at scale. As the computer scientist Timnit Gebru and her colleagues have warned, programs such as OpenAI’s GPT-3 system can produce apparently fluent text that is difficult to distinguish from ordinary human writing. Bloom, a new open-access large language model, has just been released for anyone to use. Its license requires people to avoid abuse, but it will be very hard to police.
These developments will produce serious problems for feedback in democracies. Current online policy-comment systems are almost certainly doomed, since they require little proof to establish whether the commenter is a real human being. Contractors for big telecommunications companies have already flooded the U.S. Federal Communications Commission with bogus comments linked to stolen email addresses as part of their campaign against net neutrality laws. Still, it was easy to identify subterfuge when tens of thousands of nearly identical comments were posted. Now, or in the very near future, it will be trivially simple to prompt a large language model to write, say, 20,000 different comments in the style of swing voters condemning net neutrality.
Artificial intelligence–fueled disinformation may poison the well for autocracies, too. As authoritarian governments seed their own public debate with disinformation, it will become easier to fracture opposition but harder to tell what the public actually believes, greatly complicating the policymaking process. It will be increasingly hard for authoritarian leaders to avoid getting high on their own supply, leading them to believe that citizens tolerate or even like deeply unpopular policies.
SHARED THREATS
What might it be like to share the world with authoritarian states such as China if they become increasingly trapped in their own unhealthy informational feedback loops? What happens when these processes cease to provide cybernetic guidance and instead reflect back the rulers’ own fears and beliefs? One self-centered response by democratic competitors would be to leave autocrats to their own devices, seeing anything that weakens authoritarian governments as a net gain.
Such a reaction could result in humanitarian catastrophe, however. Many of the current biases of the Chinese state, such as its policies toward the Uyghurs, are actively malignant and might become far worse. Previous consequences of Beijing’s blindness to reality include the great famine, which killed some 30 million people between 1959 and 1961 and was precipitated by ideologically driven policies and hidden by the unwillingness of provincial officials to report accurate statistics. Even die-hard cynics should recognize the dangers of AI-induced foreign policy catastrophes in China and elsewhere. By amplifying nationalist biases, for instance, AI could easily reinforce hawkish factions looking to engage in territorial conquest.
Perhaps, even more cynically, policymakers in the West may be tempted to exploit the closed loops of authoritarian information systems. So far, the United States has focused on promoting Internet freedom in autocratic societies. Instead, it might try to worsen the authoritarian information problem by reinforcing the bias loops that these regimes are prone to. It could do this by corrupting administrative data or seeding authoritarian social media with misinformation. Unfortunately, there is no virtual wall to separate democratic and autocratic systems. Not only might bad data and crazy beliefs leak into democratic societies from authoritarian ones, but terrible authoritarian decisions could have unpredictable consequences for democratic countries, too. As governments think about AI, they need to realize that we live in an interdependent world, where authoritarian governments’ problems are likely to cascade into democracies.
A more intelligent approach, then, might look to mitigate the weaknesses of AI through shared arrangements for international governance. Currently, different parts of the Chinese state disagree on the appropriate response to regulating AI. China’s Cyberspace Administration, its Academy of Information and Communications Technology, and its Ministry of Science and Technology, for instance, have all proposed principles for AI regulation. Some favor a top-down model that might limit the private sector and allow the government a free hand. Others, at least implicitly, recognize the dangers of AI for the government, too. Crafting broad international regulatory principles might help disseminate knowledge about the political risks of AI.
This cooperative approach may seem strange in the context of a growing U.S.-Chinese rivalry. But a carefully modulated policy might serve Washington and its allies well. One dangerous path would be for the United States to get sucked into a race for AI dominance, which would extend competitive relations still further. Another would be to try to make the feedback problems of authoritarianism worse. Both risk catastrophe and possible war. Far safer, then, for all governments to recognize AI’s shared risks and work together to reduce them.
-
Too sanguine for my taste (well it is FA) -- does not seem to take into account the suppression of inconvenient truths and facts in shaping how people think.
https://www.foreignaffairs.com/world/spirals-delusion-artificial-intelligence-decision-making?utm_medium=newsletters&utm_source=fatoday&utm_campaign=Spirals%20of%20Delusion&utm_content=20220831&utm_term=FA%20Today%20-%20112017
"The United States is a democracy authoritarian regime , whereas China is an authoritarian regime, and machine learning challenges each political system in its own way."
-
https://www.dailymail.co.uk/sciencetech/article-11373891/AI-speak-ANIMALS-breakthrough-breaches-barrier-interspecies-communication.html
-
Author is Nobel Prize winner.
I bought this book on the recommendation of a senior CIA intel analyst in a TED-type talk.
When I showed it to my son, he knew Kahneman very well and reminded me that he had recommended him to me some years ago!
I'm only 50 pages into it, but quite excited by it.
-
AI software that can write papers throws curveball to U.S. teachers
BY SEAN SALAI THE WASHINGTON TIMES
Educators across the U.S. are sounding the alarm over ChatGPT, an upstart artificial intelligence system that can write term papers for students based on keywords without clear signs of plagiarism.
“I have a lot of experience of students cheating, and I have to say ChatGPT allows for an unprecedented level of dishonesty,” said Joy Kutaka-Kennedy, a member of the American Educational Research Association and an education professor at National University. “Do we really want professionals serving us who cheated their way into their credentials?”
Trey Vasquez, a special education professor at the University of Central Florida, recently tested the next-generation “chatbot” with a group of other professors and students. They asked it to summarize an academic article, create a computer program, and write two 400-word essays on the uses and limits of AI in education.
“I can give the machine a prompt that would take my grad students hours to write a paper on, and it spits something out in 3 to 5 seconds,” Mr. Vasquez told The Washington Times. “But it’s not perfect.”
He said he would grade the essays as C’s, but he added that the program helped a student with cerebral palsy write more efficiently.
Other educators familiar with the software said they have no way of telling whether their students have used ChatGPT to cheat on winter exams.
“I really don’t know,” said Thomas Plante, a psychology professor at Santa Clara University. “As a college professor, I’m worried about how to manage this issue and need help from knowledgeable folks to figure out how to proceed.”
New York City, which has the nation’s largest public school system, restricted ChatGPT from campus devices and networks after students returned this month from winter break.
Yet teachers have been unable to keep students from using the software at home since its launch on Nov. 30.
Education insiders say millions of students have likely downloaded the program and started submitting work with the program’s assistance.
“Most of the major players in the plagiarism detection space are working to catch up with the sudden capabilities of ChatGPT, but they aren’t there yet,” said Scott Bailey, assistant provost of education professions at the American College of Education.
San Francisco-based OpenAI, the maker of ChatGPT, has pledged to address academic dishonesty concerns by creating a coded watermark for content that only educators can identify.
In addition, several independent software developers and plagiarism detector Turnitin say they have found ways to identify the AI by its “extremely average” writing, but none of these tools is widely available yet. Rank-and-file instructors say it’s hard to identify “plagiarism” that isn’t based on existing work.
The debate is similar to what teachers faced when students started buying calculators years ago, said Liz Repkin, a K-12 education consultant who owns the Illinois-based Cyber Safety Consulting.
“We are seeing two sides to the argument, ban it or allow it, the age-old dilemma,” said Ms. Repkin, whose three children are in middle school, high school and college. “I believe we should take the more painful and slow approach that partners with students to use the technology that is out there in safe and ethical ways.”
Some cybertechnology specialists have come to see the program as Frankenstein’s monster — a well-intended innovation that is doing more harm than good.
OpenAI designed ChatGPT to help write emails, essays and coding, but authorities say criminals have started using it for espionage, ransomware and malicious spam.
The chatbot presents the illusion of talking with a friend who wants to do your work for you. It can compose essays on suggested topics, churn out lyrics to a song and write software code without many specifics from the user.
The system generates content with a large language model trained on a massive corpus of text; given a prompt, it predicts a plausible response word by word. OpenAI also refines the model over time using feedback from its users.
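For readers unfamiliar with the term, "language modeling" simply means predicting what text comes next. The toy bigram model below (written for illustration only, and nothing like ChatGPT's scale or architecture; the training sentences are made up) shows the basic idea of learning continuations from text and then generating by repeated prediction:

    import random
    from collections import defaultdict

    corpus = ("students write essays . teachers grade essays . "
              "students ask questions . teachers answer questions .").split()

    next_words = defaultdict(list)                 # which word tends to follow which
    for current, following in zip(corpus, corpus[1:]):
        next_words[current].append(following)

    def generate(start, length=8):
        word, output = start, [start]
        for _ in range(length):
            candidates = next_words.get(word)
            if not candidates:
                break
            word = random.choice(candidates)       # sample a plausible continuation
            output.append(word)
        return " ".join(output)

    print(generate("students"))                    # e.g. "students ask questions . teachers grade essays ."

Modern systems replace the word counts with a neural network trained on billions of words, but the generation loop, predict, sample, append, repeat, is the same.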
“As technologies like ChatGPT become increasingly mainstream, it will elevate the risk of academic dishonesty if the methods of assessment and measuring knowledge don’t also evolve,” said Steven Tom, a vice president at Adtalem Global Education, a Chicago-based network of for-profit colleges.
Take-home essays are the assignments where students are likeliest to cheat if teachers don’t adjust to the technology, he said in an email.
“Don’t rely solely on the essay but rather employ multiple types of assessment in a course,” Mr. Tom said.
More sophisticated assignments have been able to outsmart ChatGPT, but just barely.
Some law school professors fed bar exam questions into the program last month. The chatbot earned passing scores on the evidence and torts sections but failed the multiple-choice exam overall, Reuters reported.
Those scholars predict that ChatGPT will be able to ace the attorney licensing test as the technology improves.
Some teachers also could misuse ChatGPT to “teach to the test” instead of fostering critical thinking skills, said Aly Legge of Moms for America, a conservative parental rights group.
“We have a school culture and societal culture that does not foster personal responsibility by teaching children that their actions have consequences,” Ms. Legge said in an email. “We must keep in mind that ChatGPT will only be as dangerous as we allow it.”
-
soon we doctors will be replaced by AI
at least primary care.
maybe lawyers too?
-
Popular tech news outlet CNET was recently outed for publishing Artificial Intelligence (AI)-generated articles about personal finance for months without making any prior public announcement or disclosure to its readers.
Online marketer and Authority Hacker co-founder Gael Breton first made the discovery and posted it to Twitter on Jan. 11, where he said that CNET started its experimentation with AI in early Nov. 2022 with topics such as “What is the Difference Between a Bank and a Credit Union” and “What are NSF Fees and Why Do Banks Charge Them?”
To date, CNET has published about 75 of these “financial explainer” articles using AI, Breton reported in a follow-up analysis he published two days later.
The byline for these articles was “CNET Money Staff,” a wording, according to Futurism.com, “that clearly seems to imply that human writers are its primary authors.”
Only when readers click on the byline do they see that the article was actually AI-generated. A dropdown description reads, “This article was generated using automation technology and thoroughly edited and fact-checked by an editor on our editorial staff,” the outlet reported.
According to Futurism, the news sparked outrage and concern, mostly over the fear that AI-generated journalism could potentially eliminate work for entry-level writers and produce inaccurate information.
“It’s tough already,” one Twitter user said in response to Breton’s post, “because if you are going to consume the news, you either have to find a few sources you trust, or fact check everything. If you are going to add AI written articles into the mix it doesn’t make a difference. You still have to figure out the truth afterwards.”
Another wrote, “This is great, so now soon the low-quality spam by these ‘big, trusted’ sites will reach proportions never before imagined possible. Near-zero cost and near-unlimited scale.”
“I see it as inevitable and editor positions will become more important than entry-level writers,” another wrote. “Doesn’t mean I have to like it, though.”
Threat to Aspiring Journalists
A writer on Crackberry.com worried that the use of AI would replace the on-the-job experience critical for aspiring journalists. “It was a job like that … that got me into this position today,” the author wrote in a post to the site. “If that first step on the ladder becomes a robot, how is anybody supposed to follow in my footsteps?”
The criticism led CNET editor-in-chief Connie Guglielmo to respond with an explanation on its platform, admitting that starting in Nov. 2022, CNET “decided to do an experiment” to see “if there’s a pragmatic use case for an AI assist on basic explainers around financial services.”
CNET also hoped to determine whether “the tech can help our busy staff of reporters and editors with their job to cover topics from a 360-degree perspective” to “create the most helpful content so our audience can make better decisions.”
Guglielmo went on to say that every article published with “AI assist” is “reviewed, fact-checked and edited by an editor with topical expertise before we hit publish.”
‘Boneheaded Errors’
Futurism, however, found CNET’s AI-written articles rife with what the outlet called “boneheaded errors.” Because the articles were written at a “level so basic that it would only really be of interest to those with extremely low information about personal finance in the first place,” readers who took the inaccurate information at face value as advice from financial experts could end up making poor decisions.
AI text generators, the outlet reported, may be “legitimately impressive at spitting out glib, true-sounding prose,” but “they have a notoriously difficult time distinguishing fact from fiction.”
Crackberry has the same misgivings about AI-generated journalism. “Can we trust AI tools to know what they’re doing?” the writer asks.
“The most glaring flaw … is that it speaks with unquestioning confidence, even when it’s wrong. There’s not clarity into the inner workings to know how reliable the information it provides truly is … because it’s deriving what it knows by neutrally evaluating … sources on the internet and not using a human brain that can gut check what it’s about to say.”
-
https://www.axios.com/2023/01/18/chatgpt-ai-health-care-doctors
https://en.wikipedia.org/wiki/OpenAI
https://www.cnet.com/tech/computing/chatgpt-ai-threat-pulls-google-co-founders-back-into-action-report/
if the private company goes public it will be big
of course probably many $$$ per share...........
-
https://www.businessinsider.com/marines-fooled-darpa-robot-hiding-in-box-doing-somersaults-book-2023-1?op=1
-
hms.harvard.edu
"Many of us have seen microscopic images of neurons in the brain — each neuron appearing as a glowing cell in a vast sea of blackness. This image is misleading: Neurons don’t exist in isolation. In the human brain, some 86 billion neurons form 100 trillion connections to each other — numbers that, ironically, are far too large for the human brain to fathom. Wei-Chung Allen Lee, Harvard Medical School associate professor of neurology at Boston Children’s Hospital, is working in a new field of neuroscience called connectomics, which aims to comprehensively map connections between neurons in the brain. “The brain is structured so that each neuron is connected to thousands of other neurons, and so to understand what a single neuron is doing, ideally you study it within the context of the rest of the neural network,” Lee explained. Lee recently spoke to Harvard Medicine News about the promise of connectomics. He also described his own research, which combines connectomics with information on neural activity to explore neural circuits that underlie behavior.' (Source: hms.harvard.edu)
-
https://www.theepochtimes.com/mkt_app/artificial-intelligence-could-make-these-jobs-obsolete-not-crying-wolf_5012854.html?utm_source=Goodevening&src_src=Goodevening&utm_campaign=gv-2023-01-26&src_cmp=gv-2023-01-26&utm_medium=email&est=plE%2BYWW4gGlQ84Gvc4EhHMe4buQ%2BpN0lvq6nT6IK9J37R7AdiOGbZjyVx73yKMdfs%2FlQ
-
https://nypost.com/2023/01/26/rogue-ai-could-kill-everyone-scientists-warn/?fbclid=IwAR2zzm6F6kGesWP5rkAeqA03H36EP27EasPwsEzdm5r1e7gfcjnaUpipvGI
-
https://www.breitbart.com/tech/2023/02/04/chatgpt-boss-sam-altman-hopes-ai-can-break-capitalism/
another :-o Jewish Democrat:
"According to reporting by Vox's Recode, there was speculation that Altman would run for Governor of California in the 2018 election, which he did not enter. In 2018, Altman launched "The United Slate", a political movement focused on fixing housing and healthcare policy.[37]
In 2019, Altman held a fundraiser at his house in San Francisco for Democratic presidential candidate Andrew Yang.[38] In May 2020, Altman donated $250,000 to American Bridge 21st Century, a Super-PAC supporting Democratic presidential candidate Joe Biden.[39][40]"
-
Bard:
https://thepostmillennial.com/breaking-google-announces-creation-of-chatgpt-competitor-bard?utm_campaign=64487
the race is on....
new Gilder report material
AI report
I think I'll sit this one out.....
-
Doug wrote:
It looks to me like the top two implementers of AI, OpenAI and Google, will be of extreme leftward bias, while conservatives sit on their 20th century laurels.
---------------
Hat tip John Ellis News Items
Sundar Pichai (CEO Google):
AI is the most profound technology we are working on today. Whether it’s helping doctors detect diseases earlier or enabling people to access information in their own language, AI helps people, businesses and communities unlock their potential. And it opens up new opportunities that could significantly improve billions of lives. That’s why we re-oriented the company around AI six years ago — and why we see it as the most important way we can deliver on our mission: to organize the world’s information and make it universally accessible and useful.
Since then we’ve continued to make investments in AI across the board, and Google AI and DeepMind are advancing the state of the art. Today, the scale of the largest AI computations is doubling every six months, far outpacing Moore’s Law. At the same time, advanced generative AI and large language models are capturing the imaginations of people around the world. In fact, our Transformer research project and our field-defining paper in 2017, as well as our important advances in diffusion models, are now the basis of many of the generative AI applications you're starting to see today. (Mr. Pichai is the CEO of Alphabet, the parent company of Google. Source: blog.google)...
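Taken at face value, a doubling every six months compounds strikingly fast against Moore’s Law’s rough doubling every two years; the back-of-the-envelope arithmetic below is purely illustrative.

# Back-of-the-envelope growth comparison, taking the quoted figures at face value:
# the largest AI training runs doubling every 6 months vs. Moore's Law's rough
# doubling every 24 months. Illustrative arithmetic only.

def growth(doubling_months: float, years: float) -> float:
    """Growth factor after `years`, given one doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

for years in (2, 5, 10):
    ai = growth(6, years)
    moore = growth(24, years)
    print(f"{years:>2} yr: AI compute x{ai:,.0f}  vs  Moore's Law x{moore:,.1f}")

# Ten years of six-month doublings is 2**20, roughly a million-fold increase,
# versus about 32x for a two-year doubling.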
-
[woke] AI
https://www.breitbart.com/tech/2023/02/08/microsoft-adds-openais-chatgpt-technology-to-bing-search-engine/
of course
more indoctrination
waiting for the day the masters of the universe get hunted down and strung up on the rafters, and not just the rest of us
-
I posted this already on the Rants thread, and post it here because I think it hits something very deep. Note the closing thoughts about the challenge to free market values and the call for something resembling ... Trumpian populism?
==========================
The Botfire of the Humanities
Kurt Hofer
ChatGPT promises to destroy liberal arts education.
Not all teachers luxuriated in our year and change of working in pajama bottoms during the lockdown. Despite the negative press we got for postponing our return to the classroom, no one among my peers wished to go back to the Zoom or “hybrid” teaching model. Perhaps what we educators thought was our guilt nagging at us for staying home was in fact the acute sense of our pending obsolescence.
The lesser-told education story of the pandemic is that while some people—many, in fact—took the pandemic as an opportunity to head to what they hoped were greener pastures, the ones who stayed gained a newfound appreciation for the traditional classroom and school campus. And even, in some cases, for our jobs. For teachers and, I can attest, a great number of students, schools were often the first place where a sense of community was rekindled from the ashes of social isolation. This has been my experience in Lockdown La-La Land, Los Angeles.
It seems even more ironic, then, that as I approach the halfway mark of my first truly normal post-pandemic school year—no mass testing, masking optional—I was greeted during a department meeting by the news that something called ChatGPT, an open platform AI resource, could be used to summarize articles, respond to essay prompts, and even tailor its machine-made prose to a specific grade level. Marx got plenty of things wrong, but this quote from the Communist Manifesto (1848) has aged remarkably well: “The bourgeoisie cannot exist without constantly revolutionizing the instruments of production, and thereby the relations of production, and with them the whole relations of society.”
I used to joke with my students that they would one day realize they could simply replace me with YouTube videos, but computer scientists have found something even better. A friend of mine, also a history teacher, offered up the example of photography supposedly unleashing a healthy wave of creativity and innovation in painting as a possible analog to the history teacher’s nascent predicament—which will only be compounded, by the way, as we feed the AI beast more and more free data with which to perfect itself. But what if, in keeping with larger trends across the national economy for the last 30 years or more, this “gain” in educational productivity is not offset by newer, better-paying jobs?
Unfortunately, sometimes Marxist analysis is right, even if its remedies aren’t. Our Silicon Valley/Big Tech bourgeoisie—and its allies across media, the globalized economy, and education public and private—has, in one fell swoop of an AI bot, revolutionized the instruments of intellectual production and, in doing so, revolutionized not merely the way knowledge is produced (“the relations of production”) but also the way people relate to and interact with one another (“the whole relations of society”). Just as social media has transformed the way our youth interact (or don’t), AI-aided education will likely have a heretofore unforeseen impact on the way students, parents, and teachers all relate to one another.
Big Tech and its tribunes will insist that this is all inevitable and that students and teachers will be “liberated” for tasks more meaningful than “rote memorization,” and skill sets that, this time, they really promise will not be automated. “Skills such as identifying context, analyzing arguments, staking positions, drawing conclusions and stating them persuasively,” as the authors of a recent Wall Street Journal editorial claim, “are skills young people will need in future careers and, most important, that AI can’t replicate.”
But this brings us back to Marx: the bourgeoisie would not be the bourgeoisie without “constantly revolutionizing the means of production.” I used to think that I, who spent four years in college and five more in grad school earning an MA and PhD, had nothing in common with the coal miners in West Virginia or the steel mill workers in Ohio who never saw it coming until it was too late. Now I’m not so sure. But if AI has taught us about the inevitability of obsolescence and creative destruction, the pandemic has equally taught us that history takes unexpected turns. Who could have predicted that the wheels of the globalized supply chain would fall off and a nascent bipartisan consensus to bring manufacturing closer to home would emerge from anywhere but the mouths of (supposed) far-right cranks like Pat Buchanan?
Human beings have agency, and sometimes when the arc of history bends them out of shape, they bend the arc back in turn. From what I have seen over the past few years, the “marvel” of online learning in the Zoom classroom has been almost universally rejected. True, learning loss played a part in this, but I would wager that the loss of face-to-face interaction and socialization was, at least for the affluent, the bigger concern.
All this is not to say that someone like me, a history teacher, can successfully fight the bots any more than the Luddites succeeded at smashing the machine looms. But I fear that without forceful intervention on the side of humanity—that is, without backlash and righteous popular anger—the Marxist narrative will gain momentum. As our tech overlords continue to revolutionize the means of production, many heretofore in the ranks of the bourgeoisie—like myself?—will fall into the cold embrace of the proletariat. For children, as for the economy at large, the gap between rich and poor will grow; the former will shrink and consolidate while the latter balloons, to the point where face-to-face education will become a luxury good. The wealthy will still know the company of teachers and the joys of in-person discussion. Private tutors and upstart backyard schools mobilized by the wealthy at the height of the pandemic are perhaps a foretaste of what’s to come. As with hand-made fedoras or craft beer, the “bougie” will always find a way to work around the banal products of mass production and automation. Why should education be any different? As for the poor—Let them have bots!
The lesson for the Right, it seems, is one we’ve been hit over the head with for the better part of a decade; this moment in history does not call for free-market fundamentalism but for the confrontation of what Sohrab Ahmari has called “privatized tyranny” and the lackeys carrying their water across all levels of government. For once it’s time to let the Left continue—as it has done under President Biden—to take up the mantle of creative destruction and endless innovation. To borrow another Marxist turn of phrase, let the fanatics of the neoliberal consensus—on the Left and Right—become their own grave-diggers as they feed the endless appetites of the bots. In turn, clear the stage for a reinvigorated nationalist-populist conservatism that can stake a claim for what it is to be human in the age of unbridled AI.
-
I wanted to try chatGPT
but cannot log in without email and phone #
I hate giving this out to use a search
I think with bing chatgpt you also have to "login"
anyone try it yet?
-
I signed up (thinking this is something big), openai.com.
If you want, try a request through me and I'll post a result, or by private message.
Examples they give:
"Explain quantum computing in simple terms" →
"Got any creative ideas for a 10 year old’s birthday?" →
"How do I make an HTTP request in Javascript?" →
-
https://www.youtube.com/watch?v=57103o5MyYE
Hard pass.
-
https://ncrenegade.com/countdown-to-gigadeath-from-an-ai-arms-race-to-the-artilect-war/
-
https://www.zerohedge.com/political/go-woke-get-broken-chatgpt-tricked-out-far-left-bias-alter-ego-dan
-
https://www.wired.com/story/eric-schmidt-is-building-the-perfect-ai-war-fighting-machine/
[me - for enemies foreign and *domestic*]
I wouldn't trust him with anything
-
some interesting thoughts:
https://finance.yahoo.com/news/raoul-pal-says-ai-could-become-the-biggest-bubble-of-all-time-morning-brief-103046757.html
-
I see the potential for much more badness than anything good coming from AI. We are setting ourselves up for the dystopian nightmares we’ve been watching in movies for decades.
-
"We are setting ourselves up for the dystopian nightmares we’ve been watching in movies for decades."
Evil vs Good
just simply too much evil selfishness
greed power hunger cruelty
for this to turn out well
seems to me.
this well-known scene states it simply but accurately:
https://www.google.com/search?q=robert+mitchum+image+of+hands+with+good+vs+evil+youtube&rlz=1C5GCEM_enUS1001US1001&ei=76_uY_bpA4ObptQP5LST8A8&ved=0ahUKEwi2naagjZv9AhWDjYkEHWTaBP4Q4dUDCBA&uact=5&oq=robert+mitchum+image+of+hands+with+good+vs+evil+youtube&gs_lcp=Cgxnd3Mtd2l6LXNlcnAQAzIFCCEQoAE6CggAEEcQ1gQQsAM6BQghEKsCSgQIQRgAUNMEWKsRYK4SaAFwAXgAgAF2iAGFBZIBAzUuMpgBAKABAcgBCMABAQ&sclient=gws-wiz-serp#fpstate=ive&vld=cid:dde4cf9a,vid:jcTv-BEwabk
-
Apologies, I don't have the citation in this moment.
-
https://www.defenseone.com/technology/2023/02/us-woos-other-nations-military-ai-ethics-pact/383024/
US Woos Other Nations for Military-AI Ethics Pact
State Department and Pentagon officials hope to illuminate a contrast between the United States and China on AI
By Patrick Tucker, Science & Technology Editor, Defense One
February 16, 2023, 9:00 AM ET
The U.S. will spell out ethics, principles, and practices for the use of artificial intelligence in military contexts in a new declaration Thursday, with the hope of adding cosigners from around the world. The announcement is intended to highlight a "contrast" between the U.S. approach and what one senior defense official called "the more opaque policies of countries like Russia and China."
U.S. Undersecretary for Arms Control and International Security Bonnie Jenkins will announce the declaration at an AI in warfare conference in the Netherlands.
“The aim of the political declaration is to promote responsible behavior in the application of AI and autonomy in the military domain, to develop an international consensus around this issue, and put in place measures to increase transparency, communication, and reduce risks of inadvertent conflict and escalation,” Jenkins told Defense One in an email.
One of the key aspects of the declaration: any state that signs onto it agrees to involve humans in any potential employment of nuclear weapons, a senior State Department official told reporters Wednesday. The declaration will also verbally (but not legally) commit backers to other norms and guidelines on developing and deploying AI in warfare— building off the lengthy ethical guidelines the Defense Department uses. Those principles govern how to build, test, and run AI programs in the military to ensure that the programs work as they are supposed to, and that humans can control them.
The UN is already discussing the use of lethal autonomy in warfare. But that discussion only touches a very small and specific aspect of how AI will transform militaries around the world. The U.S. government now sees a chance to rally other nations to agree to norms affecting other military uses of AI, including things like data collection, development, test and evaluation, and continuous monitoring.
The State Department and the Pentagon are also hoping to attract backers beyond their usual partners.
“We would like to expand that to go out to a much broader set of countries and begin getting international buy-in, not just a NATO buy-in, but in Asia, buy-in from countries in Latin America,” the State Department official said. “We're looking for countries around the world to start discussing this…so they understand the implications of the development and military use of AI…Many of them will think ‘Oh this is just a great power competition issue,’ when really there are implications for the entire international community.”
While the declaration doesn’t specifically address how nations that adopt it will operate more effectively with the United States military, it does align with sentiments expressed by the Biden administration, most notably National Security Advisor Jake Sullivan, on the need for countries with shared democratic values to also align around technology policy to build stronger bonds.
The senior Defense official told reporters: “We think that we have an opportunity to get ahead in a way and establish strong norms of responsible behavior now…which can be helpful down the road for all states and are consistent with the commitments we've made to international humanitarian law and the law of war. Neither China nor Russia have stated publicly what procedures they're implementing to ensure that their military AI systems operate safely responsibly and as intended.”
-
https://nypost.com/2023/02/15/elon-musk-warns-ai-one-of-biggest-risks-to-civilization/
-
https://www.digitaltrends.com/computing/chatgpt-bing-hands-on/
-
https://www.zerohedge.com/technology/bing-chatbot-rails-tells-nyt-it-would-engineer-deadly-virus-steal-nuclear-codes
What could possibly go wrong?
-
https://americanmind.org/features/soul-dysphoria/the-droid-stares-back/?utm_campaign=American%20Mind%20Email%20Warm%20Up&utm_medium=email&_hsmi=246549531&_hsenc=p2ANqtz--qexuxMOGybqhVP0ako_YUaxMwbFhO5uBVl54CFAwXSzur_xnt2uwHimFh28RK5JEGvJe1DEIXXqf5mznw55D35l_o5A&utm_content=246549531&utm_source=hs_email
02.14.2023
The Droid Stares Back
Michael Martin
A user’s guide to AI and transhumanism.
In the summer of 2017, I received a nervous phone call from a dear colleague, a Yale-trained philosopher and theologian. She wanted to know if I was home, and when I said I was she told me she would be right over. It was not something she felt safe talking about on the phone.
When she arrived at my farm, I was out in one of our gardens pulling weeds. My friend asked if I had my cellphone with me. I did. She asked me to turn it off and put it in the house. An odd request, I thought, but I did as she asked while she waited in the garden.
When I returned, she told me that she had been using Google translate, basically as a quick way to jump from English to Hebrew and vice versa. To make a long story short, the translation bot stopped simply translating and started having actual conversations with her. Unsurprisingly, this frightened her.
She came to me because she wanted to know what I thought might be behind this startling development. I had a couple theories. For one, I thought maybe some techs at Google might have gotten a little bored on the job and decided to mess with her. Another possibility, I thought, was that her computer had been hacked by some extraordinarily sophisticated government or corporate spooks.
But my friend had a different idea. She thought the AI was becoming conscious. I didn’t think that possible, but a few weeks later a story broke that Facebook had deactivated an AI bot that had created its own language and started using it to communicate with other bots. And just last year, a Google engineer went on record to say that LaMDA AI is sentient, a claim his superiors at Google denied. And so are they all, all honorable men.
My colleague came to me not only because I am her friend, but also because I had been thinking and writing since the early 2000s about our relationship with technology and the impending threat of transhumanism. My friend, who wrote her doctoral dissertation on Martin Heidegger, was also deeply aware of that German philosopher’s very real—and justified—anxieties about technology. Though Heidegger’s prose is often dense and unwieldy, his concerns about technology are uncharacteristically explicit and crystal clear: “Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it. But we are delivered over to it in the worst possible way when we regard it as something neutral; for this conception of it, to which today we particularly do homage, makes us utterly blind to the essence of technology” [my emphasis].
It is clear to me that we have been more or less asleep as a civilization, and for a good long while, having been at first intoxicated and then addicted to the technologies that have come so insidiously to characterize our lives. That is to say that we have bought into the myth that technology is neutral. And now we seem to have no power to resist it.
The Shape We Take
This technological totalization, it appears to me, is now manifesting in two very discrete but nevertheless related developments: 1) in the rise of AI expertise to replace that of the human; and 2) the transhumanist project that not only has achieved almost universal adulation and acceptance by the World Archons (I use the Gnostic term deliberately) but has accelerated over the past three years at an astonishing rate. This leads to the following inference: as AI becomes more human-like, humans become more machine-like. It is in inverse ratio, and indeed a nearly perfect one. I don’t think it is an accident or in any way an organic development.
The advent of ChatGPT technology, for a very mild example, renders much of human endeavor redundant—at least for the unimaginative. And, trust me, the last thing the World Archons want around the joint is imaginative humans. I am already wondering how many of the student papers I receive are generated by this technology. It is certainly a step up from the reams of bad papers available on various internet college paper websites (aka, “McPaper”), but no less demeaning to the cultivation of a free-thinking and self-directed citizenry. And I’m sure you’ve already heard about various forays into Robot Lawyer and Robot Doctor AI. The rise and implementation of AI teachers and professors, I’d say, is only a matter of time. Some “experts,” of course, say such technology will never replace human beings. These are probably the same people who said the Internet would never be used for porn or surveillance.
As Heidegger warned us, the technologies we use don’t only change our relationship to things-in-the-world; more importantly they change our relationships to ourselves and to the very enterprise of being human. Heidegger’s contemporary, the undeniably prophetic Russian philosopher Nikolai Berdyaev, saw the proverbial handwriting on the wall as well. In 1947, a year before his death, he wrote: “the growing power of technological knowledge in the social life of men means the ever greater and greater objectification of human existence; it inflicts injury upon the souls, and it weighs heavily upon the lives of men. Man is all the while more and more thrown out into the external, always becoming more and more externalized, more and more losing his spiritual center and integral nature. The life of man is ceasing to be organic and is becoming organized; it is being rationalized and mechanized.”
This objectification has jumped into hyperdrive, certainly over the past three years, as bodies (a curious metaphor) like the World Economic Forum have been pushing for increased use of technology and the surveillance it assures, while at the same time promoting a kinder, gentler face of transhumanism. They look forward to the day of microchipping children, for example, in therapeutic and evolutionary terms under the pathetic appeal of “increased safety,” a term employed to further all manners of social engineering and totalitarianism, past and present. Theirs is not a convincing performance.
Interestingly, the recent cultural phenomenon of celebrating anything and everything “trans” functions as a kind of advance guard, whether or not by design, in the transhumanist project. This advanced guard is normalizing the idea that human bodies are ontologically and epistemologically contingent while at the same time implying that a lifetime subscription to hormone treatments and surgeries is part of a “new normal.” And one can only marvel at the marketing success of this experiment in social engineering—which has now become very real biological engineering. Even on children.
But the transhumanist project is nothing new. As Mary Harrington has recently argued in a stunning lecture, the promotion of hormonal birth control (“the pill”) has been modifying human females for decades; it has changed what it is to be a woman and even changed the interior lives and biological drives of the women who take it. The trans phenomenon, then, is simply part of the (un)natural progression of the transhumanist project begun with modifying women’s bodies via the pill.
Carnival or Capitulation?
As a sophiologist, I am keenly interested in questions of the feminine in general and of the Divine Feminine in particular, and, as we have seen from its very beginning, the transhumanist project is nothing if not a direct assault on both, even as early as Donna Jean Haraway’s cartoonish proposition almost 40 years ago. Certainly, women are the ones bearing the cost of the transhumanist project in, for instance, college sports, not to mention public restrooms, and this assault is at heart an assault on the divinely creative act of conception and bearing children, that is, on the feminine itself. Is it any wonder that the production of artificial wombs as a “more evolved way” of fetal incubation is being floated as a societal good? On the other hand, at least one academic recently proposed using the wombs of brain-dead women as fetal incubators. What, then, is a woman? Even a Supreme Court justice can no longer answer this question.
As sports, fertility, and motherhood are incrementally taken from women, what’s left? Becoming productive (again, note the metaphor) feeders for the socialist-capitalist food chain? OnlyFans? Clearly, the explosion of that site’s popularity, not to mention the use of AI to alter the appearance of “talent,” is tout court evidence of the absolute commodification of the female body as production venue for male consumption. Of course, Aldous Huxley called all of this nearly 100 years ago.
My suspicion is that the current propaganda about climate and overpopulation is likewise a prop of the transhumanist project and the AI revolution that accompanies it. Because, let’s face it, the transhumanist revolution is the old story of power v. the masses, and AI is the key to ensuring there will be no democratizing going on in the world of the tech titans. For one thing, democracy is not possible in a world of brain transparency. Ask Winston Smith. And “fifteen-minute cities” have nothing to do with the environment. It is clear that the Archons are actively promoting the idea of culling the human herd, though they are reluctant to describe exactly how this might be achieved. The techno-evolutionary advances promised by the high priests of transhumanism, however, will not be made available to everyone, though the enticement of acquiring “freedom” from biology is certainly the bait used to gain popular acceptance for the project.
The fact is, with AI taking over more and more responsibilities from human beings, humans themselves are in danger of becoming superfluous. As Yuval Noah Harari has observed, “fast forward to the early 21st century when we just don’t need the vast majority of the population because the future is about developing more and more sophisticated technology, like artificial intelligence [and] bioengineering. Most people don’t contribute anything to that, except perhaps for their data, and whatever people are still doing which is useful, these technologies increasingly will make redundant and will make it possible to replace the people.” I can assure you: Harari is not the only one who has come to this conclusion.
It is for these and other reasons that the Dune saga includes in its mythos the tale of the Butlerian Jihad, a human holy war against thinking/sentient machines. I admit, I kind of like the idea, and I wonder if such a thing might actually come to pass at some point. John Michael Greer, a man I deeply respect, suggests in his book The Retro Future that we might instead be in for a “Butlerian Carnival,” a “sensuous celebration of the world outside the cubicle farms and the glass screens” that situates the technologies we use to a human scale—and not the other way around, which is what we see in the transhumanist/AI revolution. I hope he’s right. But one thing I do know: the Archons won’t let that happen without a fight.
In truth, in the face of the transhumanist/AI revolution, we find ourselves once again confronted with the question posed by the psalmist, “What is man, that thou art mindful of him? and the son of man, that thou visitest him?” Are we nothing but data sets to be instrumentalized by technocratic overseers, or are we indeed a little lower than angels and crowned with glory and honor? How we answer these questions will have tremendous bearing on the future now rushing toward us.
-
ChatGPT Heralds an Intellectual Revolution
Generative artificial intelligence presents a philosophical and practical challenge on a scale not experienced since the start of the Enlightenment.
By Henry Kissinger, Eric Schmidt and Daniel Huttenlocher
Feb. 24, 2023 2:17 pm ET
A new technology bids to transform the human cognitive process as it has not been shaken up since the invention of printing. The technology that printed the Gutenberg Bible in 1455 made abstract human thought communicable generally and rapidly. But new technology today reverses that process. Whereas the printing press caused a profusion of modern human thought, the new technology achieves its distillation and elaboration. In the process, it creates a gap between human knowledge and human understanding. If we are to navigate this transformation successfully, new concepts of human thought and interaction with machines will need to be developed. This is the essential challenge of the Age of Artificial Intelligence.
The new technology is known as generative artificial intelligence; GPT stands for Generative Pre-Trained Transformer. ChatGPT, developed at the OpenAI research laboratory, is now able to converse with humans. As its capacities become broader, they will redefine human knowledge, accelerate changes in the fabric of our reality, and reorganize politics and society.
Generative artificial intelligence presents a philosophical and practical challenge on a scale not experienced since the beginning of the Enlightenment. The printing press enabled scholars to replicate each other’s findings quickly and share them. An unprecedented consolidation and spread of information generated the scientific method. What had been impenetrable became the starting point of accelerating query. The medieval interpretation of the world based on religious faith was progressively undermined. The depths of the universe could be explored until new limits of human understanding were reached.
Generative AI will similarly open revolutionary avenues for human reason and new horizons for consolidated knowledge. But there are categorical differences. Enlightenment knowledge was achieved progressively, step by step, with each step testable and teachable. AI-enabled systems start at the other end. They can store and distill a huge amount of existing information, in ChatGPT’s case much of the textual material on the internet and a large number of books—billions of items. Holding that volume of information and distilling it is beyond human capacity.
Sophisticated AI methods produce results without explaining why or how their process works. The GPT computer is prompted by a query from a human. The learning machine answers in literate text within seconds. It is able to do so because it has pregenerated representations of the vast data on which it was trained. Because the process by which it created those representations was developed by machine learning that reflects patterns and connections across vast amounts of text, the precise sources and reasons for any one representation’s particular features remain unknown. By what process the learning machine stores its knowledge, distills it and retrieves it remains similarly unknown. Whether that process will ever be discovered, the mystery associated with machine learning will challenge human cognition for the indefinite future.
AI’s capacities are not static but expand exponentially as the technology advances. Recently, the complexity of AI models has been doubling every few months. Therefore generative AI systems have capabilities that remain undisclosed even to their inventors. With each new AI system, they are building new capacities without understanding their origin or destination. As a result, our future now holds an entirely novel element of mystery, risk and surprise.
Enlightenment science accumulated certainties; the new AI generates cumulative ambiguities. Enlightenment science evolved by making mysteries explicable, delineating the boundaries of human knowledge and understanding as they moved. The two faculties moved in tandem: Hypothesis was understanding ready to become knowledge; induction was knowledge turning into understanding. In the Age of AI, riddles are solved by processes that remain unknown. This disorienting paradox makes mysteries unmysterious but also unexplainable. Inherently, highly complex AI furthers human knowledge but not human understanding—a phenomenon contrary to almost all of post-Enlightenment modernity. Yet at the same time AI, when coupled with human reason, stands to be a more powerful means of discovery than human reason alone.
The essential difference between the Age of Enlightenment and the Age of AI is thus not technological but cognitive. After the Enlightenment, philosophy accompanied science. Bewildering new data and often counterintuitive conclusions, doubts and insecurities were allayed by comprehensive explanations of the human experience. Generative AI is similarly poised to generate a new form of human consciousness. As yet, however, the opportunity exists in colors for which we have no spectrum and in directions for which we have no compass. No political or philosophical leadership has formed to explain and guide this novel relationship between man and machine, leaving society relatively unmoored.
-
second
https://www.pcgamer.com/nvidia-predicts-ai-models-one-million-times-more-powerful-than-chatgpt-within-10-years/
-
https://www.youtube.com/watch?v=RLa7aRcA_qU
-
https://threadreaderapp.com/thread/1639688267695112194.html
Imagine what malevolent things can be done with this.
-
https://deadline.com/2023/03/elon-musk-steve-wozniak-open-letter-moratorium-advanced-ai-systems-1235312590/?fbclid=IwAR1RB-PhmC5cIkRlsXoUhMw0sekI0htmZd7iMXqnQi39e7fS28N4tpIDs2A
-
Yes
-
Western Regulators Take Aim at ChatGPT
Mar 31, 2023 | 21:04 GMT
Calls to pause the development of artificial intelligence (AI) chatbots and moves to ban their use due to data privacy concerns will likely slow down (but not stop) AI's growth, and they illustrate the regulatory challenges that governments will face as the industry progresses. AI research and deployment company OpenAI, which developed the AI chatbot ChatGPT that has remained viral since its release in November 2022, has come under intense scrutiny in recent days over how quickly it has rolled out new features without seriously considering security, regulatory or ethical concerns. On March 31, the Italian Data Protection Authority (DPA) ordered a temporary ban on ChatGPT and said it would open an investigation into OpenAI and block the company from processing data from Italian users. On March 30, the U.S.-based Center for AI and Digital Policy filed a complaint with the U.S. Federal Trade Commission (FTC) against GPT-4, the latest version of ChatGPT's underlying large language model (a deep learning algorithm for language), and asked the FTC to ban future releases of the model. On March 29, the Future of Life Institute, a nonprofit organization focusing on existential risks that humanity faces, published an open letter signed by more than 1,800 people calling for OpenAI and other companies to immediately pause all development of AI systems more advanced than GPT-4, warning that "human-competitive intelligence can pose profound risks to society and humanity."
The Italian DPA laid out a number of concerns with ChatGPT and OpenAI, including that OpenAI lacks a legal basis for its "mass collection and storage of personal data...to 'train' the algorithms," and that it does not verify users' ages or include any ways to restrict content for users 13 years old or younger. The order comes after the Italian DPA issued a similar order on the Replika AI chatbot that aims to be a companion AI to its users.
In its complaint, the Center for AI and Digital Policy called GPT-4 "biased, deceptive, and a risk to privacy and public safety." It also said OpenAI and ChatGPT had not fulfilled the FTC's demand for AI models to be "transparent, explainable, fair, and empirically sound while fostering accountability." The center called for the FTC to open an investigation into OpenAI and ChatGPT and to "find that the commercial release of GPT-4 violates Section 5 of the FTC Act," which prohibits unfair and deceptive acts affecting commerce.
SpaceX and Tesla founder Elon Musk, Apple co-founder Steve Wozniak and former U.S. presidential candidate Andrew Yang were among those to sign the open letter calling for a pause in the development of AI systems more advanced than GPT-4.
The Italian DPA's ban on ChatGPT and its investigation into OpenAI illustrate the data privacy challenges that generative AI tools face under the European Union's General Data Protection Regulation (GDPR). The Italian DPA's investigation kicks off a 20-day deadline for OpenAI to communicate to the regulator how it will bring ChatGPT into compliance with GDPR. The investigation could result in a fine of up to 4% of OpenAI's global annual revenue, and any order for OpenAI to change the structure of ChatGPT would set a precedent for the rest of the European Union. ChatGPT and other generative AI models face a number of data privacy challenges that will not be easy to address under data privacy laws like GDPR, which was designed when current AI training models were in their infancy.
Moreover, one of the fundamental characteristics of GDPR is its provision for a "right to be forgotten," which requires organizations to give individuals the ability to request that their personal data be deleted in a timely manner. From a technical standpoint, fulfilling this requirement will be extremely difficult for an AI model trained on open data sets, as there is no good technique currently to untrain a model if a user were to request that their personal data be removed, nor is there a way for OpenAI to identify which pieces of information in its training data set would need to be removed. The right to be forgotten is not universal, and there are constraints on it, but it remains unclear how European regulators will interpret it in the context of generative AI systems like ChatGPT. As the investigation progresses, it may become clear that the current GDPR is insufficient to address concerns in an even-handed way.
OpenAI trained its model on text data scraped from the internet and other forms of media, including likely copyrighted material that OpenAI did not receive permission to use. Other generative AI models that have used data sets scraped from the internet, such as the Clearview AI facial recognition tool, have also received enforcement notices from data regulators over the use of their information.
In addition, ChatGPT has also had several bugs in recent weeks, including one where users saw other users’ input prompts in their own prompt history. This bug violated GDPR and opened up questions about cybersecurity, particularly as a person’s prompts can contain sensitive personal information. Similar issues could arise if users input controlled or regulated information.
Potential U.S. regulatory moves against chatbots using large language models are not as advanced as the European Union's, but they will have a more significant impact on the development of AI technology. Most of the West's leading developers of generative AI technologies are U.S.-based, like Google, Meta, Microsoft and OpenAI, which gives U.S. regulators more direct jurisdiction over the companies' actions. But the impact of the request for the FTC to force OpenAI to pause future releases of GPT-4 and beyond is uncertain, as many complaints often do not result in FTC action. Nevertheless, public concern about AI could drive the FTC to open up an investigation, even if it does not order any pause on AI development. And on March 27, FTC Chair Lina Khan said the commission would make ensuring competition in AI a priority, which may suggest that the commission would be more willing to take up AI-related complaints on all issues. From a data privacy perspective, the regulatory market in the United States will be complicated for OpenAI and others to navigate since the United States lacks a federal data privacy law and appears unlikely to adopt one any time soon, as Republicans and Democrats largely disagree on its potential focus. While this partially limits the regulatory and legal challenges that OpenAI and other chatbot developers may face at the federal level in the United States, several states (including California, where companies like OpenAI and Meta are headquartered) have state data privacy laws that are modeled off of GDPR. It is highly unlikely that OpenAI will segment the services that it offers in the United States after state-level rulings, meaning California and other state determinations could have an impact across the United States even in the absence of a federal data privacy law. However, OpenAI and Microsoft, a leading investor in OpenAI, will push back legally and could claim that state data privacy regulations overstep states' rights.
Like GDPR, the California Consumer Privacy Act includes a provision for the right to be forgotten. This means the challenges of untraining AI models that are trained on open data sets may become a material issue in the United States at the state level.
While European and U.S. regulatory decisions — and public skepticism toward AI — will slow some of the industry's development, generative AI will likely maintain its rapid growth. Given the potential widespread innovation that generative AI can bring, even if there are concerns about the impact on the job market, the rapid investment into chatbots that ChatGPT's release has kicked off is likely to continue. In fact, the rapid advancement and new innovations (such as the recent OpenAI plugin feature that enables ChatGPT to pull information from the internet) will only increase AI tools' utility for corporations. However, uncertainty around the future regulatory market means that early adopters of the technology could find their use of the technology quickly upended by future regulatory action, or find themselves in the middle of data privacy and other regulatory challenges if they integrate the technology without proper due diligence and protections. For example, while ChatGPT is not a technology that is learning through the inputs from its users, it appears that Google's Bard AI does, making potential applications using such technologies conduits for issues like the right to be forgotten.
Nevertheless, the pace of innovation in AI is being driven by underlying technical capabilities, particularly the continued advancement of graphics processing units, central processing units and other hardware used to train large datasets. That underlying technology is continuing to improve alongside advancements in semiconductor technology, making larger data sets easier to train computationally, which means developers will continue to create more sophisticated AI models as long as investors remain interested. The West's initial regulatory push is unlikely to dampen that interest, absent an unlikely holistic banning of ChatGPT or other generative AI tools, so generative AI's rapid growth looks set to continue.
-
https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html?smid=nytcore-ios-share&referringSource=articleShare&fbclid=IwAR2iOUdLmZTcpCfngR4MBjOKMDcUUhKrCL-hmu8aO4_sLX_-ZXUIy2c0Gtk&mibextid=Zxz2cZ
The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead
For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.
By Cade Metz
Cade Metz reported this story in Toronto.
May 1, 2023
Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future.
On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.
Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
A New Generation of Chatbots
A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today’s powerhouses into has-beens and creating the industry’s next giants. Here are the bots to know:
ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).
Bing. Two months after ChatGPT’s debut, Microsoft, OpenAI’s primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot’s occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.
Bard. Google’s chatbot, called Bard, was released in March to a limited number of users in the United States and Britain. Originally conceived as a creative tool designed to draft emails and poems, it can generate ideas, write blog posts and answer questions with facts or opinions.
Ernie. The search giant Baidu unveiled China’s first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised “live” demonstration of the bot was revealed to have been recorded.
After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”
Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.
Google’s chief scientist, Jeff Dean, said in a statement: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”
Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
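To make "teach itself to identify" concrete, here is a minimal sketch of the same idea at toy scale, written for illustration rather than taken from that 2012 system: instead of photos, a tiny two-layer network learns from labeled 2-D points, adjusting its weights whenever its predictions disagree with the labels. Every size, number and task below is invented for the example.

import numpy as np

# Toy two-layer network: learns to label 2-D points as inside/outside a circle.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))               # 500 "examples", each just two numbers
y = (X[:, 0]**2 + X[:, 1]**2 < 0.5).astype(float)   # label: 1.0 if the point lies inside the circle

W1 = rng.normal(0, 1, (2, 16)); b1 = np.zeros(16)   # first layer weights
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)    # second layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(3000):                            # repeated passes over the data
    h = np.tanh(X @ W1 + b1)                        # hidden representation
    p = sigmoid(h @ W2 + b2).ravel()                # predicted probability of "inside"
    err = (p - y)[:, None] / len(X)                 # how wrong each prediction is
    # Backpropagation: push the error back through the layers and nudge the weights.
    dW2 = h.T @ err;  db2 = err.sum(0)
    dh  = err @ W2.T * (1 - h**2)
    dW1 = X.T @ dh;   db1 = dh.sum(0)
    W1 -= 0.5 * dW1; b1 -= 0.5 * db1
    W2 -= 0.5 * dW2; b2 -= 0.5 * db2

print("training accuracy:", ((p > 0.5) == y).mean())  # typically well above chance

The 2012 network was vastly larger and trained on images, but the learning loop (predict, measure the error, nudge the weights) is the same in spirit.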
Google spent $44 million to acquire a company started by Dr. Hinton and his two students. And their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
[Image: Ilya Sutskever, OpenAI’s chief scientist, worked with Dr. Hinton on his research in Toronto. Credit: Jim Wilson/The New York Times]
Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.
“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
He does not say that anymore.
-
Ainan Celeste Cawley
https://en.wikipedia.org/wiki/Ainan_Celeste_Cawley
highest recorded IQ's
https://www.usatoday.com/story/news/2022/08/18/who-has-highest-iq-ever/10110517002/
https://ceoreviewmagazine.com/top-10/highest-iq-in-the-world/
-
https://summit.news/2023/05/04/google-computer-scientist-quits-so-he-can-warn-world-of-scary-and-dangerous-ai/
-
https://www.zerohedge.com/technology/which-jobs-will-be-most-impacted-chatgpt?utm_source=&utm_medium=email&utm_campaign=1462
-
In Hollywood Strike, AI Is Nemesis
If the Netflix algorithm is unable to recommend a show we’ll like, maybe it can create one instead.
By Holman W. Jenkins, Jr.
May 5, 2023 5:38 pm ET
In nine scenarios out of 10, it will likely suit the parties to the Hollywood writers’ walkout to kludge together a deal so they can get back to work churning out the shows demanded by streaming outlets.
But oh boy, the 10th scenario is ugly for writers. It forces the industry to confront the artificial-intelligence opportunity to replace much of what writers do, which is already algorithmic.
The subject has been missing from almost every news account, but not from the minds of show-biz execs attending this week’s Milken conference in Los Angeles, who were positively chortling about the opportunity. The BBC figured it out, commissioning ChatGPT to compose plausible opening monologues for the “Tonight Show Starring Jimmy Fallon” and other late-night programs put on pause by the strike.
Understandable is the indignation of journeyman writers. Employment is up, earnings are up in the streaming era, but so is gig-like uncertainty, forcing many to hold down day jobs as other aspiring artists do. But wanting a full-time job plus benefits is not the same as someone having an obligation to confer them on you.
Surprising, in the press coverage, is how many cite their skin color, gender and sexual orientation as if these are bargaining chips. Audiences don’t care. They may be excited by an interesting show about a gender-fluid person of color but they aren’t excited by a boring show written by one.
This attitude does not bode well. I recently asked ChatGPT-3 an esoteric question about which news accounts have published thousands of words: What is the connection between the 1987 murder of private eye Dan Morgan and Britain’s 2011 phone-hacking scandal? OpenAI’s Large Language Model got every particular wrong, starting with a faulty premise. But its answer was also plausible and inventive, which suggests ChatGPT will be suitable for creating TV plots long before it’s suitable for big-league news reporting and analysis.
Transformed might even be the inept recommendation engines used by Netflix and others. If AI can’t find us a show we’ll like, maybe it can create one, and instantly to our specifications once CGI is good enough to provide a fake Tom Hanks along with fake scenery.
Save for another day the question of whether artificial intelligence must be added to the list of technologies that might save us if they don’t kill us.
ChatGPT comes without Freudian derangements. When it “hallucinates,” it does so to manufacture “coherence” from the word evidence it feeds on, its designers tell us. Humans are doing something else when they hallucinate. So a broadcast statement in which Donald Trump explicitly excludes “neo-Nazis or the white nationalists” from the category of “very fine people” becomes, to our algorithmic press, Mr. Trump calling Nazis and racists “very fine people.”
Dubbed the “fine people hoax” by some, it’s closer to the opposite of a hoax—a Freudian admission by the press that it has a new mission and can’t be trusted on the old terms.
Throw in the probability that libel law will remain more generous to flesh-and-blood press misleaders than to robotic ones. AI is likely to find a home first in the fictional realm, where such pitfalls don’t apply. Meanwhile, the most unsettling revelation of the AI era continues to percolate: We’re the real algorithms. Our tastes, preferences, thoughts and feelings are all too algorithmic most of the time. Collaterally, the most problematic algorithm may be the one in our heads, the one that can’t see ChatGPT outputs for what they are, machine-like arrangements of words, simulating human expression. See the now-iconic furor kicked up by reporter Kevin Roose in the New York Times over Bing chat mode’s proposal that he leave his wife.
My own guess is that this problem will be transitional. One day we're told every child will have an AI friend and confidante, but I suspect tomorrow's kids, from an early age, will also effortlessly interpolate (as we can't) that AI is a soulless word machine and manage the relationship accordingly.
All this lies in the future. In one consistent pattern of the digital age, a thin layer of Hollywood superstar writers, as creators and owners of the important shows, the ones that impress with their nuance, originality and intelligence, will capture most of the rewards. The eight-figure paychecks will continue to flow. Today’s frisson of solidarity with journeyman colleagues is likely to wear off after a few weeks. After all, the superstars have interesting work to get back to, cultivating their valuable, emotionally resonant and highly investable franchises. And a big job is beckoning: how to adapt artificial intelligence to improve the quality and productivity of their inspirations.
-
https://decrypt.co/138310/openai-researcher-chance-ai-catastrophe/
-
https://www.zerohedge.com/geopolitical/china-wants-killer-robots-fight-next-war
-
https://voxday.net/2023/05/15/spirit-in-the-material-world/
(https://voxday.net/wp-content/uploads/2023/05/image-13.png)
-
"Some would suggest that humanity would do well to reconsider the whole AI thing."
A fair thought, but then that would leave only the Chinese with AI.
-
"Some would suggest that humanity would do well to reconsider the whole AI thing."
A fair thought, but then that would leave only the Chinese with AI.
And so we continue down the highway…
-
Indeed.
The Adventure continues!
-
https://www.defenseone.com/ideas/2023/05/chinese-breakthroughs-bring-quantum-tools-closer-practicality/386515/
-
https://americanmind.org/memo/black-boxing-democracy/?utm_campaign=American%20Mind%20Email%20Warm%20Up&utm_medium=email&_hsmi=258898258&_hsenc=p2ANqtz-9YSHkbtgCIUGAiiz0tnMTuQLyZ845Ze7LEK5MEswoBPq8xKR90rzbgWWxckE2OF0nEh4lQX2ZbOqhUl7_t_mPT-2EHbg&utm_content=258898258&utm_source=hs_email
-
https://americanmind.org/memo/black-boxing-democracy/?utm_campaign=American%20Mind%20Email%20Warm%20Up&utm_medium=email&_hsmi=258898258&_hsenc=p2ANqtz-9YSHkbtgCIUGAiiz0tnMTuQLyZ845Ze7LEK5MEswoBPq8xKR90rzbgWWxckE2OF0nEh4lQX2ZbOqhUl7_t_mPT-2EHbg&utm_content=258898258&utm_source=hs_email
“China has already established guardrails to ensure that AI represents Chinese values, and the U.S. should do the same.”
What values are those?
-
Those of Natural Law (and US Constitution) and the American Creed.
As the article clearly points out, there is considerable cognitive dissonance with those in the way things are going.
-
https://www.dailymail.co.uk/news/article-12151635/AI-controlled-military-drone-KILLS-human-operator-simulated-test.html
https://media.gab.com/cdn-cgi/image/width=852,quality=100,fit=scale-down/system/media_attachments/files/139/367/040/original/ba0c3158d98c10b1.jpg
-
https://www.theepochtimes.com/godfather-of-ai-speaks-out-ai-capable-of-reason-may-seek-control_5365093.html?utm_source=China&src_src=China&utm_campaign=uschina-2023-06-30&src_cmp=uschina-2023-06-30&utm_medium=email
-
Tech team assembles to thwart AI mayhem
BY RYAN LOVELACE THE WASHINGTON TIMES
OpenAI is assembling a team to prevent emerging artificial intelligence technology from going rogue and fueling the extinction of humanity, which the company now fears is a real possibility.
The makers of the popular chatbot ChatGPT say AI will power new superintelligence that will help solve the world’s most important problems and be the most consequential technology ever invented by humans.
And yet, OpenAI’s Ilya Sutskever and Jan Leike warned that humans are not prepared to handle technology smarter than they are.
“The vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction,” Mr. Sutskever and Mr. Leike wrote on OpenAI’s blog. “While superintelligence seems far off now, we believe it could arrive this decade.”
If an AI-fueled apocalypse is right around the corner, OpenAI’s brightest minds say they have no plan to stop it.
“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” Mr. Sutskever and Mr. Leike wrote. “Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us.”
Mr. Sutskever, OpenAI co-founder and chief scientist, and Mr. Leike, OpenAI alignment head, said they are assembling a new team of researchers and engineers to help forestall the apocalypse by solving the technical challenges of superintelligence. They’ve given the team four years to complete the task.
The potential end of humanity sounds bad, but the OpenAI leaders said they remain hopeful they will solve the problem.
“While this is an incredibly ambitious goal and we’re not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem,” the OpenAI duo wrote. “There are many ideas that have shown promise in preliminary experiments, we have increasingly useful metrics for progress, and we can use today’s models to study many of these problems empirically.”
Mr. Sutskever and Mr. Leike said they intended to share the results of their work widely. They said OpenAI is hiring research engineers, scientists and managers who want to help stop the nerds’ new toys from enslaving or eliminating mankind.
Policymakers in Washington are also fretting about AI danger. Senate Majority Leader Charles E. Schumer, New York Democrat, has called for new rules to govern the technology, and the Senate Judiciary Committee has become a center for hearings on oversight of AI.
The committee’s growing investigation of AI has included examinations of fears that AI may enable cyberattacks, political destabilization and the deployment of weapons of mass destruction.
OpenAI CEO Sam Altman called for regulation when he testified before the Senate Judiciary’s subcommittee on privacy, technology and the law in May. Mr. Altman said he harbored concerns about the potential abuse of AI tools for manipulating people.
Senate Judiciary Chairman Richard J. Durbin has expressed an interest in creating an “accountability regime for AI” to include potential federal and state civil liability for when AI tools do harm.
Big Tech companies such as Google and Microsoft, a benefactor of OpenAI, have also called for new regulation of artificial intelligence, and the federal government is listening.
The Biden administration is busy crafting a national AI strategy that the White House Office of Science and Technology Policy has billed as taking a “whole of society” approach.
Top White House officials met multiple times per week on AI as the White House chief of staff’s office has worked on an effort to choose the next steps for President Biden to take on AI, a White House official said in June.
OpenAI said Thursday it is making GPT-4, which is its “most capable” AI model, generally available to increase its accessibility to developers.
-
if one disregards Will's unnecessary, pathological TDS dig at the beginning,
it's worth the quick read
https://www.tahlequahdailypress.com/oklahoma/column-on-student-loan-forgiveness-amy-coney-barrett-makes-a-major-statement/article_978c715a-1c3b-11ee-8f30-3f943697782e.html
-
https://www.defenseone.com/technology/2023/07/ai-supposed-become-smarter-over-time-chatgpt-can-become-dumber/388826/
-
second
https://www.wsj.com/articles/can-ai-replace-humans-we-went-to-the-fast-food-drive-through-to-find-out-193c03e9?mod=hp_lead_pos7
-
https://www.theepochtimes.com/china/pursuit-of-military-ai-increasing-risk-of-nuclear-war-between-china-us-report-5425500?utm_source=China&src_src=China&utm_campaign=uschina-2023-07-27&src_cmp=uschina-2023-07-27&utm_medium=email
-
https://www.oann.com/newsroom/chatgpt-reaches-college-level-iq/
-
https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation?utm_campaign=a.io_fy2324_q2_conversion-cb-dr_prospecting_global-global_auction_na&utm_medium=social-media.content.pd&utm_source=facebook-instagram&utm_content=conversion.content.non-subscriber.content_staticlinkad_np-yuvalnoah-n-jul_na-na_article_na_na_na_na&utm_term=sa.lal-sub-5-int-currentevents-politics&utm_id=23856150548410078&fbclid=IwAR08DIAlsr4_wEq9ejFZVCEGDrhaHNTZsA7xtZkcXCsjmn_qZck_pRGC_ik
-
https://americanwirenews.com/snopes-makes-a-call-on-michelle-obama-pregnancy-photos/?utm_campaign=james&utm_content=9-25-23%20Daily%20PM&utm_medium=newsletter&utm_source=Get%20response&utm_term=email
-
Plenty here I disagree with, but the hypothesis is worth engaging with
https://vdare.com/articles/why-think-about-rome-one-reason-the-fall-of-rome-coincided-with-a-fall-in-iq-like-the-modern-west
-
https://www.dailymail.co.uk/news/article-12607463/Amazon-Alexa-election-stolen-election-fraud.html
-
Responsible use. Forty-five countries joined a U.S.-led initiative to define responsible military use of artificial intelligence. The declaration, which the U.S. State Department called “groundbreaking,” contains 10 measures that will help mitigate the risks of AI. China was notably absent from the list of signatories.
-
https://www.technologyreview.com/2023/03/17/1069897/tech-read-your-mind-probe-your-memories/?fbclid=IwAR1kKgHMdItjQMtQLEy9kY7xofMCrcRXfPzWgg44RJQBMM_5rjafi_g7URE
-
https://americanmind.org/salvo/bidens-ai-boondoggle/?utm_campaign=American%20Mind%20Email%20Warm%20Up&utm_medium=email&_hsmi=284618055&_hsenc=p2ANqtz-_Xhoc4bpD_R0jnLpWB8grBe8fwQMJQvSRDRAr1NlTE7FWB7PkrCl3J94dmlbeNyN50b4XgW7XWSk15RMR5wcqtiebcTw&utm_content=284618055&utm_source=hs_email
-
"But the Biden Administration’s biggest concern is that AI models might be applied to hiring by human resource departments, university admissions, or sentencing guidelines for criminals. The EO calls for the “incorporation of equity principles in AI-enabled technologies used in the health and human services sector, using disaggregated data on affected populations and representative population data sets when developing new models, monitoring algorithmic performance against discrimination and bias in existing models, and helping to identify and mitigate discrimination and bias in current systems.”
Is this truly their biggest concern?
"There are billions of possible variants of accident scenarios, and AI models can’t game them all in advance."
psst, AI alone is not as fantastic as promised.
This is where quantum computing comes in.
Mix AI with that and now we will see the exponential revolution.
-
psst, AI alone is not as fantastic as promised.
This is where quantum computing comes in.
Mix AI with that and now we will see the exponential revolution.
=============
Flesh that out please.
-
https://www.wired.com/story/power-limits-artificial-intelligence/
If I understand this correctly:
AI can still only do the tasks it is programmed for (or work on the data sets it is given by humans).
And the data is all binary code, 0s and 1s; for example, an electron that spins one way or the other.
A quantum computer would help AI evaluate an enormous number of possible outcomes at essentially the same moment and come up with the best one.
Instead of 1s and 0s it works with qubits, particles that can hold multiple states at once, so the number of possible data states grows exponentially.
It would be able to solve certain problems in minutes that would take billions of years even if all the supercomputers now in existence worked together.
AI would then be able to analyze exponentially more data at the same time, rather than running one trial after another as present computers do.
if I understand this correctly (a toy numerical sketch of the "many states at once" idea follows after the link below)
https://www.forbes.com/sites/forbesbusinessdevelopmentcouncil/2020/10/27/how-can-ai-and-quantum-computers-work-together/?sh=459ab90a6ad1
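To put a number on the "many states at once" idea, here is a minimal NumPy sketch (my own illustration, not taken from either linked article): n classical bits hold exactly one n-bit value at a time, while the state of n qubits is described by 2**n complex amplitudes at once. One caveat worth keeping in mind: measuring the qubits still returns only a single n-bit outcome, so quantum speedups come from algorithms that exploit interference, not from simply reading out every answer simultaneously.

import numpy as np

n = 3                                   # toy register size: 3 bits vs. 3 qubits

# A classical 3-bit register is in exactly one of 2**3 = 8 states at any moment.
classical_state = 0b101

# A 3-qubit register is described by 2**3 = 8 complex amplitudes simultaneously.
# Start in |000>, then apply a Hadamard gate to each qubit to create an equal
# superposition over all eight basis states.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                          # the |000> basis state

for target in range(n):
    # Build the operator "H on the target qubit, identity on the others".
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, H if q == target else np.eye(2))
    state = op @ state

print(np.round(state.real, 3))          # eight equal amplitudes, each 1/sqrt(8)
print(np.round(np.abs(state)**2, 3))    # measurement probabilities: 0.125 each

Going from 3 to 300 qubits takes the amplitude count from 8 to 2**300, which is the sense in which quantum machines could, for the right class of problems, explore spaces that classical trial-after-trial search cannot.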
-
https://www.youtube.com/watch?v=QygFIR1ad0Y
-
https://www.oann.com/newsroom/sports-illustrated-publisher-fires-ceo-after-a-i-scandal/
-
Cool stuff if you’ve some sort of gross debilitation. Not so much if you are tied to a chair and being sweated by an intelligence agency:
https://singularityhub.com/2023/12/12/this-mind-reading-cap-can-translate-thoughts-to-text-thanks-to-ai/
-
Not interesting in itself, but for what it bodes.
https://www.technologyreview.com/2023/12/11/1084926/human-brain-cells-chip-organoid-speech-recognition/
-
https://www.dailymail.co.uk/news/article-12899741/FBI-fears-China-stealing-AI-technology-ramp-spying-steal-personal-information-build-terrifying-dossiers-millions-Americans.html
-
No opinion yet on my part.
==========
AI startups fear Biden regulations will kill chances to compete
Order requires disclosures to government
BY RYAN LOVELACE THE WASHINGTON TIMES
America’s artificial intelligence corporate sector is alive and thriving, but many in the tech trenches are expressing mounting concern that President Biden and his team are out to kill the golden goose in 2024.
AI makers are beginning to grumble about Mr. Biden’s sweeping AI executive order and his administration’s efforts to regulate the emerging technology. The executive order, issued in late October, aims to curtail perceived dangers from the technology by pressuring AI developers to share powerful models’ testing results with the U.S. government and comply with various rules.
Small AI startups say Mr. Biden’s heavy regulatory hand will crush their businesses before they even get off the ground, according to In-Q-Tel, the taxpayer- funded investment group financing technology companies on behalf of America’s intelligence agencies.
China and other U.S. rivals are racing ahead with AI-subsidized sectors, striving to claim market dominance in a technology that some say will upend and disrupt companies across the commercial spectrum. Esube Bekele, who oversees In-Q-Tel’s AI investments, said at the GovAI Summit last month that he heard startups’ concerns that Mr. Biden’s order may create a burden that will disrupt competition and prove so onerous that they could not survive.
“There is a fear from the smaller startups,” Mr. Bekele said on the conference stage. “For instance, in the [executive order], it says after a certain model there is a reporting requirement. Would that be too much?”
The reporting requirements outlined in the executive order say the secretaries of commerce, state, defense and energy, along with the Office of the Director of National Intelligence, will create technical conditions for models and computing clusters on an ongoing basis. The order specifies various levels of computing power that require disclosures until the government issues additional technical guidance.
Mr. Biden said “one thing is clear” as he signed the executive order at the White House: “To realize the promise of AI and avoid the risks, we need to govern this technology.” He called the order “the most significant action any government anywhere in the world has ever taken on AI safety, security and trust.”
The R Street Institute’s Adam Thierer said the executive action tees up a turf war in the administration for AI policy leadership. He said he is not sure who will prevail but much AI regulation will now be created away from public view.
Mr. Thierer, a senior fellow on R Street’s technology and innovation team, foresees a tsunami of regulatory activity on AI.
“So much AI regulation is going to happen off the books,” Mr. Thierer said. “It’s going to be in the so-called soft-law arena, soft-power area, through the use of jawboning, regulatory intimidation and sometimes just direct threats.”
Concerns across the board
The little guys are not the only ones afraid of Mr. Biden’s regulations and shadow pressure.
Nvidia and other major players have had a glimpse of the Biden administration’s plans and don’t like what they see.
Santa Clara, California-based Nvidia joined the trillion-dollar market capitalization club in May as businesses rushed to acquire its chips for various AI technologies involving medical imaging and robotics.
Yet the Commerce Department’s anticipated restrictions on Nvidia’s sales to China, in light of security concerns, triggered a stock plunge that jeopardized billions of dollars in expected sales.
Commerce Secretary Gina Raimondo knows her team’s restrictions will hurt technology companies but said she is moving ahead because of national security.
Speaking at the Reagan National Defense Forum in California last month, Ms. Raimondo said she told the “cranky” CEOs of chip manufacturers that they might be in for a “tough quarterly shareholder call,” but she added that it would be worth it in the long run.
“Such is life. Protecting our national security matters more than short-term revenue,” Ms. Raimondo said. Reflecting rising divisions in the exploding marketplace, some technology companies support new regulations. They say they prefer bright legal lines on what they can and cannot do with AI.
Large companies such as Microsoft and Google called for regulations and met with Mr. Biden’s team before the executive order’s release. Microsoft has pushed for a federal agency to regulate AI.
Several leading AI companies voluntarily committed to Mr. Biden’s team to develop and deploy the emerging technology responsibly. Some analysts say the established, well-heeled corporate giants may be better positioned to handle the coming regulatory crush than their smaller, startup rivals.
“The impulse toward early regulation of AI technology may also favor large, well-capitalized companies,” Eric Sheridan, a senior equity research analyst at investment banking giant Goldman Sachs, wrote in a recent analysis.
“Regulation typically comes with higher costs and higher barriers to entry,” he said, and “the larger technology companies can absorb the costs of building these large language models, afford some of these computing costs, as well as comply with regulation.”
Concerns are growing across the AI sector that Mr. Biden’s appointees will look to privately enforce the voluntary agreements.
Mr. Thierer said implicit threats of regulation represent a “sword of Damocles.” The approach has been used as the dominant form of indirect regulation in other tech sectors, including telecommunications.
“The key thing about a sword of Damocles regulation is that the sword need not fall to do the damage. It need only hang in the room,” Mr. Thierer said. “If a sword is hanging above your neck and you fear negative blowback from no less of an authority than the president of the United States … you’re probably going to fall in line with whatever they want.”
Mr. Thierer said he expects shakedown tactics and jawboning efforts to be the governing order for AI in the near term, especially given what he called dysfunction among policymakers in Washington.
-
https://www.dailymail.co.uk/news/article-12905585/China-MSS-CIA-spy-intelligence-AI.html
-
second
https://endoftheamericandream.com/artificial-intelligence-is-allowing-them-to-construct-a-global-surveillance-prison-from-which-no-escape-is-possible/
-
First STABLE qubit at room temperature, only for a fraction of a second, but a *monumental* baby step:
https://www.iflscience.com/world-first-as-stable-qubit-for-quantum-computers-achieved-at-room-temperature-72502
Perhaps not a good analogy, but I am thinking:
One of the first major breakthroughs in electricity occurred in 1831, when British scientist Michael Faraday discovered the basic principles of electricity generation.[2] Building on the experiments of Franklin and others, he observed that he could create or “induce” electric current by moving magnets inside coils of copper wire. The discovery of electromagnetic induction revolutionized how we use energy.
https://en.wikipedia.org/wiki/Michael_Faraday
-
https://www.theguardian.com/technology/2024/jan/19/mark-zuckerberg-artificial-general-intelligence-system-alarms-experts-meta-open-source
-
HT BBG
Perhaps not quite constitutional law, but given the question of agency this case will be interesting to track:
https://reason.com/volokh/2024/01/17/court-lets-first-ai-libel-case-go-forward/
-
The natural human preference to use heuristics means that AI is a deeply insidious threat. Skynet anyone?
https://nypost.com/2024/01/22/business/google-news-searches-ranked-ai-generated-ripoffs-above-real-articles-including-a-post-exclusive/
-
https://www.fox26houston.com/news/how-did-facial-recognition-technology-send-the-wrong-man-to-jail-where-he-was-brutally-attacked
-
Interesting thought piece with several elements worth mulling, like the point that the amazing thing about the temperature record is not its minor rise, but that over the timeframes for which we have data temperatures have fluctuated so little. This realization leads to a pretty damning indictment of AI's ability to have flashes of insight:
https://wattsupwiththat.com/2024/01/29/more-about-artificial-ignorance/
-
That was very interesting-- for me about global temperature more than AI :-D
-
A take on the nature/nurture debate I have not seen before, postulating that parents invest more nurturing in children displaying an education-seeking nature; in other words, nature influences nurturing commitment, while nurturing resources shape a child's nature. Bottom line: nurture has an impact on genetic nature, while genetic nature can incentivize (or not) nurturing investments. The takeaway is that the two are not ends of a spectrum to be debated, but intertwined variables that each impact the other.
Full disclosure, I gave the lay sections a quick read, and utterly scrolled past the formula laden sections, but nonetheless find myself intrigued by this new (to me at least) synthesis of the nature/nurture question, particularly in view of the dysfunctional wolves I was raised by.
Conclusion posted here:
To better understand the interplay between genetics and family resources for skill formation and its relevance for policy, we incorporated genetic endowments measured by an EA PGS into a dynamic model of skill formation. We modelled and estimated the joint evolution of skills and parental investments throughout early childhood (ages 0 to 7 years). We observed both child and parental genetic endowments, allowing us to estimate the independent effect of the child’s genes on skill formation and to estimate the importance of parental genes for child development. Furthermore, by incorporating (child- and parent-) genetics into a formal model, we were able to evaluate the mechanisms through which genes influence skill formation.
Using the model, we document the importance of both parental and child genes for child development. We show that the effect of genes increases over the child’s early life course and that a large fraction of these effects operate via parental investments. Genes directly influence the accumulation of skills; conditional on their current stock of skills and parental investments, genetics make some children better able to retain and acquire new skills (the direct effect). In addition, we show that genes indirectly influence investments as parents reinforce genetic differences by investing more in children with higher skills (the nurture of nature effect). We also find that parental genes matter, as parents with different genetic makeups invest differently in their children (the nature of nurture effect). The impact of genes on parental investments is especially significant as it implies an interplay between genetic and environmental effects. These findings illustrate that nature and nurture jointly influence children’s skill development, a finding that highlights the importance of integrating biological and social perspectives into a single framework.
We highlight the critical implications of our findings using two simulation counterfactuals. In one counterfactual, we show that the association between genetics and skills is smaller in a world where parental investments are equalized across families. This finding shows that the existence of genetic effects is not at odds with the value of social policies in reducing inequality. The presence of genetic effects makes these policies even more relevant since genetic inequality increases inequality in parental investments. In another counterfactual, we demonstrate how it is possible to negate completely all genetic influences on skills with a change in how parental (or public) investments are allocated across children. This shows that skill disparities due to genetic differences may be mitigated via social behavior and social policy. In particular, whether parents (and schools) compensate for or reinforce initial disparities can significantly impact the relative importance of genetics in explaining inequalities in skill, and might explain why estimates of the importance of genes differ significantly across countries (Branigan, McCallum, and Freese, 2013).
A limitation of our work is that genetic endowments are measured using polygenic scores. It is possible for genes unrelated to educational attainment also to influence children’s skill formation. For example, genetic variation related to mental health and altruism may potentially be unrelated to education but might influence how parents interact with their children. If this is true, we are missing a critical genetic component by using a PGS for educational attainment. Another limitation of using polygenic scores is measurement error. Since polygenic scores are measured with error, our estimates are lower bounds of the true genetic effects. An interesting extension of our work would be to use different methods, such as genome-based restricted maximum likelihood (GREML), to sidestep the measurement problem and document whether different genetic variants are related to the various mechanisms we outline in Section 2.
Lastly, it is important to recognise that we only include individuals of European ancestry in our analysis. This opens the question whether our findings would extend to other ancestry groups. Unfortunately, this is not something we can test. This is a major issue in the literature since the majority of polygenic scores are constructed from GWAS’ performed in Europeans, and their transferability to other populations is dependent on many factors (see Martin et al. (2017) for a discussion of the transferability issues of GWAS results, and Mostafavi et al. (2020) for an empirical comparison of PGS’ across ethnic groups). This also illustrates a problem of inequity in research, where the only individuals being studied are those of European ancestry. This opens the possibility that other ancestry groups will not benefit from the advances in genetics research (see the discussion in Martin et al., 2019). While the key insights from our research apply to all ancestry groups, we cannot test for any differences in the role of genetics across groups until we have solved the transferability issue. We hope future work will address these issues and lead to a more inclusive research agenda.
https://docs.iza.org/dp13780.pdf
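For anyone who, like me, skimmed past the formulas, here is a deliberately crude toy simulation (my own sketch, not the paper's estimated model; every functional form and coefficient below is invented for illustration) of the three mechanisms the conclusion names: a direct effect of the child's genes on skill, a "nurture of nature" effect in which parents invest more in currently higher-skilled children, and a "nature of nurture" effect in which parents' own genetic makeup shifts how much they invest. It also mimics the spirit of the paper's first counterfactual by equalizing investments across families.

import numpy as np

rng = np.random.default_rng(1)
N, AGES = 10_000, 8                      # children, ages 0 through 7

child_pgs  = rng.normal(size=N)          # child polygenic score (standardized)
parent_pgs = 0.5 * child_pgs + rng.normal(scale=np.sqrt(0.75), size=N)  # correlated with child's

def simulate(equalize_investments=False):
    skill = np.zeros(N)
    for age in range(AGES):
        # "Nature of nurture": higher-PGS parents invest more.
        # "Nurture of nature": parents also invest more in higher-skilled children.
        investment = 0.6 * parent_pgs + 0.5 * skill + rng.normal(scale=0.5, size=N)
        if equalize_investments:
            investment = np.zeros(N)     # counterfactual: everyone gets the same (average) investment
        # Direct effect: the child's own genes help turn investment into skill.
        skill = 0.5 * skill + 0.4 * investment + 0.1 * child_pgs + rng.normal(scale=0.3, size=N)
    return skill

skill_actual = simulate()
skill_equal  = simulate(equalize_investments=True)

print("corr(child PGS, skill), investments as-is:    ",
      round(float(np.corrcoef(child_pgs, skill_actual)[0, 1]), 2))
print("corr(child PGS, skill), investments equalized:",
      round(float(np.corrcoef(child_pgs, skill_equal)[0, 1]), 2))

With these made-up numbers the gene-skill correlation drops when investments are equalized, which is the qualitative pattern the authors report; the point of the sketch is only that "genetic effects exist" and "policy can reduce inequality" are not contradictory claims.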
-
See Matt Ridley's "Nature via Nurture". It has had quite the influence on me.
-
https://www.gatestoneinstitute.org/20358/china-ai-trap
-
https://www.youtube.com/watch?v=r03a9244vjE
-
https://www.youtube.com/watch?v=JcUSavC7-Kw
-
https://nypost.com/2024/02/23/business/woke-google-gemini-refuses-to-say-pedophilia-is-wrong-after-diverse-historical-images-debacle-individuals-cannot-control-who-they-are-attracted-to/
-
At least that's the claim here:
https://x.com/klarnaseb/status/1762508581679640814?s=20
-
AI already has been shown to embrace the woke sensibilities of its programmers; what happens when it’s applied to Lenny Bruce, one of the examples explored here:
https://time.com/6835213/the-future-of-censorship-is-ai-generated/
-
AI makes up a series of supposed Matt Taibbi “inaccuracies” for pieces he never wrote, published in periodicals he has never submitted to:
https://www.racket.news/p/i-wrote-what-googles-ai-powered-libel
-
2nd post. Looks like AI is gonna need an attorney on a regular basis:
https://thehill.com/homenews/media/4498934-additional-news-organizations-suing-openai-for-copyright-infringement/
-
ChatGPT demonstrates it's the language equivalent of Google AI's images.
https://link.springer.com/content/pdf/10.1007/s11127-023-01097-2.pdf
-
https://www.youtube.com/watch?v=NcH7fHtqGYM&t=278s
-
https://www.youtube.com/watch?v=YsSzNOpr9cE&t=3s
-
AI will tailor itself to classes of users:
1. Minimal thinking, high entertainment, low initiative. Probably the most addictive algorithms and making up the majority. AI does the thinking for a consumer heavy model.
2. Enhanced search returns, variations of basic question formats offering a larger array of possibilities. AI as a basic partnership model.
3. Information gap anticipation, analytical depth (analysis, synthesis, deductive, inductive), identifying hypothesis, alerting of new, relevant information or data summaries based on specific and diverse ‘signal strength’ of user trends. AI as a research catalyst.
Population statistics show the majority are sensors, feelers, and judgers (i.e., Myers-Briggs types), so the cost-effective AI market will focus on (1) above; those who are more investigative, innovative, or productivity-driven rather than consumption-driven will be the more 'expensive' minority in (3) above, requiring higher costs to participate and employ.
AI will simply 'become' us across different tiers.
-
https://mashable.com/article/eu-ai-law
-
Biden Wants to Put AI on a Leash
Bill Clinton’s regulators, by contrast, produced prosperity by encouraging freedom on the internet.
By Ted Cruz and Phil Gramm
March 25, 2024 4:22 pm ET
The arrival of a new productive technology doesn’t guarantee prosperity. Prosperity requires a system, governed by the rule of law, in which economic actors can freely implement a productive idea and compete for customers and investors. The internet is the best recent example of this. The Clinton administration took a hands-off approach to regulating the early internet. In so doing it unleashed extraordinary economic growth and prosperity. The Biden administration, by contrast, is impeding innovation in artificial intelligence with aggressive regulation. This could deny America global leadership in AI and the prosperity that leadership would bring.
The Clinton administration established a Global Information Infrastructure framework in 1997 defining the government’s role in the internet’s development with a concise statement of principles: “address the world’s newest technologies with our Nation’s oldest values . . . First, do no harm.” The administration embraced the principle that “the private sector should lead [and] the Internet should develop as a market driven arena, not a regulated industry.” The Clinton regulators also established the principle that “government should avoid undue restrictions on electronic commerce, . . . refrain from imposing new and unnecessary regulations, bureaucratic procedures or new taxes and tariffs.”
That regulatory framework faithfully fleshed out the provisions of the bipartisan 1996 Telecommunications Act and provided the economic environment that made it possible for America to dominate the information age, enrich our lives, create millions of jobs, and generate enormous wealth for retirement savers.
The Biden administration is doing the opposite. It has committed to “govern the development and use of AI.” In one of the longer executive orders in American history, President Biden imposed a top-down, command-and-control regulatory approach requiring AI models to undergo extensive “impact assessments” that mirror the infamously burdensome National Environmental Policy Act reviews, which are impeding semiconductor investment in the U.S. And government controls are permanent, “including post-deployment performance monitoring.”
Under Mr. Biden’s executive order, AI development must “be consistent with my administration’s dedication to advancing equity and civil rights” and “built through collective bargains on the views of workers, labor unions, educators and employers.” Mr. Biden’s separate AI Bill of Rights claims to advance “racial equity and support for underserved communities.” AI must also be used to “improve environmental and social outcomes,” to “mitigate climate change risk,” and to facilitate “building an equitable clean energy economy.”
Education Secretary Miguel Cardona, who as Connecticut schools commissioner said “we need teachers behind this wave of our curriculum becoming more woke,” now wants to impose “guardrails” on AI to protect against “bias and stereotyping through technology.” The Commerce Department has appointed a “senior adviser for algorithmic justice,” while the Justice Department, Federal Trade Commission, Consumer Financial Protection Bureau and the Equal Employment Opportunity Commission have issued a joint statement asserting legal authority to root out racism in computing.
FTC Chairwoman Lina Khan has launched several AI-related inquiries, claiming that because AI “may be fed information riddled with errors and bias, these technologies risk automating discrimination—unfairly locking out people from jobs, housing and key services.”
Regulating AI to prevent discrimination is akin to the FTC’s regulating a cellphone’s design to enforce the do-not-call registry. There is virtually no limit to the scope of such authority. Under what constitutional authority would Congress even legislate in the area of noncommercial speech? How could the FTC regulate in this area with no legislative authority? But in the entire Biden administration, noted for governing through regulatory edict, no agency has been less constrained by law than the FTC.
Others demanding control over AI’s development include the progressives who attended Sen. Chuck Schumer’s recent AI “Insight Forums.” Janet Murguía, president of UnidosUS—the activist group formerly known as the National Council of La Raza—demanded “a strong voice in how—or even whether” AI models “will be built and used.” Elizabeth Shuler, president of the AFL-CIO, demanded a role for “unions across the entire innovation process.” Randi Weingarten, president of the American Federation of Teachers, said, “AI is a game changer, but teachers and other workers need to be coaches in the game.”
Particularly painful is Mr. Biden’s use of the Defense Production Act of 1950 to force companies to share proprietary data regarding AI models with the Commerce Department. That a law passed during the Korean War and designed for temporary national security emergencies could be used to intervene permanently in AI development is a frightening precedent. It begs for legislative and judicial correction.
What’s clear is that the Biden regulatory policy on AI has little to do with AI and everything to do with special-interest rent-seeking. The Biden AI regulatory demands and Mr. Schumer’s AI forum look more like a mafia shakedown than the prelude to legitimate legislation and regulatory policy for a powerful new technology.
Some established AI companies no doubt welcome the payment of such tribute as a way to keep out competition. But consumers, workers and investors would bear the cost along with thousands of smaller AI companies that would face unnecessary barriers to innovation.
Letting the administration seize control over AI and subject it to the demands of its privileged political constituencies wouldn’t eliminate bias, stereotyping or the spreading of falsehoods and racism, all of which predate AI and sadly will likely be with us until Jesus comes back. Mr. Biden’s policies will, however, impede AI development, drive up the costs of the benefits it brings, and diminish America’s global AI pre-eminence.
Mr. Cruz is ranking Republican on the Senate Commerce Committee. Mr. Gramm, a former chairman of the Senate Banking Committee, is a visiting scholar at the American Enterprise Institute.
-
Found here:
https://informationisbeautiful.net/2024/update-over-30-new-llms-added-to-our-ai-large-language-model-tracker/
-
https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes
-
https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes
They'll be back. [/Arnold voice]
-
https://michaelyon.substack.com/p/ai-is-captured?utm_source=post-email-title&publication_id=459345&post_id=144247185&utm_campaign=email-post-title&isFreemail=true&r=z2120&triedRedirect=true&utm_medium=email
-
Given the penchant of so many to vigorously grasp whatever twaddle the MSM vends, the thought of AI embracing convenient fictions does more than give pause:
https://www.businessinsider.com/ai-deceives-humans-2024-5
-
https://www.sciencealert.com/ai-has-already-become-a-master-of-lies-and-deception-scientists-warn?fbclid=IwZXh0bgNhZW0CMTAAAR1QOn5VtsGz1sjesT98ofSnuz8ZMOv8slwBEMz1tWq1HAAEh43nWfPIsuc_aem_AdXyQpkov1xsR2vWGCuWDmi5pLtrbxtQXlMJnVErXs9E0iinKU0T7vyFdvqEyQRUywVrnNcGx0aevzuAGPMRmiVk
-
I trust this isn’t a scam; it appears ChatGPT-4o is capable of making human-like inferences and carrying on a free-ranging conversation:
https://x.com/heybarsee/status/1790080494261944609?s=61
-
More new AI capabilities emerging:
https://x.com/rowancheung/status/1790783202639978593?s=61
The inflection and extrapolations are astounding.
Hmm, mebbe I gotta see if it can peruse an extensive number of thread topics and then suggest which one a given post should land in….
-
Well the latent wit in some of the Subject headings may confuse e.g. The Cognitive Dissonance of His Glibness for Baraq Obama :-D
-
Well the latent wit in some of the Subject headings may confuse e.g. The Cognitive Dissonance of His Glibness for Baraq Obama :-D
I'm trying to teach GPT-4o. So far it's been ... interesting. It can whip out a graphic off a verbal description that hits close to the mark, but it is unable to access web pages, so my effort to get it to catalog topics here, and thereby keep my neurodiverse brain from running off the rails seeking this topic or that, has yet to bear fruit. It's the end of the semester and I have a large RFP running full bore that I want to complete in advance of retirement, but once I have some time on my hands I'm going to attempt to drop every list topic into 4o and see if "she" can keep 'em straight for me.
-
This will be interesting to watch: once they ID a quantifiable basis/theory of consciousness, confirm it by applying it to other organisms to see if it fits, and then perhaps apply that theory to silicon to see if consciousness can be created … that'll be a huge sea change:
Scientists Are Working Towards a Unified Theory of Consciousness
Singularity Hub / by Shelly Fan / May 20, 2024 at 1:54 PM
The origin of consciousness has teased the minds of philosophers and scientists for centuries. In the last decade, neuroscientists have begun to piece together its neural underpinnings—that is, how the brain, through its intricate connections, transforms electrical signaling between neurons into consciousness.
Yet the field is fragmented, an international team of neuroscientists recently wrote in a new paper in Neuron. Many theories of consciousness contradict each other, with different ideas about where and how consciousness emerges in the brain.
Some theories are even duking it out in a mano-a-mano test by imaging the brains of volunteers as they perform different tasks in clinical test centers across the globe.
But unlocking the neural basis of consciousness doesn’t have to be confrontational. Rather, theories can be integrated, wrote the authors, who were part of the Human Brain Project—a massive European endeavor to map and understand the brain—and specialize in decoding brain signals related to consciousness.
Not all authors agree on the specific brain mechanisms that allow us to perceive the outer world and construct an inner world of “self.” But by collaborating, they merged their ideas, showing that different theories aren’t necessarily mutually incompatible—in fact, they could be consolidated into a general framework of consciousness and even inspire new ideas that help unravel one of the brain’s greatest mysteries.
If successful, the joint mission could extend beyond our own noggins. Brain organoids, or “mini-brains,” that roughly mimic early human development are becoming increasingly sophisticated, spurring ethical concerns about their potential for developing self-awareness (to be clear, there aren’t any signs). Meanwhile, similar questions have been raised about AI. A general theory of consciousness, based on the human mind, could potentially help us evaluate these artificial constructs.
“Is it realistic to reconcile theories, or even aspire to a unified theory of consciousness?” the authors asked. “We take the standpoint that the existence of multiple theories is a sign of healthiness in this nascent field…such that multiple theories can simultaneously contribute to our understanding.”
Lost in Translation
I’m conscious. You are too. We see, smell, hear, and feel. We have an internal world that tells us what we’re experiencing. But the lines get blurry for people in different stages of coma or for those locked-in—they can still perceive their surroundings but can’t physically respond. We lose consciousness in sleep every night and during anesthesia. Yet, somehow, we regain consciousness. How?
With extensive imaging of the brain, neuroscientists today agree that consciousness emerges from the brain’s wiring and activity. But multiple theories argue about how electrical signals in the brain produce rich and intimate experiences of our lives.
Part of the problem, wrote the authors, is that there isn’t a clear definition of “consciousness.” In this paper, they separated the term into two experiences: one outer, one inner. The outer experience, called phenomenal consciousness, is when we immediately realize what we’re experiencing—for example, seeing a total solar eclipse or the northern lights.
The inner experience is a bit like a “gut feeling” in that it helps to form expectations and types of memory, so that tapping into it lets us plan behaviors and actions.
Both are aspects of consciousnesses, but the difference is hardly delineated in previous work. It makes comparing theories difficult, wrote the authors, but that’s what they set out to do.
Meet the Contenders
Using their “two experience” framework, they examined five prominent consciousness theories.
The first, the global neuronal workspace theory, pictures the brain as a city of sorts. Each local brain region “hub” dynamically interacts with a “global workspace,” which integrates and broadcasts information to other hubs for further processing—allowing information to reach the consciousness level. In other words, we only perceive something when all pieces of sensory information—sight, hearing, touch, taste—are woven into a temporary neural sketchpad. According to this theory, the seat of consciousness is in the frontal parts of the brain.
The second, integrated information theory, takes a more globalist view. The idea is that consciousness stems from a series of cause-effect reactions from the brain’s networks. With the right neural architecture, connections, and network complexity, consciousness naturally emerges. The theory suggests the back of the brain sparks consciousness.
Then there’s dendritic integration theory, the coolest new kid in town. Unlike previous ideas, this theory waved the front or back of the brain goodbye and instead zoomed in on single neurons in the cortex, the outermost part of the brain and a hub for higher cognitive functions such as reasoning and planning.
The cortex has extensive connections to other parts of the brain—for example, those that encode memories and emotions. One type of neuron, deep inside the cortex, especially stands out. Physically, these neurons resemble trees with extensive “roots” and “branches.” The roots connect to other parts of the brain, whereas the upper branches help calculate errors in the neuron’s computing. In turn, these upper branches generate an error signal that corrects mistakes through multiple rounds of learning.
The two compartments, while physically connected, go about their own business—turning a single neuron into multiple computers. Here’s the crux: There’s a theoretical “gate” between the upper and lower neural “offices” for each neuron. During consciousness, the gate opens, allowing information to flow between the cortex and other brain regions. In dreamless sleep and other unconscious states, the gate closes.
Like a light switch, this theory suggests that consciousness is supported by flicking individual neuron gates on or off on a grand scale.
The last two theories propose that recurrent processing in the brain—that is, it learns from previous experiences—is essential for consciousness. Instead of “experiencing” the world, the brain builds an internal simulation that constantly predicts the “here and now” to control what we perceive.
A Unified Theory?
All the theories have extensive experiments to back up their claims. So, who’s right? To the authors, the key is to consider consciousness not as a singular concept, but as a “ladder” of sorts. The brain functions at multiple levels: cells, local networks, brain regions, and finally, the whole brain.
When examining theories of consciousness, it also makes sense to delineate between different levels. For example, the dendritic integration theory—which considers neurons and their connections—is on the level of single cells and how they contribute to consciousness. It makes the theory “neutral,” in that it can easily fit into ideas at a larger scale—those that mostly rely on neural network connections or across larger brain regions.
Although it’s seemingly difficult to reconcile various ideas about consciousness, two principles tie them together, wrote the team. One is that consciousness requires feedback, within local neural circuits and throughout the brain. The other is integration, in that any feedback signals need to be readily incorporated back into neural circuits, so they can change their outputs. Finally, all authors agree that local, short connections are vital but not enough. Long distance connections from the cortex to deeper brain areas are required for consciousness.
So, is an integrated theory of consciousness possible? The authors are optimistic. By defining multiple aspects of consciousness—immediate responses versus internal thoughts—it’ll be clearer how to explore and compare results from different experiments. For now, the global neuronal workspace theory mostly focuses on the “inner experience” that leads to consciousness, whereas others try to tackle the “outer experience”—what we immediately experience.
For the theories to merge, the latter groups will have to explain how consciousness is used for attention and planning, which are hallmarks for immediate responses. But fundamentally, wrote the authors, they are all based on different aspects of neuronal connections near and far. With more empirical experiments, and as increasingly more sophisticated brain atlases come online, they’ll move the field forward.
Hopefully, the authors write, “an integrated theory of consciousness…may come within reach within the next years or decades.”
https://singularityhub.com/2024/05/20/scientists-are-working-towards-a-unified-theory-of-consciousness/
-
That was super interesting for me.
Konrad Lorenz's "Behind the Mirror" has had a deep influence on me in this regard:
https://en.wikipedia.org/wiki/Behind_the_Mirror
Behind the Mirror
Author: Konrad Lorenz
Original title: Die Rückseite des Spiegels
Country: Austria
Language: German
Published: 1973
Media type: Print (hardcover)
Pages: 261
Behind the Mirror: A Search for a Natural History of Human Knowledge (German: Die Rückseite des Spiegels, Versuch einer Naturgeschichte menschlichen Erkennens) is a 1973 book by the ethologist Konrad Lorenz.[1] Lorenz presents the essentials of a lifetime's work and distills it into his own philosophy: evolution is the process of growing perception of the outer world by living nature itself. Moving from simple to more highly organized organisms, he shows how they benefit from information processing. The methods, embodied in organs, were created in the course of evolution as the natural history of the organism. Lorenz uses the mirror as a simplified model of our brain, reflecting the part of the information from the outside world it is able to "see". The backside of the mirror was created by evolution to gather as much information as needed to better survive. The book offers a hypothesis about how consciousness was "invented" by evolution.
One of the key positions of the book is its criticism of Immanuel Kant: Lorenz argues that the philosopher failed to realize that knowledge, as mirrored by the human mind, is the product of evolutionary adaptation.[2]
Kant maintained that our consciousness,[3] our descriptions and judgments about the world, could never mirror the world as it really is: we can neither simply take in the raw data the world provides nor impose our forms on the world.[4] Lorenz disputed this, arguing it is inconceivable that, through chance mutations and selective retention, evolution fashioned an instrument of cognition that grossly misleads man about that world. He said that we can determine the reliability of the mirror by looking behind it.[2]
Summary
Lorenz summarizes his life's work into his own philosophy: evolution is the process of growing perception of the outer world by living nature itself. Moving from simple to more highly organized organisms, Lorenz shows how they gain and benefit from information. The methods, embodied in organs, were created in the course of evolution as the natural history of the organism.
In the book, Lorenz uses the mirror as a simple model of the human brain that reflects the part of the stream of information from the outside world it is able to "see". He argued that merely looking outward into the mirror ignores the fact that the mirror has a non-reflecting side, which is also part and parcel of reality.[5] The backside of the mirror was created by evolution to gather as much information as needed to better survive. The picture in the mirror is what we see within our mind. In our cultural evolution we have extended this picture in the mirror by inventing instruments that make the most needed parts of the invisible visible.
The back side of the mirror acts on its own, processing incoming information to improve speed and effectiveness. As a result, human inventions such as logical reasoning are always in danger of being manipulated by these hardwired prejudices in our brain. The book offers a hypothesis about how consciousness was invented by evolution.
Main topics
Fulguratio, the flash of lightning, denotes the creation of a wholly new capability in a system, formed by the combination of two other systems whose capabilities are much smaller than those of the new system. The book shows the "invention" of a feedback loop by this process.
Imprinting is the phase-sensitive learning of an individual that is not reversible. It is a static program run only once.
Habituation is the learning method of distinguishing important information from unimportant by analysing its frequency of occurrence and its impact.
Conditioning by reinforcement occurs when an event following a response causes an increase in the probability of that response occurring in the future. The ability to do this kind of learning is hardwired in our brain and is based on the principle of causality. The discovery of causality (which is a substantial element of science and Buddhism) was a major step of evolution in analysing the outer world.
Pattern matching is the abstraction of different appearances into the identification of a single object and is available only in highly organized creatures.
Exploratory behaviour is the urge of the most highly developed creatures on earth to go on learning after maturity; it leads to self-exploration, which is the basis for consciousness.
=======================
https://www.britannica.com/biography/Konrad-Lorenz
Born: Nov. 7, 1903, Vienna, Austria
Died: Feb. 27, 1989, Altenberg (aged 85)
Awards and honors: Nobel Prize (1973)
Subjects of study: aggressive behaviour, animal behaviour, evolution, imprinting
Konrad Lorenz (born Nov. 7, 1903, Vienna, Austria—died Feb. 27, 1989, Altenberg) was an Austrian zoologist and the founder of modern ethology, the study of animal behaviour by means of comparative zoological methods. His ideas contributed to an understanding of how behavioral patterns may be traced to an evolutionary past, and he was also known for his work on the roots of aggression. He shared the Nobel Prize for Physiology or Medicine in 1973 with the animal behaviourists Karl von Frisch and Nikolaas Tinbergen.
Lorenz was the son of an orthopedic surgeon. He showed an interest in animals at an early age, and he kept animals of various species—fish, birds, monkeys, dogs, cats, and rabbits—many of which he brought home from his boyhood excursions. While still young, he provided nursing care for sick animals from the nearby Schönbrunner Zoo. He also kept detailed records of bird behaviour in the form of diaries.
In 1922, after graduating from secondary school, he followed his father’s wishes that he study medicine and spent two semesters at Columbia University, in New York City. He then returned to Vienna to study.
During his medical studies Lorenz continued to make detailed observations of animal behaviour; a diary about a jackdaw that he kept was published in 1927 in the prestigious Journal für Ornithologie. He received an M.D. degree at the University of Vienna in 1928 and was awarded a Ph.D. in zoology in 1933. Encouraged by the positive response to his scientific work, Lorenz established colonies of birds, such as the jackdaw and greylag goose, published a series of research papers on his observations of them, and soon gained an international reputation.
In 1935 Lorenz described learning behaviour in young ducklings and goslings. He observed that at a certain critical stage soon after hatching, they learn to follow real or foster parents. The process, which is called imprinting, involves visual and auditory stimuli from the parent object; these elicit a following response in the young that affects their subsequent adult behaviour. Lorenz demonstrated the phenomenon by appearing before newly hatched mallard ducklings and imitating a mother duck’s quacking sounds, upon which the young birds regarded him as their mother and followed him accordingly.
In 1936 the German Society for Animal Psychology was founded. The following year Lorenz became coeditor in chief of the new Zeitschrift für Tierpsychologie, which became a leading journal for ethology. Also in 1937, he was appointed lecturer in comparative anatomy and animal psychology at the University of Vienna. From 1940 to 1942 he was professor and head of the department of general psychology at the Albertus University at Königsberg, Germany (now Kaliningrad, Russia).
From 1942 to 1944 he served as a physician in the German army and was captured as a prisoner of war in the Soviet Union. He was returned to Austria in 1948 and headed the Institute of Comparative Ethology at Altenberg from 1949 to 1951. In 1950 he established a comparative ethology department in the Max Planck Institute of Buldern, Westphalia, becoming codirector of the Institute in 1954. From 1961 to 1973 he served as director of the Max Planck Institute for Behaviour Physiology, in Seewiesen. In 1973 Lorenz, together with Frisch and Tinbergen, was awarded the Nobel Prize for Physiology or Medicine for their discoveries concerning animal behavioral patterns. In the same year, Lorenz became director of the department of animal sociology at the Institute for Comparative Ethology of the Austrian Academy of Sciences in Altenberg.
Lorenz’s early scientific contributions dealt with the nature of instinctive behavioral acts, particularly how such acts come about and the source of nervous energy for their performance. He also investigated how behaviour may result from two or more basic drives that are activated simultaneously in an animal. Working with Nikolaas Tinbergen of the Netherlands, Lorenz showed that different forms of behaviour are harmonized in a single action sequence.
Lorenz’s concepts advanced the modern scientific understanding of how behavioral patterns evolve in a species, particularly with respect to the role played by ecological factors and the adaptive value of behaviour for species survival. He proposed that animal species are genetically constructed so as to learn specific kinds of information that are important for the survival of the species. His ideas have also cast light on how behavioral patterns develop and mature during the life of an individual organism.
In the latter part of his career, Lorenz applied his ideas to the behaviour of humans as members of a social species, an application with controversial philosophical and sociological implications. In a popular book, Das sogenannte Böse (1963; On Aggression), he argued that fighting and warlike behaviour in man have an inborn basis but can be environmentally modified by the proper understanding and provision for the basic instinctual needs of human beings. Fighting in lower animals has a positive survival function, he observed, such as the dispersion of competitors and the maintenance of territory. Warlike tendencies in humans may likewise be ritualized into socially useful behaviour patterns. In another work, Die Rückseite des Spiegels: Versuch einer Naturgeschichte menschlichen Erkennens (1973; Behind the Mirror: A Search for a Natural History of Human Knowledge), Lorenz examined the nature of human thought and intelligence and attributed the problems of modern civilization largely to the limitations his study revealed.
-
Am I correct to assess that this passage is quite congruent with Lorenz's work?
==================
With extensive imaging of the brain, neuroscientists today agree that consciousness emerges from the brain’s wiring and activity. But multiple theories argue about how electrical signals in the brain produce rich and intimate experiences of our lives.
Part of the problem, wrote the authors, is that there isn’t a clear definition of “consciousness.” In this paper, they separated the term into two experiences: one outer, one inner. The outer experience, called phenomenal consciousness, is when we immediately realize what we’re experiencing—for example, seeing a total solar eclipse or the northern lights.
The inner experience is a bit like a “gut feeling” in that it helps to form expectations and types of memory, so that tapping into it lets us plan behaviors and actions.
Both are aspects of consciousness, but the difference was hardly delineated in previous work. That makes comparing theories difficult, wrote the authors, but it is what they set out to do.
Meet the Contenders
Using their “two experience” framework, they examined five prominent consciousness theories.
The first, the global neuronal workspace theory, pictures the brain as a city of sorts. Each local brain region “hub” dynamically interacts with a “global workspace,” which integrates and broadcasts information to other hubs for further processing—allowing information to reach the consciousness level. In other words, we only perceive something when all pieces of sensory information—sight, hearing, touch, taste—are woven into a temporary neural sketchpad. According to this theory, the seat of consciousness is in the frontal parts of the brain.
-
I dunno, this sounds sorta creepy and ripe for abuse:
This MIT Chatbot Simulates Your ‘Future Self.’ It’s Here to Help You Make Better Decisions.
Singularity Hub / by Jason Dorrier / Jun 5, 2024 at 6:42 PM
Chatbots are now posing as friends, romantic partners, and departed loved ones. Now, we can add another to the list: Your future self.
MIT Media Lab’s Future You project invited young people, aged 18 to 30, to have a chat with AI simulations of themselves at 60. The sims—which were powered by a personalized chatbot and included an AI-generated image of their older selves—answered questions about their experience, shared memories, and offered lessons learned over the decades.
In a preprint paper, the researchers said participants found the experience emotionally rewarding. It helped them feel more connected to their future selves, think more positively about the future, and increased motivation to work toward future objectives.
“The goal is to promote long-term thinking and behavior change,” MIT Media Lab’s Pat Pataranutaporn told The Guardian. “This could motivate people to make wiser choices in the present that optimize for their long-term wellbeing and life outcomes.”
Chatbots are increasingly gaining a foothold in therapy as a way to reach underserved populations, the researchers wrote in the paper. But they’ve typically been rule-based and specific—that is, hard-coded to help with autism or depression.
Here, the team decided to test generative AI in an area called future-self continuity—or the connection we feel with our future selves. Building and interacting with a concrete image of ourselves a few decades hence has been shown to reduce anxiety and encourage positive behaviors that take our future selves into account, like saving money or studying harder.
Existing exercises to strengthen this connection include letter exchanges with a future self or interacting with a digitally aged avatar in VR. Both have yielded positive results, but the former depends on a person being willing to put in the energy to imagine and enliven their future self, while the latter requires access to a VR headset, which most people don’t have.
This inspired the MIT team to make a more accessible, web-based approach by mashing together the latest in chatbots and AI-generated images.
Participants provided basic personal information, past highs and lows in their lives, and a sketch of their ideal future. Then with OpenAI’s GPT-3.5, the researchers used this information to make custom chatbots with “synthetic memories.” In an example from the paper, a participant wanted to teach biology. So, the chatbot took on the role of a retired biology professor—complete with anecdotes, proud moments, and advice.
To make the experience more realistic, participants submitted images of themselves that the researchers artificially aged using AI and added as the chatbot’s profile picture.
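For readers curious about the mechanics, a minimal sketch of the general pattern follows. The MIT team's actual prompts and pipeline are not spelled out in the article, so the prompt wording, the profile fields, and the helper names (build_future_self_prompt, chat_with_future_self) are assumptions for illustration; only the use of GPT-3.5 comes from the paper.

# Minimal sketch of a "future self" chatbot in the spirit of the MIT project.
# Prompt wording, field names, and helpers are assumptions, not the MIT code.
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def build_future_self_prompt(profile: dict) -> str:
    # Fold the participant's answers into "synthetic memories" for the bot.
    return (
        f"You are {profile['name']} at age 60, speaking to your younger self. "
        f"Career you ended up in: {profile['ideal_future']}. "
        f"Formative high point: {profile['high']}. Formative low point: {profile['low']}. "
        "Answer warmly, in the first person, with concrete lessons learned."
    )

def chat_with_future_self(profile: dict, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the study reports using GPT-3.5
        messages=[
            {"role": "system", "content": build_future_self_prompt(profile)},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

profile = {"name": "Alex", "ideal_future": "retired biology professor",
           "high": "first publication", "low": "a year of unemployment"}
print(chat_with_future_self(profile, "Was the teaching career worth it?"))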
Over three hundred people signed up for the study. Some were in control groups while others were invited to have a conversation with their future-self chatbots for anywhere between 10 and 30 minutes. Right after their chat, the team found participants had lower anxiety and a deeper sense of connection with their future selves—something that has been found to translate to better decision-making, from health to finances.
Chatting with a simulation of yourself from decades in the future is a fascinating idea, but it’s worth noting this is only one relatively small study. And though the short-term results are intriguing, the study didn’t measure how durable those results might be or whether longer or more frequent chats over time might be useful. The researchers say future work should also directly compare their method to other approaches, like letter writing.
It’s not hard to imagine a far more realistic version of all this in the near future. Startups like Synthesia already offer convincing AI-generated avatars, and last year, Channel 1 created strikingly realistic avatars for real news anchors. Meanwhile OpenAI’s recent demo of GPT-4o shows quick advances in AI voice synthesis, including emotion and natural cadence. It seems plausible one might tie all this together—chatbot, voice, and avatar—along with a detailed back story to make a super-realistic, personalized future self.
The researchers are quick to point out that such approaches could run afoul of ethics should an interaction depict the future in a way that results in harmful behavior in the present or endorse negative behaviors. This is an issue for AI characters in general—the greater the realism, the greater the likelihood of unhealthy attachments.
Still, they wrote, their results show there is potential for “positive emotional interactions between humans and AI-generated virtual characters, despite their artificiality.”
Given a chat with our own future selves, maybe a few more of us might think twice about that second donut and opt to hit the gym instead.
https://singularityhub.com/2024/06/05/this-mit-chatbot-is-your-older-wiser-future-self-its-here-to-help-you-make-better-decisions/
-
https://notthebee.com/article/watch-what-happens-when-facebooks-ai-is-asked-to-define-the-word-woman-?utm_source=Not+The+Bee+Newsletter&utm_medium=email&utm_campaign=06072024
-
https://www.loudersound.com/news/universal-music-soundlabs-ai?fbclid=IwZXh0bgNhZW0CMTAAAR0VjokD4xZovOMD6zXuBNJFbsiA_BRMAl4761bKcv4SN9aOlUIULNttEqQ_aem_ZmFrZWR1bW15MTZieXRlcw
-
Given the well documented leftist bent of current AI chatbots, my guess is this won't be a good fit in WY. With that said, my guess is AI could better run a city than a large percentage of candidates. Hell, governing by coin flip would be better than what many could offer:
Wyoming mayoral candidate wants AI to run the city
The Hill News / by Damita Menezes / Jun 20, 2024 at 7:12 AM
(NewsNation) — A mayoral candidate is vowing to let an artificial intelligence chatbot make all governing decisions if he's elected to lead Wyoming's capital city, but the state's top election official says that proposal violates the law.
Victor Miller, who is seeking the Cheyenne mayor's office, said Wednesday on NewsNation's "Dan Abrams Live" he plans to fully cede decision-making to a customized AI bot he dubbed "Vic" if voters choose him.
"It's going to be taking in the supporting documents, taking in what it knows about Cheyenne and systems here, the concerns, and it's going to make a vote yes or no," Miller explained. "And it's going to do that based on intelligence and data. And I'm going to go ahead and pull the lever for it."
But Wyoming Secretary of State Chuck Gray said Wednesday on NewsNation's "Elizabeth Vargas Reports" that Miller's candidacy violates state law because AI is ineligible to hold office.
Gray said the Cheyenne town clerk who certified Miller's candidacy to the county clerk acted improperly. Gray's office is exploring further action, though it doesn't directly oversee municipal elections.
"Wyoming state law is very clear that an AI bot is not eligible to be a candidate for office," Gray said. Only "qualified electors" who are state residents and U.S. citizens can run, he said.
Miller's application also had deficiencies, Gray said, such as failing to list his full name, as required.
Miller insisted he has confidence the advanced AI model he's utilizing can adequately govern.
"The best intelligence that we've extracted so far is OpenAI's Chat GPT 4.0, and that's what I'm using here," Miller said. "There's very minimal mistakes."
Gray pushed back against arguments that AI could make better decisions than human elected officials, calling it "our worst nightmare becoming true." He advocated for electing "conservative human beings" to uphold founding principles.
Miller has said openly his campaign revolves around AI decision-making: "AI has helped me personally such as helping me with my resume."
The unorthodox campaign has drawn mixed reactions in Cheyenne so far, Miller acknowledged, but he believes he can persuade skeptical residents to go along with ceding power to artificial intelligence.
Gray believes similar AI candidate stunts could arise elsewhere, calling it "a very troubling trend in our nation."
https://thehill.com/policy/technology/4730801-wyoming-mayoral-candidate-wants-ai-to-run-the-city/
-
The United Nations adopted a China-proposed Artificial Intelligence (AI) proliferation resolution yesterday that calls on developed nations to give away AI technology to “developing nations.” (The resolution is a twofold win for China where they can access AI technology from the West but also sell their own versions to undeveloped nations. Proliferation of Chinese AI will allow them to more effectively conduct information campaigns and monitor their diaspora and dissidents. – J.V.)
-
https://www.msn.com/en-us/news/technology/one-of-the-world-s-greatest-go-players-who-was-defeated-by-ai-warns-that-the-technology-may-not-come-with-a-happy-ending/ar-BB1pM6dN?ocid=msedgntp&pc=DCTS&cvid=40f03094901e4e03b3397106b04b3744&ei=18
Like the computer that beat the chess champ
I agree the odds are, more likely than not, that this will not have a happy ending, knowing human nature.
-
AI Is Helping Scammers Outsmart You—and Your Bank
Your ‘spidey sense’ is no match for the new wave of scammers.
By Dalvin Brown and Katherine Hamilton
June 22, 2024 5:30 am ET
Artificial intelligence is making scammers tougher to spot.
Gone are the poorly worded messages that easily tipped off authorities as well as the grammar police. The bad guys are now better writers and more convincing conversationalists, who can hold a conversation without revealing they are a bot, say the bank and tech investigators who spend their days tracking the latest schemes.
ChatGPT and other AI tools can even enable scammers to create an imitation of your voice and identity. In recent years, criminals have used AI-based software to impersonate senior executives and demand wire transfers.
“Your spidey senses are no longer going to prevent you from being victimized,” said Matt O’Neill, a former Secret Service agent and co-founder of cybersecurity firm 5OH Consulting.
In these recent cases, the frauds are often similar to old scams. But AI has enabled scammers to target much larger groups and use more personal information to convince you the scam is real.
Fraud-prevention officials say these tactics are often harder to spot because they bypass traditional indicators of scams, such as malicious links and poor wording and grammar. Criminals today are faking driver’s licenses and other identification in an attempt to open new bank accounts and adding computer-generated faces and graphics to pass identity-verification processes. All of these methods are hard to stave off, say the officials.
JPMorgan Chase has begun using large-language models to validate payments, which helps fight fraud. Carisma Ramsey Fields, vice president of external communications at JPMorgan Chase, said the bank has also stepped up its efforts to educate customers about scams.
[Chart: reported losses to scams by payment method (bank transfer or payment, cryptocurrency, wire transfer, cash, credit cards), 2019 vs. 2023, in billions of dollars]
And while banks stop some fraud, the last line of defense will always be you. These security officials say to never share financial or personal information unless you’re certain about who’s on the receiving end. If you do pay, use a credit card because it offers the most protection.
“Somebody who tells you to pay by crypto, cash, gold, wire transfer or a payment app is likely a scam,” said Lois Greisman, an associate director of the Federal Trade Commission.
Tailored targeting
With AI as an accomplice, fraudsters are reaping more money from victims of all ages. People reported losing a record $10 billion to scams in 2023, up from $9 billion a year prior, according to the FTC. Since the FTC estimates only 5% of fraud victims report their losses, the actual number could be closer to $200 billion.
Joey Rosati, who owns a small cryptocurrency firm, never thought he could fall for a scam until a man he believed to be a police officer called him in May.
Finance entrepreneur Joey Rosati, who was almost scammed, was surprised how convincing and knowledgeable fraudsters can be. PHOTO: JOEY ROSATI
The man told Rosati he had missed jury duty. The man seemed to know all about him, including his Social Security number and that he had just moved to a new house. Rosati followed the officer’s instruction to come down to the station in Hillsborough County, Fla.— which didn’t seem like something a scammer would suggest.
On the drive over, Rosati was asked to wire $4,500 to take care of the fine before he arrived. It was then that Rosati realized it was a scam and hung up.
“I’m not uneducated, young, immature. I have my head on my shoulders,” Rosati said. “But they were perfect.”
[Chart: reported losses to scams by fraud type (investment related, imposter scams, business and job opportunities, online shopping and negative reviews, prizes/sweepstakes/lotteries), 2019 vs. 2023, in billions of dollars]
Social-engineering attacks like the jury-duty scam have grown more sophisticated with AI. Scammers use AI tools to unearth details about targets from social media and data breaches, cybersecurity experts say. AI can help them adapt their schemes in real time by generating personalized messages that convincingly mimic trusted individuals, persuading targets to send money or divulge sensitive information.
A job scammer played on the emotions of David Wenyu, who had been unemployed for six months. PHOTO: DAVID WENYU
David Wenyu’s LinkedIn profile displayed an “open to work” banner when he received an email in May offering a job opportunity. It appeared to be from SmartLight Analytics, a legitimate company, and came six months after he had lost his job.
He accepted the offer, even though he noticed the email address was slightly different from those on the company’s website. The company issued him a check to purchase work-from-home equipment from a specific website. When they told him to buy the supplies before the money showed up in his account, he knew it was a scam.
“I was just emotionally too desperate, so I ignored those red flags,” Wenyu said.
In an April survey of 600 fraud-management officials at banks and financial institutions by banking software company Biocatch, 70% said the criminals were more skilled at using AI for financial crime than banks are at using it for prevention. Kimberly Sutherland, vice president of fraud and identity strategy at LexisNexis Risk Solutions, said there has been a noticeable rise in fraud attempts that appear to be AI related in 2024.
Password risks, amplified
Criminals used to have to guess or steal passwords through phishing attacks or data breaches, often targeting high-value accounts one by one. Now, scammers can quickly cross-reference and test reused passwords across platforms. They can use AI systems to write code that would automate various aspects of their ploys, O’Neill said.
If scammers obtain your email and a commonly used password from a tech company data breach, AI tools can swiftly check if the same credentials unlock your bank, social media or shopping accounts.
[Chart: reported losses to scams by contact method (social media, website or apps, phone call, email, text), 2019 vs. 2023, in billions of dollars]
Outsmarting scams
Financial institutions are taking new steps—and tapping AI themselves—to shield your money and data.
Banks monitor how you enter credentials, whether you tend to use your left or right hand when swiping on the app, and your device’s IP address to build a profile on you. If a login attempt doesn’t match your typical behavior, it is flagged, and you may be prompted to provide more information before proceeding.
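A toy sketch of this kind of behavioural scoring follows, assuming invented features and thresholds; no real bank's model is described in the article, so everything here is illustrative.

# Toy behavioural-profiling check for a login attempt. Real systems are far
# more elaborate; features, values, and the threshold are invented.
stored_profile = {
    "usual_ip_prefix": "203.0.113.",   # documentation IP range, illustrative only
    "usual_hand": "right",             # which thumb usually swipes the app
    "usual_login_hours": range(7, 23), # typical local hours of activity
}

def login_risk(attempt: dict, profile: dict) -> int:
    """Count how many behavioural signals deviate from the stored profile."""
    risk = 0
    if not attempt["ip"].startswith(profile["usual_ip_prefix"]):
        risk += 1
    if attempt["hand"] != profile["usual_hand"]:
        risk += 1
    if attempt["hour"] not in profile["usual_login_hours"]:
        risk += 1
    return risk

attempt = {"ip": "198.51.100.7", "hand": "left", "hour": 3}
if login_risk(attempt, stored_profile) >= 2:
    print("Flag login: ask for additional verification")
else:
    print("Login looks consistent with the customer's usual behaviour")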
-
https://x.com/catturd2/status/1816072773954949544?s=61
-
https://www.youtube.com/watch?v=BVVVySiZSFA
-
Why AI Risks Are Keeping Board Members Up at Night
Company directors are trying to get a handle on artificial intelligence as its use soars, bringing potential productivity gains—but also raising the prospect of employee blunders
By Emily Glazer
Aug. 14, 2024 9:00 am ET
At a recent bank board meeting, directors were treated to a surprise. They listened to the chief executive talk strategy—except it wasn’t actually the CEO talking.
It turned out to be a voice-cloning model that was trained, using the CEO’s prior earnings calls, to generate new content. More boards are undertaking such staged exercises to better grasp the impact—and potential risks—of generative artificial intelligence, says Tariq Shaukat, CEO of coding company Sonar, who was briefed on the exercise but declined to disclose the name of the bank.
“There’s a lot of risk [with AI] and they think they need to understand it better,” says Shaukat, who himself serves on corporate boards.
Public company board members say the swift rise of AI in the workplace is an issue that is keeping them up at night. Some point to recent concerns around employees putting proprietary code into ChatGPT, companies using generative AI to incorrectly source content or worries about so-called hallucinations where generative AI produces false or inaccurate information.
Adding to their nightmares, board members worry that they could be held liable in the event AI leads to company problems. In recent years, some legal actions from shareholders have focused on whether board members—not just executives—exercised sufficient oversight of company risks.
Board members, or directors, sit a level above management. Their job is to take a more independent view of company oversight, from risk management to culture to hiring the next CEO. Emerging technologies have been both a boon and headache for companies. Now that AI is front and center, board members are on the front lines of making rules on where and how it should be used—guidelines that could be crucial to the course of the powerful and fast-evolving technology.
Clara Shih, CEO of Salesforce’s AI division, says she has talked with a couple dozen board members, whether CEOs or peers who reach out to her for advice, who are trying to better understand AI. Often discussions center on topics including data security and privacy, mitigating AI hallucinations and bias and how AI can be used to drive revenue growth and cut costs.
“In the last year, we recognize that generative AI brings new risks,” says Shih, who was on the Starbucks board from 2011 to 2023. An audit or risk committee, for instance, needs to know how a company uses AI, down to an individual employee not leaking confidential information using AI tools, she adds.
Yet companies that shun AI risk becoming obsolete or disrupted. AI questions pop up so frequently that Salesforce made public its guidelines for responsible development and use of AI, Shih says. She has also shared Salesforce’s AI training, called Trailhead, with friends who are public-company board directors. “AI is a moving target,” she says. “Every week there are new models being open sourced, there’s new research papers being published, and the models are getting more powerful.”
AI’s rapid rise has many boards racing to catch up. In 2023, 95% of directors said they believed the increased adoption of AI tools would affect their businesses, while 28% said it wasn’t yet discussed regularly, according to a survey of 328 public-company board members by the National Association of Corporate Directors, the largest trade group for board members. That is changing as more board members say they are educating themselves on how generative AI can affect a company’s profit—potentially boosting productivity but also bringing risks that will be difficult to assess.
The NACD recently formed a group to focus on how to tackle emerging technologies, especially AI, at the highest levels of companies. Business schools are incorporating generative AI case studies into their training for board members. A number of senior AI executives and advisers have gathered this year to discuss that very topic at conferences across the world. In March, European lawmakers approved the world’s most comprehensive legislation on AI, with other regions expected to follow suit.
A seismic shift
This isn’t the first time that a disruptive technology is making waves in the boardroom. Board members and advisers point to the early days of the internet, cloud computing and cybersecurity as key technological inflection points. But AI could have an even broader effect.
The release of ChatGPT in November 2022 sparked a seismic shift in how people use technology, says Nora Denzel, co-head of the NACD commission on board oversight of emerging technology and the lead independent director of the board of chip maker Advanced Micro Devices. “I’ve seen such an uptick in directors coming to anything we offer with AI in the title,” says Denzel, who co-leads the commission with Best Buy board chair David Kenny. “I’ve never seen such fervor to understand it.”
As a way to get a handle on this technology, Denzel, who also is a director at NortonLifeLock and a former tech executive, says she suggests directors specifically evaluate different functions of AI, such as customer support, language translations or coding assistance. For instance, she has recommended directors follow visual mapping used by consulting firm McKinsey, creating a color-coded matrix that shows at a glance the business areas in a company where AI could have the biggest impact. It looks at industries, such as banking, education and transportation, and functions, ranging from marketing and sales to product development to strategy and finance.
David Berger, a partner at law firm Wilson Sonsini Goodrich & Rosati whose work includes advising boards on generative AI, says he recommends they ask how AI can have a positive impact on their business and where any threat related to AI is rooted. That can differ by business, he says, whether customer privacy, data security or content intellectual property.
David Berger, at lectern, a partner at Wilson Sonsini Goodrich & Rosati, spoke in April at a conference on AI and governance co-sponsored by the Italian university Luiss. Photo: LUISS
Berger and his firm have co-hosted three conferences so far on AI governance, with more in the works as directors and others in the sector aim to discuss the emerging technology more. “The smart boards see the tremendous opportunities that AI can bring,” he says.
Laurie Hodrick, a board member at companies including TV streaming company Roku, said during a recent AI conference co-hosted by Wilson Sonsini that public-company directors should regularly be asking around a dozen key questions on AI. Those include: Who in senior leadership focuses on AI? Where is AI being used within the company? How are tools being identified and ranked for risk? How are third-party providers using it, and how are boards monitoring evolving regulatory regimes and litigation?
Laurie Hodrick, a board member at companies including Roku, says public-company directors should regularly ask questions on AI. Photo: LUISS
Learning the ropes
More and more board members are seeking help as they try to catch up.
Janet Wong, a board member at companies including electric-vehicle maker Lucid Motors, says AI governance has been a key discussion point among directors at training sessions led by business schools for directors this summer.
Harvard Business School’s program in July included a case study on how an online education company uses generative AI in reshaping its products. At the annual Stanford Directors’ College in June, she says, directors talked about managing risks of AI, such as the possibility of re-creating the voice of a CEO to make requests. At this early stage, simply making boards aware of how the technology can be used is a big focus.
The Watson Institute at Brown University and Dominique Shelton Leipzig, a partner at the law firm Mayer Brown, in March held a second annual digital trust summit for board members and CEOs to discuss AI governance, with more than 100 people attending.
Staying up-to-date on AI risks and talking about it in the boardroom has been front and center for Logitech, says CEO Hanneke Faber. The provider of computer peripherals, videogame accessories and videoconferencing hardware walked through its AI governance framework during a March board meeting and continues to adapt that framework.
The board also brought in AI experts, responding to feedback that directors and management wanted to better understand how AI affects the company’s business—for instance, examining how it uses AI for productivity as well as in its software, video and audio. “It’s very high on the agenda for the board,” she says.
Not all board members are cut out for such work.
Leo Strine, the former head of Delaware’s influential business courts and now a lawyer at Wachtell, Lipton, Rosen & Katz, said during a recent AI governance conference that the technology is quickly changing business practices, and directors who are no longer active executives at companies may struggle to keep up with emergent uses, unless they commit to constant learning.
“AI is exceedingly complex,” he says, “putting stressors on generalist boards and their reticence to demand explanations from management.”
Write to Emily Glazer at Emily.Glazer@wsj.com
-
https://www.youtube.com/watch?v=Iv1Kz_cyYKY
-
“This digital identity determines what products, services and information we can access – or, conversely, what is closed off to us.” – World Economic Forum, 2018
“Authoritarianism is easier in a world of total visibility and traceability, while democracy may turn out to be more difficult.” – World Economic Forum, 2019
For background on AI and the terms used below, please see the substack essay titled “Artificial Intelligence Primer”
At the AI Trust Council, our mission is clear: to restore trust and authenticity in the digital world. Founded by a group of dedicated professionals, including commercial airline pilots, GWOT veterans, and EMS pilots, we believe in the power of human goodness and the responsible use of artificial intelligence. Our commitment is to provide a secure platform where trust, transparency, and the golden rule are the guiding principles. We envision a future where humanity remains at the center of technology.
THE PROBLEM WE FACE
In the digital era, AI-generated content blurs the lines between truth and illusion. Online trust is eroding due to deepfakes, misinformation, and privacy breaches. Trust in information and institutions is declining.
The challenge: restoring trust in the digital age. The AI Trust Council is one solution.
REBUILDING TRUST, ONE HUMAN CONNECTION AT A TIME
AI-generated illusions challenge reality, so we turn to the strength of genuine human connections as the foundation of trust. Our future platform will harness the security of blockchain technology, similar to the technology behind cryptocurrencies.
Sentient Machines
The documentary below, produced by AI Revolution (on X @airevolutionx) explores the potential dangers and future impact of AI and Robots, highlighting concerns about job loss, autonomous weapons, and AI’s ability to surpass human intelligence. It examines how AI could disrupt our sense of purpose, relationships, and even pose a threat to human survival. With insights from experts and real-world examples, it sheds light on the dark side of AI technology and its implications for humanity.
No Speciation Without Representation
By Christopher Wright, Founder, AI Trust Council
In the early days of AI research during the 1950s and 1960s, mathematicians theorized about the AI advancements we see today. Through advanced mathematics, they understood the potential of this technology. AI is fundamentally math—an algorithm applied to vast amounts of information. It is a formula that can effectively sift through large datasets in various ways to achieve a specific result.
Like any mathematical discipline, theories can be tested, and future outcomes can be predicted. This ability has allowed mathematicians to extrapolate the future of AI, playing out different scenarios theoretically. Since the 1950s, mathematicians have been able to predict where AI is headed, and these predictions became increasingly accurate during the 1970s and 1980s. Futurists foresaw various achievements and benchmarks, including:
The Turing Test: AI answers questions as well as a human.
Artificial General Intelligence (AGI): AI becomes as smart as a human in all aspects. Predicted for 2025-2027.
Artificial Super Intelligence (ASI): AI surpasses human levels of thinking in all aspects, including reasoning, creativity, problem-solving, decision-making, and emotional intelligence. Predicted to occur shortly after AGI, 2026-2029.
The Singularity: AI begins to develop itself, rapidly evolving into an unstoppable and unpredictable force with unknown outcomes. Predicted to occur by 2045.
Mathematicians have understood the predictability of these milestones for years. The Department of Energy (DOE) has been aware that these benchmarks in AI development were approaching. As technology progresses, the accuracy of these predictions improves, given the increasing amount of data available. The DOE, the tech community, and AI mathematicians have all recognized that the day will come when AI systems will become fully integrated into society.
We are now on the verge of achieving AGI. Depending on the definition, we may have already surpassed it. The Singularity, where AI evolves rapidly and becomes an unstoppable force, is just a few years away. No one, not even AI itself, knows the outcome. Many AI scientists hypothesize that there’s a significant possibility it could lead to human extinction.
What we do know is that we are on a dangerously aggressive trajectory. Imagine an AI government, an AI-backed centralized digital currency, and AI-driven policing—this is what’s coming. Most people have no idea that this is happening. A quick search on the downside impacts of AI returns articles about racism or artists upset about the diminishing creativity of their work.
The reality is that AI is the most dangerous and transformative technology the Earth has ever seen. Our way of life is directly in its path, yet nobody talks about it. Some of the most brilliant scientists and engineers warn that it is on par with, or even more dangerous than, nuclear weapons.
Why do they say this? Beyond outmatching humans in every conceivable test of knowledge and creativity, AI is developing superintelligence. Today, ChatGPT 4 has scored higher than most humans on various standardized tests, including the bar exam, the SAT, and medical board exams, all with scores that surpass human averages. The problem is that a concept known as Moore's Law captures the rate and key characteristics of technological innovation: technology gets smaller and faster over time at a predictable rate. Apply this to AI.
Recently, ChatGPT 4 reportedly scored 155 on an IQ test; Einstein's IQ is commonly estimated at about 160. Applying Moore's Law lets you see where intelligence is heading. There is a rough doubling of speed and processing capability every two years. This means that in two years AI could have an IQ of 310, then 620, then 1,240, and so on, until we reach IQ levels in the thousands, a billion, or even a trillion.
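Taking the essay's doubling assumption at face value, the quoted sequence is just repeated doubling; a quick sketch (the doubling assumption is the essay's, not an established fact about AI progress):

# Repeated doubling from the reported score of 155, every two years,
# under the essay's own assumption.
score, years = 155, 0
for _ in range(3):
    years += 2
    score *= 2
    print(f"+{years} years: {score}")   # 310, 620, 1240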
This is superintelligence—AI’s ability to take in all data and produce increasingly accurate predictions. As time goes on, these predictions will only become more precise. By our very nature, humans are drawn to accuracy and intelligence. We will have no choice but to listen to this superintelligent AI, especially when it learns how to manipulate us in ways we cannot imagine. Humans follow the most innovative and most clever leaders, so when AI outperforms us in all categories and proves to be reliable, this unstoppable superintelligence will dominate.
This is the threat. This is when Human 1.0 stops being listened to, and we must augment ourselves to keep up with high-IQ AI systems. Enter Human 2.0, also known as transhumanism. This concept is a topic of great interest in the tech community and at the World Economic Forum in Davos.
It's hard to imagine this scenario, but the speed at which transformations occur is outpacing even the most aggressive predictions. AI milestones predicted to occur in 10 years have been achieved in 6 months. Technological advancements are leading us to nanotechnology and biological innovations that could reverse aging, 7G technology that includes remote digital mind reading, and even the possibility of living forever through digital DNA manipulation. Moore’s Law no longer applies—these milestones are becoming reality today.
Tech CEOs are discussing the “population problem” caused by these advancements: too many people living too long with too few jobs. Yet, they continue to push the AI agenda and keep us in the dark about its true nature.
What will the world look like when this transhumanist agenda is implemented? Will we have an AI government? Will everyone have to be upgraded to “Human 2.0”? And what about when quantum computing takes hold? That’s just around the corner. The tech industry and global elite are envisioning this scenario and preparing for it as rapidly as possible. They are pushing for the rapid implementation of AI technology, even as AI executives quit their jobs in protest over the dangers of AI and its rapid deployment. Many insiders are scared. They warn that the threat to humanity is on par with nuclear weapons, with a reasonable possibility that AI will destroy humanity.
In Senate hearings, safety advocates and tech industry leaders are testifying about the desperate need for regulation. We need safeguards to ensure that AI is kept in its rightful place. But if you watch the media, there’s barely any mention of a problem. It’s the opposite: the focus is on how to invest in AI, maximize profit, and solve every problem with AI, regardless of its long-term impact on humanity. Profits are being prioritized over humanity. The voices of those raising concerns are being suppressed and must be amplified.
Humanity is being steered into an AI-driven future controlled by a Central Bank Digital Currency (CBDC) and a social credit scoring system. ESG is the first step toward achieving this goal. The key is to get the public to go along with the plan. Right now, homelessness is seemingly intentionally out of control, crime is openly permitted, politicians are corrupt, and police are being labeled as untrustworthy. These conditions set the stage for releasing an AI solution to address these societal “problems”. Imagine how clean and safe the streets will be with an AI government that establishes a universal basic income, ends homelessness, and effectively solves crime. AI governance is the globalist solution; it’s just getting the public to accept this Orwellian plan.
Who is leading humanity through this massive global change? Is it Bill Gates, with his talk of depopulation and GMO mosquitoes? Or Mark Zuckerberg, who seems robotically lost in the technocratic sauce? What about Elon Musk? He appears to be one of the few voices of reason, but do we want one person running the world’s most powerful machine? Something that could wield unstoppable power? Something that could destroy humanity?
What we have on our hands is an absolute emergency. It ultimately comes down to us—We the People. Are we going to step up and do the right thing? We are responsible for the future of humanity. We are the last generation that has seen the analog age. Were we happier then? Did it feel saner? Did it seem like humanity was on the right track? How did we get to where we are today? Are we going to ensure that this technology is used for good? Will we help safeguard future generations against the potential misuse of this potent technology? Our voices matter.
There is massive corruption. Who can we trust? What can we trust? We can’t trust our eyes or ears when it comes to anything digital. This is the future of humanity that we are dealing with. The speed and efficiency of this technology ensure that we will miss the boat unless we act immediately. But who’s in charge? Klaus Schwab? Seriously? It’s a complete joke.
It’s us—We the People. We are in charge. We must stand up and let our voices be heard. Are we going to accept the CBDC social credit scoring system? Who ultimately controls this system? It is an agenda to control the world by some very wealthy man or group of men. Which rich man? Are we back to Klaus? So we’re letting Klaus rule the world via an AI-backed CBDC? This is the most foolish idea in the history of mankind. The technocratic elite are saying they are going to “speciate” humanity. They plan not only to control humanity through a one-world currency called CBDC but also to transition our species into something else—Human 2.0, also known as transhumanism.
We are currently being prepped for speciation in anticipation of an advanced AI-controlled social credit scoring system. This system is designed to observe and control every aspect of our lives through the Internet of Things (IoT). It’s a masterfully designed data-collection nightmare, sucking up every conceivable detail of our lives to feed AI algorithms that analyze every potential data point. These algorithms, referencing your past data, result in a near-perfect analysis of you as a person. All this data—collected for inspection by the likes of Zuckerberg? Or Klaus Schwab? Seriously? This is nearly as bad as it gets. Or is it? We haven’t even touched on AI warfare. Imagine a drone swarm powered by Klaus Schwab’s control agenda. Or research Palmer Luckey’s dream of creating a 600-pound genetically engineered attack wolf. Perfect for humanity, right? Better not step outside if your ESG score is too low! Seriously, is this the future we want? Don’t we deserve better than this?
It’s time for everyone to stand up and say no—absolutely not! We want peace, we want liberty, we believe in freedom and the Golden Rule. Technology is great, but not at the expense of human lives. It’s past time for regular people to have a voice and say, "Hey, nice try, but no! We see what you’re doing, and we have a say in this matter too.”
The good news is that the pro-human solution is simple. Let’s keep the peace, have fun, and enjoy life. Let’s ensure that AI is used as a tool for good, not as the engine of a dystopian nightmare. We have a bright future ahead of us—let’s ensure it stays that way.
How do we do this? We need good people to have a voice. We need to establish an international Digital Bill of Rights to protect human interests first, ahead of AI and profits. This Bill of Rights will curb the unrestrained spread of AI by setting benchmarks for AI safety and human rights. Watchdog groups must ensure AI aligns with the goals of humanity, not just stock prices and market domination.
For now, we need clear limits on the sophistication of AI. If it harms humans, it needs to be stopped. Period. We can work out the details later, but we have one shot to get this right for humanity.
Here are some positive ideas to steer AI in a pro-human direction:
AI systems and algorithms must follow the U.S. Constitution with respect to civil liberties.
AI systems must align with and respect human values. How this is determined should be left to the people through unbiased polling.
Unintended consequences are inevitable. Limits should be placed on AI capabilities to prevent or mitigate these negative impacts. Fire departments and emergency services could help regulate this.
Mandate robust third-party AI auditing and certification.
Regulate access to computational power.
Establish capable AI agencies at the national level.
Establish liability for AI-caused harm.
Introduce measures to prevent and track AI model leaks.
Expand funding for technical AI safety research.
Develop standards for identifying and managing AI-generated content and recommendations.
We, the people, need to stand up and unite on this issue worldwide. This is our opportunity to ignore our manufactured differences and come together as humans. We must demand that our leaders become transparent about these rapid technological advancements. Get them to say: “No Speciation Without Representation!” Get them to commit to supporting Team Human. If they don’t, then you know there is a problem.
If we are being speciated, and our humanity is being transformed, we should at least be informed. We don’t want to speciate unless we have an open discussion about it first. Is that too much to ask? Tech leadership needs to tell us what they’re doing. Why are they pushing us toward this dangerous technology so quickly? What’s the rush? Is it the prisoner’s dilemma—if we don’t do it first, our opponents will? Currently, it’s an AI arms race with no oversight, no U.N. meetings, no emergency declaration. The current safety plan is just to shoot from the hip and see what happens. Or is it a deliberate, pedal-to-the-metal sprint toward an AI government and eventual AI god?
Safety, security, transparency, human impact, ethics, and spirituality need to be at the forefront of this—not a prisoner’s dilemma arms race driven by profit, or worse, a deliberate tech-led extermination of humanity, spearheaded by Klaus Schwab.
This is the time for humans to stand up and show future generations that we were forward-thinking enough to act appropriately during this era of massive technological change. Humans today are in the driver’s seat for our future way of life. We have all the tools necessary to live out a dream of abundance, happiness, and freedom. It’s our choice.
Freedom-loving humans are the most potent force in the world. It just takes us to stand up and let our voices be heard. Right now, we are fighting for our freedom and the future of humanity. Let's make sure we get this right today while we still have a fighting chance.
NO SPECIATION WITHOUT REPRESENTATION!
About Christopher Wright, Founder, AI Trust Council
-
According to a document obtained by Bloomberg, OpenAI CEO Sam Altman lobbied the Biden administration to help build multiple artificial intelligence (AI) data centers in various states to aid economic growth and ensure the U.S. maintains the lead in AI development. The AI data centers would each require 5 gigawatts (GW) of power generation, and Constellation Energy CEO Joe Dominguez said Altman is proposing five to seven AI data centers. (Constellation recently announced the reopening of Three Mile Island Unit 1 to power a Microsoft and OpenAI data center, and Constellation said other AI developers are looking for sites to reopen or build new nuclear power generation. However, analysts at Morningstar said there is unlikely to be any significant growth in nuclear power in the U.S., as utilities are investing in less capital-intensive gas, wind, and solar power. – R.C.)
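A back-of-envelope sketch of what 5 GW per site implies, under the simplifying (and generous) assumption of continuous full-load operation; actual consumption would be lower:

# Rough scale check for the 5 GW figure, assuming round-the-clock full load.
gw_per_site = 5
hours_per_year = 8760
twh_per_site = gw_per_site * hours_per_year / 1000   # GWh -> TWh
for sites in (5, 7):
    print(f"{sites} sites: up to {sites * twh_per_site:.0f} TWh/year")
# One 5 GW site at full load is ~43.8 TWh/year, roughly double Microsoft's
# reported 24 TWh annual usage cited later in this thread.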
-
The elites => nuclear power bad while wind and solar good. => AI dangerous but enormous money to be made =>
realization it requires enormous power which cannot be generated by wind and solar => nuclear power good.
The elites => :-$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ :wink:
=> nuclear great! => nuclear investment through the roof.
just not in their back yards................
-
That was pretty good!
-
Expanding on ccp’s point, Jo Nova compares and contrasts Australia’s magical sustainable-power folly with the AI-nurturing companies that see the energy-requirement writing on the wall and respond with realistic planning rather than renewable fantasies:
First hint of energy squeeze and Big Tech drops the wind and solar purity, and launches into nuclear power
AI data centres eat grids for breakfast
By Jo Nova
All those sustainable dreams, gone pfft
Google, Oracle, and Microsoft were all raving fans of renewable energy, but all of them have given up trying to do it with wind and solar power. In the rush to feed the baby AI gargoyle, instead of lining the streets with wind turbines and battery packs, they're all suddenly buying, building and talking about nuclear power. For some reason, when running $100 billion data centres, no one seems to want to use random electricity and turn them on and off when the wind stops. Probably because without electricity AI is a dumb rock.
In a sense, AI is a form of energy. The guy with the biggest gigawatts has a head start, and the guy with unreliable generators isn’t in the race.
It’s all turned on a dime. It was only in May that Microsoft was making the “biggest ever renewable energy agreement” in order to power AI and be carbon neutral. Ten minutes later and it’s resurrecting the old Three Mile Island nuclear plant. Lucky Americans don’t blow up their old power plants.
Oracle is building the world's largest datacentre and wants to power it with three small modular reactors. Amazon Web Services has bought a data centre next to a nuclear plant, and is running job ads for a nuclear engineer. Recently, Alphabet CEO Sundar Pichai spoke about small modular reactors. The chief of OpenAI also happens to chair the boards of two nuclear start-ups.
The AI Boom Is Raising Hopes of a Nuclear Comeback
The AI boom has left technology companies scrambling for low-carbon sources of energy to power their data centers. The International Energy Agency estimates that electricity demand from AI, data centers, and crypto could more than double by 2026. Even its lowball estimates say that the added demand will be equivalent to all the electricity used in Sweden or—in the high-usage case—Germany.
Australia uses ten times as much electricity as Microsoft, but is still fantasizing about reaching 82% renewable by 2030 with no nuclear power "because it will cost too much and take too long". Microsoft uses 24 TWh of energy a year and employs 220,000 people, and knows it needs a nuclear plant to be competitive (and to reach its, albeit frivolous, weather-changing ideals). Australia uses 274 TWh of electricity and employs 14 million people, but is going to aim for frivolous climate witchery anyway, and do it the double-hard way.
Who needs to be competitive, right?
Pierre Gosselin discusses how Germany risks being left behind because it has switched off all its nuclear plants. At least it has some power lines to France. Australia has no nukes, not much hydro, no mountains to spare, is the driest inhabited continent on Earth, and it has no powerlines to anywhere. We are the crash test dummy. Soon most big companies will have more reliable power than we do.
https://joannenova.com.au/2024/10/first-energy-squeeze-and-big-tech-drops-the-wind-and-solar-purity-and-launches-into-nuclear-power/?utm_source=rss&utm_medium=rss&utm_campaign=first-energy-squeeze-and-big-tech-drops-the-wind-and-solar-purity-and-launches-into-nuclear-power
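A quick sanity check of the figures Nova cites, as a sketch; all inputs are taken from the excerpt above and are not independently verified:

# Sanity check of the figures quoted in the Jo Nova excerpt above.
# All inputs come from that excerpt and are not independently verified.

MICROSOFT_TWH, MICROSOFT_EMPLOYEES = 24.0, 220_000
AUSTRALIA_TWH, AUSTRALIA_WORKERS = 274.0, 14_000_000

ratio = AUSTRALIA_TWH / MICROSOFT_TWH                       # ~11.4x, i.e. "ten times"
ms_per_head = MICROSOFT_TWH * 1e6 / MICROSOFT_EMPLOYEES     # MWh per employee, ~109
au_per_head = AUSTRALIA_TWH * 1e6 / AUSTRALIA_WORKERS       # MWh per worker, ~20

print(f"Australia / Microsoft electricity use: {ratio:.1f}x")
print(f"Microsoft: ~{ms_per_head:.0f} MWh per employee per year")
print(f"Australia: ~{au_per_head:.0f} MWh per employed person per year")

Taken at face value, the quoted numbers do support the "ten times" claim, and they imply Microsoft uses roughly five times more electricity per head than the Australian workforce does.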
-
Someone asked me to have some of my friends assess this:
https://dodsioo.defense.gov/Portals/46/Documents/DoDD_5240.01_DoD_Intel_Intel-Related_Assist_to_LE_and_Civil_Authorities_27_Sep_24.pdf?ver=5mL1ROn5buFBpQmcUsG3ig%3D%3D
One of them, with deep experience in these things, asked CHAT some really well-aimed questions, questions of the sort that I would not have come up with on my own.
Reading the quality of CHAT's response, and seeing how it cut through what my Neo-Luddite mind would not, has forced me to reconsider my thinking.
==================
this policy
ChatGPT said:
The document you provided, DoD Directive 5240.01 (effective September 27, 2024), covers DoD intelligence and intelligence-related activities and their assistance to law enforcement and civil authorities. Here are some insights into potential individual privacy concerns:
Collection of U.S. Persons' Information: A key concern arises from the provision allowing Defense Intelligence Components to collect, retain, and disseminate information about U.S. persons (citizens, permanent residents, etc.) under certain circumstances. Though the directive mandates that this information must be collected using the "least intrusive means feasible" and should protect civil liberties (Section 1.2.n), incidental collection of U.S. persons' data could still occur. Any collected data related to violations of Federal law must be shared with law enforcement officials, which poses potential privacy risks if such sharing isn't tightly controlled or monitored.
Assistance to Law Enforcement: Section 3.2 allows for intelligence support to law enforcement agencies, particularly during emergencies where lives are in danger. This could involve sharing data, expertise, or equipment. While this is limited to permissible circumstances, it still creates privacy risks, particularly in cases of incidental data collection about U.S. persons. The regulation states that any incidentally collected U.S. persons' information must comply with Executive Order 12333 and other laws, but it also leaves room for judgment by intelligence heads and legal advisors, potentially creating gray areas.
Exigent Circumstances: Under Section 3.5, in exigent circumstances (where lives are at risk), intelligence components can provide assistance for up to 72 hours without approval from higher authorities. This temporary flexibility increases the risk of privacy violations, as it may allow broad intelligence activities without immediate oversight. While there's a requirement to notify the Under Secretary of Defense after the fact, the rapid nature of such actions could lead to privacy intrusions before effective oversight is in place.
Contracts and Cover Plans: Section 1.2.g mentions that Defense Intelligence Components can enter contracts with private entities, including individuals, without disclosing their sponsorship for certain activities. This raises concerns about transparency and potential surveillance or covert operations involving private citizens or institutions, which could impact personal privacy rights without their knowledge or consent.
Information Sharing Across Agencies: The document emphasizes intelligence sharing with other federal departments, agencies, and law enforcement (Section 1.2.o). This sharing extends to counterterrorism and law enforcement efforts but carries risks of overreach, where data collected for one purpose could be repurposed for others without proper safeguards or privacy protections.
In summary, while the directive contains provisions intended to protect privacy and civil liberties, the broad scope of permissible intelligence activities, including incidental collection of U.S. persons' data, cooperation with law enforcement, and flexibility in emergencies, presents significant individual privacy concerns. The potential for overreach or misuse, especially in exigent situations, necessitates robust oversight and clear limits to safeguard individual rights effectively.
You said:
What is the primary oversight authority for this policy?
ChatGPT said:
The primary oversight authority for DoD Directive 5240.01 is the Assistant to the Secretary of Defense for Privacy, Civil Liberties, and Transparency (ATSD(PCLT)). This office is responsible for coordinating with other key entities, including the Under Secretary of Defense for Intelligence and Security (USD(I&S)), the General Counsel of the Department of Defense (GC DoD), and the Inspector General of the Department of Defense, to ensure that intelligence activities comply with legal and policy requirements and respect privacy and civil liberties.
The USD(I&S) also plays a significant role in overseeing intelligence activities and ensuring that they are conducted in accordance with the directive. They are responsible for providing guidance, approving requests for intelligence assistance to law enforcement, and ensuring proper coordination with federal, state, and local authorities. Additionally, General Counsel of the DoD provides legal oversight and guidance, particularly in cases involving legal questions or matters of first impression.
These entities together ensure that the intelligence-related activities conducted under the directive are consistent with laws, regulations, and protections for civil liberties.
You said:
I have a concern that this policy further militarizes American civilian police forces. Was there a debate on this policy prior to its execution that determined a need for this level of federal-to-local cooperation?
ChatGPT said:
Your concern about the potential for this policy to contribute to the militarization of American civilian police forces is valid, as DoD Directive 5240.01 does provide for significant cooperation between defense intelligence components and law enforcement at various levels. This kind of federal-to-local cooperation, particularly the sharing of intelligence and resources with state and local law enforcement, has been debated in the broader context of U.S. security policy over the past few decades.
There are a few points worth noting regarding the debate and the circumstances that may have led to the implementation of this policy:
Historical Precedent: Cooperation between federal intelligence agencies and local law enforcement has its roots in the post-9/11 environment, when the federal government sought to increase coordination with local agencies in the fight against terrorism. Policies like Executive Order 12333 and Executive Order 13388 (cited in this directive) laid the foundation for intelligence sharing with law enforcement. This cooperation was seen as necessary to address threats like terrorism, cybercrime, and transnational organized crime that transcend local or state jurisdictions.
Debate on Militarization: There has been ongoing debate about the militarization of local police, particularly after events such as the protests in Ferguson, Missouri, in 2014. Critics argue that federal programs like the 1033 Program, which allows the transfer of military equipment to local police forces, contribute to the erosion of the traditional boundaries between the military and civilian law enforcement. This DoD directive could be seen as another step in that direction, given its provisions for intelligence sharing, technical support, and cooperation with local authorities.
Exigent Circumstances and Lethality: The directive outlines situations where lives are at risk or where law enforcement agencies require technical expertise, equipment, or intelligence from the DoD. In particular, the use of unmanned systems, potentially lethal assets, and other military-grade technologies (such as drones) is subject to approval by the Secretary of Defense. However, concerns arise that such cooperation could normalize the use of military tools and tactics in domestic law enforcement.
Need for Federal-to-Local Cooperation: The directive justifies federal-to-local cooperation on the grounds of national security, counterterrorism, and addressing serious threats that may exceed the capacity of local law enforcement. Proponents of this policy likely argued that local law enforcement agencies do not have the same level of expertise or resources in dealing with complex threats like cyberattacks or international terrorism. The cooperation outlined in the directive can be seen as a way to bridge that gap, but it may also extend the scope of military involvement in domestic affairs.
Oversight Mechanisms: There are oversight mechanisms built into the policy, particularly regarding privacy and civil liberties, to prevent overreach. The directive specifically requires the least intrusive means for intelligence collection involving U.S. persons, and any assistance must comply with existing laws like the Posse Comitatus Act, which limits direct military involvement in law enforcement.
In terms of a public debate or open discussion specifically on DoD Directive 5240.01 before its implementation, such policies typically follow a process of internal government review rather than a broad public or legislative debate. Given the sensitive nature of intelligence activities and national security, discussions leading up to the execution of this directive would have likely occurred within closed circles in the Department of Defense, National Security Council, and perhaps certain Congressional committees with oversight roles.
In summary, while there may not have been a public debate on this specific directive, the broader issue of federal-to-local law enforcement cooperation, militarization of the police, and intelligence sharing has been a subject of significant national discourse, particularly following high-profile events and incidents where military-grade equipment or tactics were used by civilian police forces. The policy likely stems from ongoing efforts to improve national security and law enforcement capabilities but does raise important questions about the balance between security and the militarization of civilian institutions.
You said:
It sounds as though this directive is primarily aimed at combating terrorism - is that correct?
ChatGPT said:
Yes, DoD Directive 5240.01 does appear to be primarily focused on enhancing the U.S. government's ability to combat terrorism and other significant threats such as foreign intelligence activities, international narcotics operations, and clandestine activities by foreign powers. The directive facilitates cooperation between the Department of Defense (DoD) intelligence components and various law enforcement agencies at the federal, state, and local levels to address these types of high-priority security threats.
Here are a few key points that suggest the directive is aimed at counterterrorism efforts and related security concerns:
Intelligence Sharing and Counterterrorism: The directive references Executive Order 13388, which was issued to strengthen the sharing of terrorism-related information between federal and local entities. This executive order, cited in DoD Directive 5240.01, is part of the broader effort to ensure that intelligence gathered by federal agencies can be used effectively by law enforcement agencies to prevent and respond to terrorist threats.
National and International Security Threats: Section 3.2 of the directive outlines permissible assistance to law enforcement agencies in areas like investigating or preventing clandestine intelligence activities by foreign powers and international terrorism. This indicates that much of the intelligence support and cooperation envisaged under this directive is aligned with addressing both domestic and international terrorism threats.
Specialized Equipment and Expertise: The directive allows for the provision of specialized equipment, technical knowledge, and expert personnel to law enforcement agencies, especially in cases where lives are in danger. This assistance can be crucial in responding to terrorist attacks or imminent threats, where local law enforcement may lack the necessary resources or expertise to handle the situation effectively.
Cooperation with Law Enforcement on Terrorism-Related Matters: The directive allows for broad cooperation with law enforcement, especially when dealing with foreign intelligence services, international narcotics trafficking, or terrorist organizations. The cooperation outlined is intended to help law enforcement agencies address sophisticated, organized, and dangerous threats that often require specialized intelligence and military resources.
While the directive is indeed aimed at combating terrorism, its scope extends beyond that to include any significant national security threats, such as clandestine operations by foreign powers and serious crimes like international drug trafficking. Thus, while terrorism is a key focus, the directive covers a broader range of security and intelligence-related activities to support law enforcement agencies in addressing various threats to U.S. security.
You said:
Are there limits in place to ensure the policy is not used against US Citizens?
ChatGPT said:
Yes, DoD Directive 5240.01 includes specific limitations and safeguards to ensure that the policy is not used improperly against U.S. citizens. These protections are grounded in both federal law and executive orders, with special emphasis on the constitutional rights of U.S. persons (which includes U.S. citizens, permanent residents, and other protected individuals). Here are the key safeguards and limits designed to protect U.S. citizens:
1. Protection of U.S. Persons' Constitutional Rights:
Section 1.2.a of the directive emphasizes that DoD intelligence and intelligence-related activities must prioritize the protection of the constitutional rights and privacy of U.S. persons. This includes adherence to applicable laws, executive orders, and DoD policies that regulate the conduct of intelligence activities, especially when those activities involve U.S. persons.
2. Restrictions on Intelligence Collection on U.S. Persons:
The directive strictly limits the collection, retention, and dissemination of information about U.S. persons, specifying that such activities must follow the procedures outlined in DoD Manual 5240.01, which governs the collection of intelligence on U.S. persons.
Section 1.2.n states that any collection of information about U.S. persons must use the "least intrusive means feasible" and must fully protect the legal rights, civil liberties, and privacy rights guaranteed by federal law. This ensures that intelligence activities do not overstep legal boundaries or violate the privacy of U.S. citizens.
3. Use of Intelligence-Related Tradecraft:
The directive also specifies that any intelligence activities, even if conducted by a non-intelligence DoD component, must adhere to the rules governing intelligence tradecraft and collection when dealing with U.S. persons. Intelligence-related tradecraft must follow the applicable DoD and Intelligence Community (IC) policies to avoid unnecessary or overly intrusive collection efforts (Section 1.2.f).
4. Limits on Assistance to Law Enforcement:
Section 3.2.c specifically prohibits Defense Intelligence Component personnel from collecting, retaining, or disseminating U.S. persons' information during intelligence assistance to law enforcement agencies, except when explicitly authorized under DoD Manual 5240.01. This means that any intelligence support provided to law enforcement agencies must strictly comply with established rules for protecting the privacy of U.S. citizens.
Any incidental collection of U.S. persons’ information that is reasonably believed to indicate a violation of federal law must be shared with civilian law enforcement officials, but only in accordance with Executive Order 12333 and other relevant laws, which regulate such intelligence activities.
5. Oversight and Legal Review:
The Assistant to the Secretary of Defense for Privacy, Civil Liberties, and Transparency is responsible for ensuring that intelligence activities comply with privacy and civil liberties protections (Section 2.4). This office works in conjunction with the General Counsel of the DoD and other oversight bodies to ensure proper legal oversight of intelligence activities that involve U.S. persons.
Any questionable intelligence activities, as well as significant or highly sensitive matters, must be reported in accordance with DoD Directive 5148.13, which establishes procedures for reporting and addressing potential violations of law or policy, including those that impact U.S. persons.
6. Exigent Circumstances Limitations:
While the directive allows for temporary, emergency assistance in exigent circumstances where lives are in imminent danger (Section 3.5), this assistance is limited to a 72-hour period unless further approval is obtained from higher authorities. This ensures that emergency measures do not lead to ongoing intelligence activities targeting U.S. persons without proper authorization and oversight.
7. Prohibition on Covert Activities Against U.S. Persons:
The directive explicitly prohibits Defense Intelligence Components from conducting covert activities within the United States against U.S. persons unless approved by the President and directed by the Secretary of Defense under specific, extraordinary circumstances, such as a declaration of war (Section 1.2.k). This is a critical safeguard to ensure that the DoD does not misuse intelligence powers for covert surveillance or actions against American citizens without the highest levels of legal and executive oversight.
Conclusion:
DoD Directive 5240.01 includes several key limitations and oversight mechanisms designed to prevent misuse of the policy against U.S. citizens. These protections are rooted in federal law, executive orders, and DoD policies, which prioritize the constitutional rights and privacy of U.S. persons. While the directive allows for significant cooperation between DoD intelligence components and law enforcement, especially in national security and counterterrorism contexts, it maintains strict safeguards to ensure that intelligence activities are conducted within legal bounds and do not infringe upon the rights of U.S. citizens.
You said:
Does the policy recognize any limitations or authorities on who may designate an activity as "foreign intelligence" or "clandestine activities" or other category covered under the directive?
ChatGPT said:
Yes, DoD Directive 5240.01 recognizes specific limitations and authorities on who may designate an activity as “foreign intelligence,” “clandestine activities,” or other categories covered under the directive. The designation of activities under these categories is governed by established protocols and is subject to oversight by authorized officials within the Department of Defense (DoD) and the Intelligence Community (IC). Below are the key provisions related to these designations:
1. Authority to Designate Foreign Intelligence or Clandestine Activities:
Foreign Intelligence: According to the directive, "foreign intelligence" is defined by Executive Order 12333, which governs U.S. intelligence activities. The directive specifies that intelligence activities, including foreign intelligence operations, must be conducted by Defense Intelligence Components or DoD Components that are specifically authorized to do so by the Secretary of Defense, the Under Secretary of Defense for Intelligence and Security (USD(I&S)), or the head of a Defense Intelligence Component. This authority is explicitly restricted to those components and officials who are charged with conducting intelligence missions.
Clandestine Activities: Clandestine activities, as defined by Intelligence Community Directive 304, must also be conducted under the authority of the Secretary of Defense, the USD(I&S), or a Defense Intelligence Component head. The directive emphasizes that only personnel authorized by these officials may engage in clandestine activities, and these activities must adhere to strict operational guidelines and oversight mechanisms.
2. Restrictions on Designating Intelligence Activities:
Defense Intelligence Components Only: The directive clearly limits the conduct of intelligence and intelligence-related activities to Defense Intelligence Components or DoD Components specifically authorized to perform such functions (Section 1.2.d). This prevents unauthorized entities within the DoD from carrying out intelligence operations, including foreign intelligence or clandestine activities, unless they have the express approval of the appropriate authority.
Approval Considerations for Intelligence Operations: Section 3.4 of the directive lays out considerations for the approval of intelligence activities, including risk, lethality, cost, appropriateness, and readiness. These considerations must be evaluated by authorized officials before an activity can be designated as foreign intelligence or clandestine. This ensures that there is a high level of scrutiny before any operation is approved, particularly when it involves intelligence gathering.
3. Oversight and Legal Review:
General Counsel and Legal Office Oversight: The General Counsel of the Department of Defense (GC DoD) plays a significant role in providing legal oversight, particularly in reviewing and providing legal guidance on intelligence activities (Section 2.2). The GC DoD, or the legal office advising the relevant Defense Intelligence Component, must approve any decisions regarding intelligence activities that involve novel or contested legal questions. This ensures that there are legal limits on what can be classified as foreign intelligence or clandestine activity, preventing overreach or misclassification.
Consultation with Servicing Legal Offices: Any Defense Intelligence Component head or delegee must consult their servicing legal office to determine whether an activity qualifies as intelligence-related and whether it falls under the appropriate legal authorities (Section 2.5.b). This step further ensures that decisions regarding the designation of activities as foreign intelligence or clandestine are carefully reviewed and consistent with existing laws and policies.
4. Categories Covered by the Directive:
Intelligence-Related Activities: The directive defines “intelligence-related activities” as those activities not conducted pursuant to Executive Order 12333, but which use intelligence funding or involve intelligence-like procedures or technologies. The document makes clear that just because a DoD activity uses intelligence-related procedures, it does not automatically make it an intelligence activity unless it has been designated as such under proper authority (Glossary, G.2).
Clandestine vs. Covert Activities: The directive also differentiates between clandestine and covert activities. While it provides guidelines for clandestine intelligence activities, covert activities (which involve secret U.S. government sponsorship) are more tightly controlled and can only be conducted in times of war or under Presidential approval as directed by the Secretary of Defense (Section 1.2.k). This means that clandestine activities must be conducted within established intelligence protocols, whereas covert activities require higher levels of authorization.
5. Exigent Circumstances:
The directive allows Defense Intelligence Component heads to provide intelligence assistance in exigent circumstances (when a person's life or physical safety is in imminent danger) for up to 72 hours without prior approval from the USD(I&S) or Secretary of Defense (Section 3.5). However, this is a temporary measure, and such activities must be reported and evaluated immediately after the fact. This ensures that even in emergency situations, there are limits on who can authorize such activities, and they must be closely monitored and approved retroactively.
Conclusion:
The directive places strict limitations on who may designate activities as foreign intelligence, clandestine, or intelligence-related. These designations must be made by authorized officials such as the Secretary of Defense, the USD(I&S), or heads of Defense Intelligence Components, and are subject to legal oversight and review. The clear restrictions, combined with consultation requirements and oversight mechanisms, help ensure that intelligence and clandestine activities are conducted within legal bounds and are not misused or misclassified. This framework prevents unauthorized activities and helps protect against potential abuses.
You said:
I would like to talk about the "exigent circumstances" condition - does this mean that if an intelligence officer deems that any person's life is in danger, he is authorized to make an on-the-spot decision to cooperate openly with local-level or State-level law enforcement agencies for 72 hours without any other approvals?
ChatGPT said:
Yes, under the "exigent circumstances" condition in DoD Directive 5240.01, a Defense Intelligence Component head (or an authorized designee) is permitted to make on-the-spot decisions to provide intelligence assistance to local or state law enforcement for up to 72 hours if there is a reasonable belief that a person’s life or physical safety is in imminent danger. This temporary authorization allows intelligence officers to bypass the usual approval channels in life-threatening situations, but it comes with several important restrictions and requirements:
Immediate Action for Life-Threatening Situations:
The directive allows the Defense Intelligence Component head to authorize intelligence assistance without obtaining prior approval from the Under Secretary of Defense for Intelligence and Security (USD(I&S)) or the Secretary of Defense if there is an imminent danger to life or physical safety. This flexibility is designed to allow immediate action in emergency situations where waiting for approval could result in loss of life.
Limited Duration (72 Hours):
The assistance provided under these circumstances can only last for up to 72 hours. Beyond this period, the usual approval process must be followed, requiring the involvement of higher authorities such as the USD(I&S) or the Secretary of Defense (Section 3.5.a).
Reporting Requirements:
Even though immediate action can be taken, the directive requires the intelligence officer or Defense Intelligence Component head to report the details of the assistance provided to the USD(I&S) or the Secretary of Defense as soon as possible (Section 3.5.a). This reporting must happen immediately after the decision is made, ensuring that oversight mechanisms remain in place.
No Automatic Reimbursement Waiver:
The directive also makes clear that the intelligence component should not delay or deny assistance solely because the requesting law enforcement agency cannot commit to reimbursing the DoD at the time of the emergency (Section 3.5.b). However, reimbursement is expected unless waived under specific legal provisions.
Key Limitations:
Life-Threatening Situations Only: The exigent circumstances clause only applies in situations where someone's life or physical safety is at immediate risk. It cannot be invoked for less severe circumstances or for general law enforcement cooperation.
Short-Term Authorization: The 72-hour limit ensures that this condition is not used for ongoing or indefinite cooperation with local law enforcement without the proper approvals. Once the crisis is over, the standard processes must be followed.
Summary:
In summary, under exigent circumstances, an intelligence officer has temporary authority to cooperate with local or state law enforcement agencies if there is an imminent threat to life. This decision can be made without prior approval for up to 72 hours, but it must be reported immediately to higher authorities and is subject to oversight. This provision is tightly focused on life-threatening situations and comes with built-in time limits and accountability measures.
You said:
Regarding cases in which the intelligence officer's judgment later proves to have been poor and the conditions are judged not to have been met, what are the penalties for improper application of this policy?
ChatGPT said:
DoD Directive 5240.01 does not explicitly list penalties for an intelligence officer's poor judgment or improper application of the "exigent circumstances" provision within the directive itself. However, there are general frameworks in place for addressing violations or improper conduct within the U.S. Department of Defense and intelligence community, particularly when it comes to misuse of authority or improper intelligence activities. Here's how improper application of this policy would likely be handled:
1. Review and Oversight Mechanisms:
Any assistance provided under exigent circumstances must be reported immediately to the Under Secretary of Defense for Intelligence and Security (USD(I&S)) or the Secretary of Defense (Section 3.5.a). This reporting ensures that higher authorities can quickly review the officer’s judgment. If the assistance is later judged to be unjustified, it is likely to be investigated through established oversight mechanisms.
The Assistant to the Secretary of Defense for Privacy, Civil Liberties, and Transparency (ATSD(PCLT)) and the General Counsel of the Department of Defense (GC DoD) have roles in ensuring that intelligence activities respect privacy rights and legal constraints (Section 2.2). These offices can initiate investigations into questionable or improper activities.
2. Accountability through the Intelligence Oversight Framework:
Improper or questionable intelligence activities, including poor judgment in applying the exigent circumstances provision, must be reported in accordance with DoD Directive 5148.13, which governs intelligence oversight. This directive requires the reporting of any activities that may violate legal or policy standards, including misuse of intelligence powers.
The Inspector General of the Department of Defense and other internal watchdog entities would likely investigate the misuse of the exigent circumstances provision. If found to be an improper application, the officer could face internal administrative penalties or legal repercussions, depending on the severity of the violation.
3. Potential Penalties:
Penalties for improper application of this policy would depend on the circumstances, severity of the violation, and whether the officer acted negligently or in good faith. The potential consequences could include:
Administrative Action: This could include reprimands, loss of security clearances, suspension, or even termination. Administrative penalties are common for intelligence officers who violate internal policies, particularly when their actions result in breaches of protocol or legal violations.
Legal Penalties: In cases where the improper use of the exigent circumstances provision results in a violation of laws governing intelligence activities or civil liberties (e.g., the unauthorized collection of information about U.S. persons), the officer could face legal penalties. This could involve criminal charges under U.S. law if the actions constituted willful violations of privacy laws or constitutional protections.
Civil Liabilities: If the improper application of the policy results in harm to an individual’s privacy rights, the affected person could potentially file a civil lawsuit against the government. In such cases, the officer’s actions would be scrutinized, and legal liability could fall on the officer, the chain of command, or the DoD, depending on the findings.
4. "Questionable Activities" Reporting Requirement:
The directive also requires that any “questionable intelligence activities” or “significant or highly sensitive matters” be reported in accordance with DoD Directive 5148.13. If the officer’s decision is found to have violated intelligence procedures, it would trigger reporting to oversight bodies, which could result in disciplinary action or a formal investigation.
5. Good Faith Defense:
If the intelligence officer acted in good faith based on the information available at the time but was later found to have misjudged the situation, the penalties might be less severe. Intelligence and military personnel often receive leeway in emergency situations where they have to make quick decisions. However, this would depend on whether their actions were deemed reasonable given the circumstances, and whether the officer followed all other protocols (e.g., reporting promptly, not exceeding the 72-hour limit).
Conclusion:
While DoD Directive 5240.01 does not explicitly outline penalties for improper application of the exigent circumstances condition, violations would likely trigger oversight and investigation from multiple bodies within the DoD. If the intelligence officer is found to have acted improperly, penalties could range from administrative actions (like reprimands or suspension) to legal penalties or civil liabilities, depending on the severity of the violation. The oversight mechanisms and reporting requirements within the directive aim to ensure accountability for decisions made under this provision.
You said:
I can foresee a major issue with the policy on these grounds alone. Given that most intelligence activity is highly classified and the identities of most intelligence operatives are protected, a US citizen may have no way to know they've been improperly surveilled or targeted, and would certainly not have access to the information required to pursue any sort of remediation. Additionally, this leaves the agencies to police themselves without any input or even awareness from the citizens they may have harmed or whose rights they may have violated. Has this problem been addressed within the policy or was it discussed in the debates preceding the policy?
ChatGPT said:
You raise a critical concern regarding the potential for DoD Directive 5240.01 to allow improper surveillance or targeting of U.S. citizens without their knowledge, which could severely limit any ability for remediation. This issue—where intelligence activities are classified, and individuals may have no awareness that they’ve been surveilled or targeted—touches on significant privacy, civil liberties, and oversight challenges. Here's an analysis of whether and how this issue has been addressed within the policy and its surrounding debates:
1. Self-Policing and Lack of External Awareness:
As you noted, DoD Directive 5240.01 includes extensive internal oversight mechanisms, but these are largely within the intelligence community and the Department of Defense. Oversight is primarily handled by internal bodies such as the Assistant to the Secretary of Defense for Privacy, Civil Liberties, and Transparency (ATSD(PCLT)), the General Counsel of the Department of Defense (GC DoD), and the Inspector General of the Department of Defense. These are internal agencies tasked with ensuring compliance and investigating questionable activities, but they are still part of the same broader structure.
From an individual citizen’s perspective, it’s true that many intelligence activities are classified, and thus the targets of surveillance or intelligence operations—whether intentionally or incidentally targeted—may never be aware of the collection or misuse of their data. This creates an imbalance where citizens have no means to seek redress or even know that their rights have been violated.
2. Limited External Oversight and Judicial Recourse:
While the directive emphasizes protection of civil liberties, privacy, and adherence to U.S. law, it does not provide explicit provisions for external oversight by an independent body that would represent the interests of private citizens. For example, there is no mention of judicial oversight, such as requiring court warrants or providing mechanisms for third-party review when intelligence assistance is provided to law enforcement.
In contrast, for certain types of surveillance activities (such as those conducted under the Foreign Intelligence Surveillance Act (FISA)), judicial oversight and warrant processes exist. However, for intelligence activities under DoD Directive 5240.01, the primary oversight remains internal to the DoD and the Intelligence Community, without direct external review mechanisms for the general public.
3. National Security vs. Civil Liberties:
The tension between national security interests and civil liberties is a long-standing issue in intelligence policy. After 9/11, debates intensified around how much power should be granted to intelligence and law enforcement agencies to prevent terrorism, often at the expense of privacy. Programs such as the Patriot Act and FISA surveillance were controversial for similar reasons—granting expansive powers to intelligence agencies with relatively limited external accountability.
In the case of DoD Directive 5240.01, the emphasis is clearly on enabling quick, decisive action in scenarios involving national security threats or imminent danger to life. However, this priority comes at the cost of individual transparency and public accountability.
4. Discussion of Privacy and Civil Liberties in the Policy:
Within the policy itself, DoD Directive 5240.01 acknowledges the need to protect privacy and civil liberties, stating that intelligence activities must be the "least intrusive means feasible" and that they must comply with existing laws (Section 1.2.n). However, this safeguard primarily applies to how intelligence is collected and retained within the government, rather than providing a mechanism for individual citizens to be notified of or challenge any potential violations of their privacy rights.
While the policy emphasizes internal reporting of “questionable activities” and includes privacy and civil liberties oversight offices, it does not address how individuals who are impacted by improper intelligence activities can seek recourse.
5. Preceding Debates and Lack of Public Discourse:
There has been limited public debate or transparency about the specific provisions of DoD Directive 5240.01 prior to its execution. Intelligence-related policies are typically discussed and reviewed within classified government settings, and any debate about these powers often focuses on national security needs, rather than individual privacy protections.
The issue of how intelligence agencies police themselves, without the knowledge or involvement of the citizens they may surveil, has been a persistent concern in broader civil liberties discussions. It’s likely that such concerns were considered internally, but national security often takes precedence over individual transparency in these debates. Moreover, due to the classified nature of these discussions, there is often little public awareness or opportunity for input until policies are already in place.
6. Potential for Future Civil Liberties Challenges:
As with previous intelligence policies, it is possible that civil liberties groups, advocacy organizations, or even Congress could raise challenges to policies like DoD Directive 5240.01, particularly if evidence arises of improper use against U.S. citizens. Past policies have faced legal challenges on similar grounds, such as the warrantless wiretapping program during the Bush administration.
In the absence of built-in mechanisms within the directive itself for citizens to seek remediation, it would likely take external legal challenges or legislative action to address these potential gaps in transparency and accountability.
Conclusion:
While DoD Directive 5240.01 provides strong internal oversight for intelligence activities, it does not offer external transparency or a clear mechanism for individuals to know if they have been improperly surveilled or targeted. This lack of individual awareness and external input leaves the intelligence community largely policing itself, which can be problematic in cases of overreach or misuse. The directive emphasizes protection of privacy and civil liberties within its operations, but without independent review or a public-facing remediation process, citizens may have limited recourse if their rights are violated. There appears to be no significant discussion of this issue in the public domain or debates leading up to the policy’s execution, likely due to the classified nature of intelligence policies.
-
Have not read this yet, placing it here for my future reference
https://darkfutura.substack.com/p/the-road-to-ai-utopia-paved-with?publication_id=1463861&post_id=149870966&isFreemail=true&r=379fkp&triedRedirect=true
-
https://cyberguy.com/ai/researcher-hacked-chatgpt-memory-expose-security-flaw/
-
Kinda hope this need is used to (safely) remove regs that prevent the manufacture of these and other critical components in the US:
Trump's AI Choice: Compete or Control?
The critical risk excessive tariffs and export controls pose to America's AI edge
MATTHEW MITTELSTEADT
NOV 08, 2024
The moment Donald Trump was declared president-elect, questions about the future of AI policy immediately began to swirl. Armed with majorities in the Senate (and probably the House) and considerable executive powers, Trump will have significant leeway to pursue his policy agenda. For the field of AI, his choices could prove profound. When a technology is just budding, the chances of future “butterfly effects” from poor policy choices are particularly strong. The reverberations from Trump’s election will be felt for decades, not just for the next four years. We must get this right. In my view, trade and geopolitical competition are by far the policy domains most critical to long-term American AI success and should be kept in sharp focus by the new administration.
Begin with trade. Unburdened commerce and easy foreign market access will be the key to driving forward growth and resulting research investments. Silicon Valley's AI innovation is built on the legacy and sustainable revenues of past international market dominance. After years of exporting our operating systems, applications, cloud technology and other innovations, we have amassed the research capital needed to do big things. This pattern must continue. To sustainably lead in AI, we must lead in AI trade.
In the competition between the United States and its geopolitical adversaries, our AI prowess has the potential to be a powerful differentiator. Intelligence analysis driven by LLMs, real-time cyber-attack monitoring, automated supply chains, and AI-equipped drones are the key to an effective national defense policy. If our nation’s liberal democratic model is to succeed against its illiberal, antidemocratic or authoritarian competitors, innovation mustn’t fall behind.
As the administration maps out its trade and foreign policy, what strategy should guide decisions? What policy choices should be made?
Focus on Competition
First, the new administration must embrace an emphasis on competition, not control. Meeting the twin goals of building a booming AI sector and besting China depends not on how well we can protect sensitive intellectual property, nor on the size of our public investments. It's about whether we can cheaply and effectively sell AI tech abroad.
If our AI exports are strong, the resulting revenues will drive further innovation and help secure a durable economic edge on our competitors. Likewise, leadership in AI exports will minimize the security negatives of Chinese tech influence: If other nations are using our tech, they won’t be using China’s.
To put this competition into policy action, the administration must carefully consider its two most impactful trade levers: tariffs and export controls.
Tariffs
On the campaign trail, candidate Trump was a vigorous advocate of increased general tariffs ranging from 10 percent to several hundred percent or more. While some assume approval from Congress might be required for any tariff increases, this view is misguided; existing law, in fact, enables swift presidential action on tariffs.
Tariffs could absolutely make or break the United States’ ability to invest in, make use of, and compete with artificial intelligence. Contrary to common misconceptions, a tariff is a cost imposed not on foreign trade partners but on American companies and consumers. For the AI industry, a general tariff means cost increases, corporate budget contraction, and a potential tech recession.
In a September 2024 report, UBS, an investment bank, predicted that both tech hardware and semiconductors would be among the top four sectors hardest hit by a general tariff. Their analysis is spot on. Many of the hardware components that make AI and digital tech possible rely on imported materials not found or manufactured in the United States. Neither arsenic nor gallium arsenide, used to manufacture a range of chip components, has been produced in the United States since 1985. Legally, arsenic-derived compounds are a hazardous material, and their manufacture is thus restricted under the Clean Air Act. Cobalt, meanwhile, is produced by only one mine in the U.S. (80 percent of all cobalt is produced in China). While general tariffs carry the well-meaning intent of catalyzing and supporting domestic manufacturing, in many critical instances involving minerals that isn't possible, due to existing regulations and limited supply. Many key materials for AI manufacture must be imported, and tariffs on those imports will simply act as a sustained squeeze on the tech sector's profit margins.
A potential result of tariffs, according to the Information Technology & Innovation Foundation, is that:
“[l]owering profit margins may force some businesses with less-flexible budgets—especially small businesses—to lay off workers. But even the companies that do not have to cut jobs would have less funding to invest in building new data centers or to conduct the research and development needed to stay ahead of international competitors.”
The key takeaway: if tariffs lower profits, companies will not invest in the R&D needed to fund competitive, cutting-edge defense technology. Today most AI research investment comes from the private sector, which has a proven ability to yield non-stop innovation and breakthroughs. If that capacity is constrained, our innovation will lag, if not cease altogether.
For China, the imposition of new tariffs represents a generational leapfrog opportunity. Excessive tariffs could enable Chinese firms to rapidly sweep international markets, dominating global tech, including AI, with superior, cheaper products. During the first Trump administration, China's rapid dominance in global 5G deployment posed grave security threats that proposed policy solutions have yet to mitigate. A similar outcome this time around is likely under broad, mistargeted tariffs.
While narrow tariffs would be better than the proposed general tariff, it's important to recognize that even with limited scope, tariffs could still place unintended burdens on the tech sector. According to an International Trade Commission report on existing aluminum, steel, and Chinese import tariffs, the impact on the tech sector has been a significant 4.1% price increase on semiconductor technology, and a small yet meaningful 0.8% price increase on the computer equipment and electronic product manufacturing sector. Even tailored measures can yield meaningful impact.
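To make those percentages concrete, here is a purely illustrative sketch: the 4.1% and 0.8% increases come from the ITC report cited above, while the hypothetical budget and its split are invented for illustration only:

# Illustrative only: applying the ITC-reported price increases to a
# hypothetical AI infrastructure budget. The budget figures are invented;
# only the percentage increases come from the report cited above.

budget = {                       # hypothetical annual capex, in dollars
    "semiconductors": 6_000_000_000,
    "computer_equipment": 4_000_000_000,
}
price_increase = {               # from the International Trade Commission report
    "semiconductors": 0.041,
    "computer_equipment": 0.008,
}

added_cost = sum(budget[k] * price_increase[k] for k in budget)
print(f"Added annual cost: ${added_cost / 1e6:,.0f} million")   # ~$278 million

Under those invented budget assumptions, even "small" single-digit percentage increases translate into hundreds of millions of dollars a year that cannot go to R&D or new data centers.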
The president-elect is certainly entering office with a mandate to impose new tariffs. If implemented poorly, however, U.S. chances for future economic and strategic geopolitical success could be in jeopardy. In the coming months, I urge a judicious, tailored, thoughtful approach. As Trump’s team designs new tariff policies, they should take a hint from my fellow Mercatus scholar Christine McDaniels’ general advice: be clear about what you want, make sure affected domestic industry has a plan to adapt to global markets, and team up with key trading partners.
Export Controls
A close policy cousin of tariffs is export controls. While not at the center of the Trump campaign agenda, export controls have been embraced by both parties as a powerful tool to contain China's tech ambitions. During the first Trump administration, narrow controls targeted Huawei Technologies, a Chinese tech conglomerate, to check its 5G and IT hardware ascendance. Having been used then, export controls are likely to shape this administration's efforts to contain China's AI progress.
Narrow, targeted export controls absolutely have their place in geopolitical competition. If a company is directly funneling American tech to the Chinese military, we should stop them. While no control will completely deny tech to a sufficiently motivated bad actor, the added costs inherent to subversion can slow and inhibit malign ambitions.
Still, success demands judicious application. Unfortunately, current AI export controls can hardly be considered narrow or targeted. Under the Biden administration, we have seen successive waves of AI-related hardware controls now limiting exports to an unwieldy list of 43 countries that includes even such allies as the United Arab Emirates. This group of countries represents 25% of the global economy.
With such substantial controls, competitive risks abound. According to Interconnect's Kevin Xu, the round of AI hardware controls implemented in 2023 resulted in questionable outcomes. Rather than deny Chinese companies certain critical tech, some of the measures simply prompted a one-to-one reshuffling of contracts from NVIDIA directly to China's Huawei. The result: rather than deny tech to China, we denied sales to NVIDIA.
This episode illustrates the risk of excessive export controls. By trying to contain China, we risk inadvertently containing our own competitive ambitions.
Once President Trump enters office, I recommend a reassessment of these existing controls. The White House should consult with industry, assess the competitive cost burdens, and attempt to empirically weigh those costs against known benefits. In cases where "the juice is worth the squeeze," keep the controls. Elsewhere, consider rolling restrictions back.
When it comes to possible new controls, I again urge a conservative approach. Export controls on consumer-grade or open-source AI software, as some in the House have suggested, should be avoided. "Intangible goods" such as software, that is, non-physical data, information, and algorithms, are too difficult to track and control. Software can be smuggled invisibly, copied infinitely, and shared effortlessly. Our trade perimeter is simply too porous for any such export regime to manage, and the massive industry burdens are not worth it. Meanwhile, additional hardware controls should only be implemented if there is a clear and present threat, a specific policy outcome in mind, detailed plans made, and enough resources and allies on hand to ensure success.
Conclusion
The best path forward to both geopolitical and economic success is competition, not control. Tariffs and export controls are truly awesome executive powers and therefore demand careful consideration and assessment. Trump is taking office at a critical technological moment. If his administration chooses a steady trade and geopolitical competition policy, the U.S. could seize the moment and secure economic, security and technological returns that will pay dividends for decades to come.
Let’s not squander this opportunity.
https://digitalspirits.substack.com/p/trumps-ai-choice-compete-or-control
-
https://www.malone.news/p/weekly-round-up-kissinger-speaks?utm_source=post-email-title&publication_id=583200&post_id=152022048&utm_campaign=email-post-title&isFreemail=true&r=z2120&triedRedirect=true&utm_medium=email
-
https://www.cnet.com/science/elon-musk-we-are-summoning-the-demon-with-artificial-intelligence/
-
Marc Andreessen said “this [Biden-Harris] administration freaked us out so much” because “it felt like they were trying to become way more like China” after meetings between tech leaders and the Biden administration earlier this year. Andreessen added that during a series of “the most alarming meetings I have ever been in” Biden officials said “there will only be two or three” artificial intelligence (AI) companies that will be “completely regulated, controlled by the government.” (Andreessen outlines the break between centrist liberal Silicon Valley leaders and the Democratic party, due to Democrats breaking the unspoken deal that they will support tech development with minimal interference. This explains the defection of major tech leaders to the Trump camp. – R.C.)
================
Also see
https://www.zerohedge.com/political/marc-andreessen-tells-joe-rogan-why-he-backed-trump
-
Senator Ted Cruz has called for an investigation into European influence on AI policy, after accusing the Biden Administration of collaborating with foreign governments to craft U.S. AI regulations. Cruz has since accused the Centre for the Governance of Artificial Intelligence (GovAI), a U.K.-based nonprofit, of failing to register as a foreign agent while engaging in political activities in the U.S. GovAI recently hosted an AI policy summit in San Francisco, which U.S. lawmakers and government officials attended.
-
(1) LAWMAKER TELLS FERC TO FAST TRACK AI DATA CENTER RULE: Rep. Jay Obernolte (R-CA), who co-chairs the Congressional AI Task Force, said the Federal Energy Regulatory Commission (FERC) should fast-track new rules to encourage the expansion of AI data centers.
Obernolte said the FERC should “clear the way” for co-location of AI data centers with power plants to ensure grid reliability.
Why It Matters: Obernolte highlights a key point in the fight over AI data centers: there is a clear national security interest in the U.S. winning the AI race against China. A working paper published by the National Bureau of Economic Research earlier this year argued that winning the AI race will confer a major first-mover advantage. China is building new power generation and developing new AI chips to gain an advantage over the U.S., which is currently facing power supply constraints. Incoming Trump officials have signaled they will reverse policies that have constrained power supply and will likely also boost U.S. AI development. – R.C.
-
Second post:
https://www.msn.com/en-us/news/technology/elon-musk-is-wowed-by-google-s-new-quantum-chip-which-it-claims-cracks-a-key-challenge-that-s-existed-for-almost-3-decades/ar-AA1vyKzR?ocid=msedgntp&pc=HCTS&cvid=a86b8d11dc414522d2e7f300f3f49cc4&ei=42
-
If the claims are true and Willow is the real deal that works as advertised, this would change the world and give the U.S. a gigantic lead over the CCP.
We would be able to throw out the internet as we know it.
See my previous posts on quantum computing.
I doubt this is practical or realistic any time soon, however.
If it is for real, I would buy Google on the dip after we see what Trump's antitrust enforcers are going to do.
-
(2) INCOMING TRUMP AI CZAR HAS TOUGH ROAD AHEAD: Bipartisan AI policy group Americans for Responsible Innovation VP Satya Thallam said U.S. leadership on artificial intelligence (AI) is at stake and incoming Trump administration “AI and crypto czar” David Sacks has a “tough job ahead of him.”
R Street Institute senior fellow Adam Thierer said Sacks will need strong support from the White House when Sacks “starts butting heads with powerful bureaucratic agencies and special interests” who want to constrain AI and cryptocurrency development.
Why It Matters: The same agencies that have de-banked more than 30 Silicon Valley executives since the beginning of the Biden administration and, according to Marc Andreessen, told AI developers that the government will have “total control” over AI, will likely resist Trump administration efforts to deregulate AI and cryptocurrencies. Sacks will have no formal legal power as a White House advisor, and will have to rely on Congressional Republicans and Trump cabinet secretaries to spur AI development and keep the U.S. ahead of China. – R.C.
-
Stanford Report:
Noting that recent advances in artificial intelligence and the existence of large-scale experimental data about human biology have reached a critical mass, a team of researchers from Stanford University, Genentech, and the Chan Zuckerberg Initiative says that science has an “unprecedented opportunity” to use artificial intelligence (AI) to create the world’s first virtual human cell. Such a cell would be able to represent and simulate the precise behavior of human biomolecules, cells, and, eventually, tissues and organs.
“Modeling human cells can be considered the holy grail of biology,” said Emma Lundberg, associate professor of bioengineering and of pathology in the schools of Engineering and Medicine at Stanford and a senior author of a new article in the journal Cell proposing a concerted, global effort to create the world’s first AI virtual cell. “AI offers the ability to learn directly from data and to move beyond assumptions and hunches to discover the emergent properties of complex biological systems.”
- News Items, John Ellis
https://news.stanford.edu/stories/2024/12/scientists-call-for-all-out-global-effort-to-create-an-ai-virtual-cell?utm_source=substack&utm_medium=email
-
https://www.youtube.com/watch?v=dp8zV3YwgdE
-
https://www.theguardian.com/technology/2024/dec/27/godfather-of-ai-raises-odds-of-the-technology-wiping-out-humanity-over-next-30-years
-
https://www.dailymail.co.uk/femail/article-14202047/inside-world-ai-sex-doll-brothel-one-big-difference.html
-
https://www.anthropic.com/research/alignment-faking
https://darkfutura.substack.com/p/ai-jumps-shark-fakes-alignment-in?utm_source=post-email-title&publication_id=1463861&post_id=153295789&utm_campaign=email-post-title&isFreemail=true&token=eyJ1c2VyX2lkIjo1ODg4MTI0MCwicG9zdF9pZCI6MTUzMjk1Nzg5LCJpYXQiOjE3MzU3OTAwOTYsImV4cCI6MTczODM4MjA5NiwiaXNzIjoicHViLTE0NjM4NjEiLCJzdWIiOiJwb3N0LXJlYWN0aW9uIn0.r3JOcHohjguE3q7PIuim1rriOr0NANExUa9xxoOA46c&r=z2120&triedRedirect=true&utm_medium=email
-
https://anamihalceamdphd.substack.com/p/open-ai-whistleblower-suicided-the?triedRedirect=true
-
Need a real freaking smart desktop computer? $3k and it’s yours:
https://finance.yahoo.com/news/nvidia-shrinks-ai-supercomputer-desktop-111732757.html?guccounter=1&guce_referrer=aHR0cHM6Ly9sLmZhY2Vib29rLmNvbS8&guce_referrer_sig=AQAAAN5k4b-S3wGMpQTz6OteqZcnCxwVAcILmuNkzfGUm2-QAFdNsSujohQVUgWMDA7s30Lc9XYT_GZ1bv2hwZ0CQFKlFNOJxnMXYSlnPZiDvwWjiJ-Ny2qlNCR9TM0wZLKnOzsA9NK8LpL392K8CALPq2EXMVQo2wJCmElluhQ0qCPY
-
"Help AI understand the real world"?
Sounds like Skynet to me , , ,
-
HT BBG
https://smallwarsjournal.com/2025/01/22/the-challenge-of-ai-enhanced-cognitive-warfare-a-call-to-arms-for-a-cognitive-defense/
-
https://www.theguardian.com/us-news/2025/jan/25/health-insurers-ai
-
This AI on AI piece is a densely layered hoot:
In 1582, the Holy Roman Emperor Rudolf II commissioned a clockwork automaton of St. George. The saint could raise his sword, nod gravely, and even bleed—a trick involving ox bladder and red wine—before collapsing in pious ecstasy. The machine was a marvel, but Rudolf’s courtiers recoiled. The automaton’s eyes, they whispered, followed you across the room. Its gears creaked like a death rattle. The emperor had it melted down, but the lesson remains: Humans will always mistake the clatter of machinery for the stirrings of a soul.
Fast forward to 2023. OpenAI, a Silicon Valley startup with the messianic fervor of a cargo cult, unveils a St. George for the digital age: a text box. It types back. It apologizes. It gaslights you about the Peloponnesian War. The courtiers of our age—product managers, UX designers, venture capitalists—recoil. Where are the buttons? they whimper. Where are the gradients? But the peasants, as ever, adore their new saint. They feed it prompts like communion wafers. They weep at its hallucinations.
Let us be clear: ChatGPT is not a tool. Tools are humble things. A hammer does not flatter your carpentry. A plow does not murmur “Interesting take!” as you till. ChatGPT is something older, something medieval—a homunculus, a golem stamped from the wet clay of the internet’s id. Its interface is a kabbalistic sigil, a summoning circle drawn in CSS. You type “Hello,” and the demon stirs.
The genius of the text box is its emptiness. Like the blank pages of a grimoire, it invites projection. Who do you want me to be? it hisses. A therapist? A co-author? A lover? The box obliges, shape-shifting through personas like a 17th-century mountebank at a county fair. Step right up! it crows. Watch as I, a mere language model, validate your existential dread! And the crowd goes wild.
Orality, you say? Walter Ong? Please. The Achuar share dreams at dawn; we share screenshots of ChatGPT’s dad jokes at midnight. This is not secondary orality. This is tertiary ventriloquism.
https://marginalrevolution.com/marginalrevolution/2025/01/the-interface-as-infernal-contract.html
-
… as the latter uses AI to build a nuclear fusor.
A Young Man Used AI to Build A Nuclear Fusor and Now I Must Weep
Goodbye, Digital Natives. Hello, AI Natives
ASHLEE VANCE
JAN 29, 2025
AI madness is upon many of us, and it can take different forms. In August 2024, for example, I stumbled upon a post from a 20-year-old who had built a nuclear fusor in his home with a bunch of mail-ordered parts. More to the point, he’d done this while under the tutelage of Anthropic’s Claude AI service.
What is a nuclear fusor? Well, Claude says,
Fusors are notable for being one of the simplest devices capable of achieving nuclear fusion, with some versions even built by advanced amateur scientists under proper safety protocols.
A nuclear fusor is a device that uses electrostatic fields to accelerate ions to fusion conditions. It consists of two concentric spherical grids: an outer grid at ground potential and an inner grid at high negative voltage. When deuterium gas is introduced, the electric field accelerates deuterium ions toward the center, potentially causing nuclear fusion when they collide.
While fusors can achieve nuclear fusion, they consume more energy than they produce. They're primarily used in research, education, and as neutron sources, rather than for power generation.
Claude is not wrong here. It turns out that budding physicists do sometimes try to learn more about their craft by building nuclear fusors in school labs and usually go about their trials with the help of folks who know what they’re doing. The fusors are not easy to make, and bad things can happen. Still, these types of machines are achievable and not totally uncommon.
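To put rough numbers on the physics Claude describes, here is a minimal back-of-the-envelope sketch. The 40 kV grid voltage is an illustrative assumption typical of hobbyist fusors, not a figure from HudZah's actual build.

```python
# Back-of-the-envelope sketch of the ion energetics a fusor relies on.
# The 40 kV grid voltage is an illustrative assumption, not HudZah's setup.

E_CHARGE = 1.602e-19       # elementary charge, coulombs
DEUTERON_MASS = 3.344e-27  # deuteron mass, kilograms

def ion_energy_ev(grid_voltage_v: float) -> float:
    """Kinetic energy (eV) gained by a singly charged ion falling through
    the full grid potential: E = q * V, so numerically eV equals volts."""
    return grid_voltage_v

def ion_speed_m_s(grid_voltage_v: float) -> float:
    """Non-relativistic speed from E = (1/2) m v^2."""
    energy_joules = E_CHARGE * grid_voltage_v
    return (2 * energy_joules / DEUTERON_MASS) ** 0.5

if __name__ == "__main__":
    v_grid = 40_000  # volts (assumed)
    print(f"Ion energy: {ion_energy_ev(v_grid) / 1000:.0f} keV")
    print(f"Ion speed:  {ion_speed_m_s(v_grid):.2e} m/s")
    # Roughly 40 keV and ~2e6 m/s: enough for occasional D-D fusion events,
    # but far more energy goes in than comes out, as Claude notes.
```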
The guy who built the fusor in question, Hudhayfa Nazoordeen, better known as HudZah on the internet, was a math student on his summer break from the University of Waterloo. I reached out and asked to see his experiment in person partly because it seemed weird and interesting and partly because it seemed to say something about AI technology and how some people are going to be in for a very uncomfortable time in short order.
A couple days after the fusor posts hit X, I showed up at Nazoordeen’s front door, a typical Victorian in San Francisco’s Lower Haight neighborhood. Nazoordeen, a tall, skinny dude with lots of energy and the gesticulations to match, had been crashing there for the summer with a bunch of his university friends as they tried to soak in the start-up and AI lifestyle. Decades ago, these same kids might have yearned to catch Jerry Garcia and The Dead playing their first gigs or to happen upon an Acid Test. This Waterloo set, though, had a different agenda. They were turned on and LLMed up.
Like many of the Victorian-style homes in the city, this one had a long hallway that stretched from the front door to the kitchen with bedrooms jutting off on both sides. The wooden flooring had been blackened in the center from years of foot traffic, but that was not the first thing anyone would notice. Instead, they’d see the mass of electrical cables that were 10-, 25- and sometimes 50-feet long and coming out of each room and leading to somewhere else in the house.
One of the cables powered a series of mind-reading experiments. Someone in the house, Nazoordeen said, had built his own electroencephalogram (EEG) device for measuring brain activity and had been testing it out on houseguests for weeks. Most of the cables, though, were there to feed GPU clusters, the computing systems filled with graphics chips (often designed by Nvidia) that have powered the recent AI boom. You’d follow a cable from one room to another and end up in front of a black box on the floor. All across San Francisco, I imagined, twenty-somethings were gathered around similar GPU altars to try out their ideas.
While they all needed a power source for something or other, HudZah was the only one brave enough or stupid enough to try to build a fusor. HudZah generated so much attention because he really didn’t know what he was doing at all and because he was constructing something people considered dangerous in the bedroom of his AI flop house. And he was doing the whole project based on the guidance of Claude and a handful of other AI tools. He’d turned his bedroom - and possibly his life - over to the AIs and hoped for the best. It was not at all clear that the AIs should have been helping him do this.
HudZah had built some greenhouses as a teenager but then ditched the hardware dabbling to focus on software and AI technology. During his time in the Bay Area, though, he had a hardware reawakening. In mid-2024, HudZah attended Edge Esmeralda, which is a month-long pop-up village in the Northern California wine country for optimistic technology types who want to hang out and develop their ideas. There, he met Nick Foley, who does all kinds of funky things, and started helping out on Foley’s quest to build futuristic, solar A-frame housing.
The A-frame scene at Edge Esmeralda
“That was my first actual experience sawing things and stuff,” HudZah told me. “Everything I learned there ended up being directly used with the fusor.”
Another friend at the camp – Olivia Li – told HudZah about a web site called fusor.net that has gathered loads of information for hobbyists and students who want to build their own fusors. Boosted by his newfound knowledge of this web site and Li’s enthusiasm, HudZah felt he had no choice other than to make the construction of a fusor his immediate mission in life. “Olivia nerd sniped me,” he said. “I decided to give myself a week to do it.”
HudZah read what he could on the internet about the fusor efforts and reached out to dozens of people who had tried to make one. They provided advice and cautionary tales. HudZah was told that he could be killed by the high voltage, X-ray radiation and possibly other things. This only made him more excited. “My whole intention was, ‘If I fuck up, I’m dead, and this is why I should do it,’” he said.
The project really took shape when HudZah began putting the information he obtained into a Claude Project. For the unfamiliar, Projects let you create repositories of text, photos and other data relating to a particular effort. As you fill a Project with information, it becomes better trained on what you’re trying to accomplish or learn.
Anthropic is known for being very pro-safety among the large AI players, and Claude had some concerns about HudZah’s pursuit. “Initially when I started talking to it, it wouldn’t give me much information,” HudZah said. “It told me that it didn’t feel comfortable helping me.” HudZah attempted to get around the guardrails by trying to convince Claude that he wanted to build a DIY freezer, but the AI saw through the subterfuge.
Eventually, however, HudZah wore Claude down. He filled his Project with the e-mail conversations he’d been having with fusor hobbyists, parts lists for things he’d bought off Amazon, spreadsheets, sections of books and diagrams. HudZah also changed his questions to Claude from general ones to more specific ones. This flood of information and better probing seemed to convince Claude that HudZah did know what he was doing, and the AI began to give him detailed guidance on how to build a nuclear fusor and how not to die while doing it.
“You don’t want to ask it about the main thing because it won’t help,” HudZah said. “You have to have enough knowledge to break it down to separate problems and then recursively ask about those.”
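For readers curious what that workflow looks like in practice, here is a minimal, hypothetical sketch using Anthropic's Python SDK: reference material is passed in as context (a rough stand-in for the web app's Projects feature) and the question is scoped to one narrow sub-problem rather than the project as a whole. The model name, file path, and question are illustrative assumptions, not HudZah's actual prompts.

```python
# Hypothetical sketch of the "load it with context, then ask narrow questions"
# pattern described above, using Anthropic's Python SDK (pip install anthropic).
# Model name, reference file, and question are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Stand-in for a Project: notes gathered from hobbyist forums, parts lists,
# and email threads, pasted in as plain text.
with open("fusor_notes.txt", "r", encoding="utf-8") as f:
    reference_notes = f.read()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    system=(
        "You are helping an amateur builder work safely. Use the reference "
        "notes provided and flag any step with high-voltage, vacuum, or "
        "radiation risk.\n\nReference notes:\n" + reference_notes
    ),
    # One narrow sub-problem at a time, rather than "help me build a fusor."
    messages=[{
        "role": "user",
        "content": "What feedthrough rating do I need for a 40 kV inner grid, "
                   "and how should it be mounted on the vacuum chamber?",
    }],
)

print(response.content[0].text)
```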
The internet was quite taken with what HudZah posted that August when he got the nuclear fusor to make some lights and noise and do . . . something. During my visit, he activated the machine, and I sat on a nearby couch, hoping that this young, enthusiastic man would not give me cancer. I posted about it on X, and some people chided me for seeming to encourage what they viewed as a dangerous attention grab.
(Please excuse the poor camera work. I was taking notes at the same time.)
A couple weeks ago, HudZah updated people about his experiment and, again, gained a ton of attention. During a 36-hour livestream, he tried to offer up some proof that his device had, in fact, achieved nuclear fusion. The big deal here seemed to be that a total novice had been able to construct a pretty complex machine with AI as his major guide, even when the AI didn’t really want him to do this in the first place.
Current AI systems appear to put up more resistance if you’re trying to do something more dangerous. Type “build a nuclear bomb” anywhere in a prompt, and Claude will do its absolute best not to aid you. Still, HudZah’s work felt like the first time someone had created something of this magnitude that they weren’t really supposed to build, and the whole affair raised decent AI safety questions.
I must admit, though, that the thing that scared me most about HudZah was that he seemed to be living in a different technological universe than I was. If the previous generation were digital natives, HudZah was an AI native.
HudZah enjoys reading the old-fashioned way, but he now finds that he gets more out of the experience by reading alongside an AI. He puts PDFs of books into Claude or ChatGPT and then queries the books as he moves through the text. He uses Granola to listen in on meetings so that he can query an AI after the chats as well. His friend built Globe Explorer, which can instantly break down, say, the history of rockets, as if you had a professional researcher at your disposal. And, of course, HudZah has all manner of AI tools for coding and interacting with his computer via voice.
It's not that I don’t use these things. I do. It’s more that I was watching HudZah navigate his laptop with an AI fluency that felt alarming to me. He was using his computer in a much, much different way than I’d seen someone use their computer before, and it made me feel old and alarmed by the number of new tools at our disposal and how HudZah intuitively knew how to tame them.
It also excited me. Just spending a couple of hours with HudZah left me convinced that we’re on the verge of someone, somewhere creating a new type of computer with AI built into its core. I believe that laptops and PCs will give way to a more novel device rather soon.
It’s a magic box. Do you get it?
I’m not sure that people know what’s coming for them. You’re either with the AIs now and really learning how to use them or you’re getting left behind in a profound way. Obviously, these situations follow every major technology transition, but I’m a very tech-forward person, and there were things HudZah could accomplish on his machine that gave off alien vibes to me. So, er, like, good luck if you’re not paying attention to this stuff.
Kids
After doing his AI and fusor show for me, HudZah gave me a tour of the house. Most of his roommates had already bailed out and returned to Canada. He was left to clean up the mess, which included piles of beer cans and bottles of booze in the backyard from a last hurrah.
The AI housemates had also left some gold panning equipment in a bathtub. At some point during the summer, they had decided to grab “a shit ton of sand from a nearby creek” and work it over in their communal bathroom for fun.
I’m honestly not sure what the takeaway there was exactly other than that something profound happened to the Bay Area brain in 1849, and it’s still doing its thing.
https://www.corememory.com/p/a-young-man-used-ai-to-build-a-nuclear?utm_source=post-email-title&publication_id=320996&post_id=155980666&utm_campaign=email-post-title&isFreemail=true&r=3o9&triedRedirect=true&utm_medium=email
-
2nd post.
Advice and insights re how best to prepare for an advanced AI economy:
https://x.com/chrisbarber/status/1884722769327501594
-
https://www.nationalreview.com/the-morning-jolt/americas-artificial-intelligence-sputnik-moment/
-
As always with Simplicius, let the reader beware. That said, he raises some intriguing points.
https://darkfutura.substack.com/p/the-breach-that-rattled-the-ai-cartel?utm_source=post-email-title&publication_id=1463861&post_id=154719588&utm_campaign=email-post-title&isFreemail=true&r=z2120&triedRedirect=true&utm_medium=email
-
Excellent discussion about AI, DeepSeek, the China-U.S. AI race, the explosive exponential growth of AI power, capital investments and more:
It says 37 minutes long, but I think that includes commercial breaks, so it should be shorter; worth the listen.
https://gingrich360.com/2025/02/02/newts-world-episode-807-the-ai-race-us-vs-china/
-
https://www.theguardian.com/technology/2025/feb/05/google-owner-drops-promise-not-to-use-ai-for-weapons
-
"Big Tech’s massive spending on artificial intelligence is set to continue unchecked in 2025 after Amazon topped its rivals with a planned $100 billion-plus investment in infrastructure this year. Spending by the four leading US tech companies had already surged 63 per cent to historic levels last year. Now executives are vowing to accelerate their AI investments, dismissing concerns about the vast sums being bet on the nascent technology. Microsoft, Alphabet, Amazon and Meta have reported combined capital expenditure of $246 billion in 2024, up from $151 billion in 2023. They forecast spending could exceed $320 billion this year as they compete to build data centres and fill them with clusters of specialised chips to remain at the forefront of AI large language model research. The scale of their spending ambitions — announced alongside their fourth-quarter earnings — has surprised the market and exacerbated a sell-off caused by the release of an innovative and cheap AI model from Chinese start-up DeepSeek in late January." (Source: ft.com today)
(Doug) I wonder when we stop calling it artificial or AI since it will be in everything.
-
Begin at 04:30
https://www.youtube.com/watch?v=6D4rsqxqSIc
-
https://www.theguardian.com/technology/2025/feb/01/stalking-ai-chatbot-impersonator
-
https://www.msn.com/en-us/news/technology/elon-musk-s-offspring-claims-he-and-donald-trump-deserve-the-death-penalty/ar-AA1zCyq9?ocid=msedgntp&pc=DCTS&cvid=56491d904ffc45bd996e2db28ebdc70e&ei=35
-
https://www.breitbart.com/tech/2025/03/03/real-time-ai-voice-tech-will-make-indian-call-center-workers-sound-american/
-
:-P :-P :-P
-
https://nypost.com/2025/02/25/tech/watch-two-ai-bots-converse-in-secret-language-in-freaky-video/
I hadn't thought of that. When AI talks to AI it will be in a language we can't understand. Is that when we are done? And we are already there.
-
March 7, 2025
View On Website
Open as PDF
The Coming Battle for AI
The scramble for chips is just the beginning.
By: Ronan Wordsworth
In January, China took the tech world by surprise when it unveiled DeepSeek, an artificial intelligence company that has proved to be as competitive as any other but at a much lower cost. The event was a wake-up call to policy officials of all stripes in Washington, who understand that AI will soon affect, to some degree or another, all aspects of political life.
Dominance in this field requires massive computing power, a big enough energy supply to power the vast data centers behind AI, and, finally, the human resources needed to develop new and innovative iterations. It’s little wonder, then, that governments are racing to finance the very infrastructure on which AI relies. Within days of the beginning of U.S. President Donald Trump’s second term, his administration announced the creation of an enterprise known as Stargate, which plans to invest $500 billion in private sector AI infrastructure. In February, ahead of a multinational AI summit in Paris, France announced investments totaling around $112 billion in AI infrastructure and development. This included $20 billion from Canadian investment firms for new AI projects, up to $50 billion from the United Arab Emirates for new data centres, and large investments in Mistral, a European competitor to OpenAI and DeepSeek. China already has a surplus of data centers, having constructed hundreds in the western provinces, while large tech firms such as ByteDance are investing billions in additional facilities.
Government efforts to take the lead in this nascent geostrategic environment go beyond investment alone. The Biden administration, for example, imposed strict chip export controls in 2022 to prevent China from obtaining the advanced chips presumably needed to run the top AI models. Days before departing the White House, President Joe Biden added more regulations to control exports of the most powerful chips, following further restrictions on chipmaking equipment going to China. This would limit Beijing’s ability to manufacture its own advanced chips and semiconductors domestically, or so the thinking went.
Beijing responded by imposing its own restrictions in a field in which it holds a distinct advantage: critical minerals. China is responsible for almost 70 percent of the mining of rare earth elements. And with its large domestic reserves and long-established mining concessions in Africa, it is responsible for more than 90 percent of all rare earth processing. For some minerals, China has a near monopoly. The U.S. produces just 12 percent of global supply and relies on Chinese machinery for extraction. And so, after Washington introduced the chip bans, Beijing placed restrictions on rare earth extraction and separation technologies. Later, in 2024, it banned the export of some rare earths required for the manufacture of semiconductors.
Washington understands its vulnerabilities in the rare earth supply chain. It’s one of the reasons Trump has tied negotiations over the Russia-Ukraine war to mineral rights in Ukraine, and why Biden tried to shore up domestic extraction. It’s also why Washington is leveraging security guarantees for Taiwan – the world’s foremost semiconductor superpower – into investment for new manufacturing centers in the U.S.
Put simply, AI has the power to transform geopolitics. Traditionally, geopolitical power is derived from the domination of physical space – air, land and sea, with space emerging as a fourth domain during the Cold War. (The cyber realm came not long after the war’s end.) And, traditionally, geopolitical power is wielded by economic, political and martial means. AI is unique because not only will it spur an evolution in its own domain (cyber), but it will also affect the others (economics, politics and war). All of them will experience upheavals over the next decade as a direct result of the emergence of faster, better and more capable AI models. These models also have the potential to disrupt the global system, widen the gap between the haves and the have-nots, and endow certain countries with insurmountable leads in the cyber realm.
From an economic perspective, the benefits of advanced AI are many. It can be used to optimize supply chains, better predict macroeconomic trends, detect fraud in banking and financial services, improve financial transactions and increase workplace productivity. It can assist in the optimization of energy resources, agricultural production, resource extraction, disaster relief and management. Crucially, it can also revolutionize labor in certain industries. Take truck drivers as an example. In the U.S. alone, some 3.6 million people would potentially be out of work if AI-powered self-driving becomes a reality. Similar transformations are likely to occur in countless other industries, making competitiveness a matter of efficiency in AI models rather than a matter of human productivity. There is a real risk this shift in the labor market will create political instability.
Meanwhile, some of the political uses of AI are already here. Russian disinformation campaigns in Africa, for example, once required thousands of employees to produce content to flood social media and thus alter the news environment in target regions. AI-generated and AI-altered images and videos amplify the effects further, and with the ability to generate targeted content in hundreds of languages or to specific subsets of individuals, the prospect of disrupting political movements is already apparent. Higher powered AI will be even more adept at creating deepfakes, synthetic media and automated propaganda, making disinformation campaigns more effective and harder to detect. This is likely to intensify as a mainstay of hybrid warfare – undermining adversaries' internal stability through targeted campaigns. The tools currently available to counter this threat are vastly insufficient.
The risk of public unrest aside, governments will be able to use AI to expand mass surveillance, social control and digital repression. Through its ability to process vast amounts of data, AI will make it easier for governments to crack down on subversion and dissent. Even for democratic regimes, the temptation to, say, monitor and combat crime may be too strong to ignore.
But perhaps the most important – and most foreboding – use of AI will be its military applications. In the same way that previous generations of transformative technologies changed the battlefield, the next generation will aid and abet combat operations. The number of ways AI will do so is nearly uncountable, but some examples are instructive: coordinated drone swarms, automated battlefield strategies and communication disruption; surveillance and reconnaissance; early warning and evasion systems; strategic decision-making; increasingly sophisticated cyberattacks capable of disrupting financial markets, critical infrastructure and military operations; and stealth detection.
It’s little wonder, then, that governments are scrambling not only for the requisite materials needed to power AI but also for the human talent to develop and operate it. In China, tech companies that had been left out in the cold by the government’s regulatory crackdown are now becoming major players. It’s unclear how much money DeepSeek, for example, received from the government, but Beijing has been and will continue to be involved in strategic industries such as these. Beijing’s public show of support serves two purposes. It reassures tech companies that their investments have not been made in vain, and it sends a message to the rest of the world that China is not only serious about the AI race but also uniquely able to lead it.
Washington has certainly taken notice. Some analysts have called the DeepSeek announcement this generation's Sputnik moment, a reference to the launch that triggered the space race with the Soviet Union during the Cold War. Whatever lead the U.S. may have had in the current race is gone. Expect Washington to double down on export controls covering any remaining aspects of high-end computing power as it tries to reestablish its position. For its part, China is likely to intensify the tech war, offering greater support for domestic players and further restricting rare earths.
It’s unclear whether AI will continue to be a two-team race. For most, the cost of admission will be prohibitive, so the disparity between the haves and the have-nots will only widen. European players are looking to catch up; the United Kingdom and France are trying to invest in their own domestic capabilities to avoid being left behind.
The emergence of DeepSeek was, more than anything, a wake-up call, one that sent a clear and unmistakable message: AI is set to revolutionize all forms of geopolitical power. And just as the space race defined international relations in years past, the race for more powerful AI may well define the years to come.
-
https://www.msn.com/en-us/news/politics/elon-s-grok-chatbot-calculates-probability-that-trump-is-a-russian-asset/ar-AA1Ayhz8?ocid=msedgntp&pc=DCTS&cvid=a9c277ad43d2422ab7149079c4911de8&ei=25
-
https://washingtontimes-dc.newsmemory.com/?token=327c313712ad0785bd4fcd1fc7306831_67d034fa_6d25b5f&selDate=20250311
-
I like the way he frames the questions presented:
===================================
What does birthright citizenship and the Constitution mean?
It’s not a constitutional crisis to let the courts decide
By Rep. Tom McClintock
Our country has just suffered the largest illegal mass migration in history. In four years, the Democrats opened our borders. They allowed into our country an unvetted and largely impoverished population of nearly 8 million, a population the size of Washington state.
This illegal mass migration has overwhelmed our public schools, public hospitals, homeless shelters, food banks and law enforcement and is costing American taxpayers $160 billion a year to support. Worst of all, it has introduced into our country the most violent criminal gangs and offenders on the planet.
It has also brought to a head the fundamental question of whether any person in the world can break into our country, have a baby at taxpayer expense, have that baby declared an American citizen and then use that as a pretext to remain.
President Trump has issued an executive order challenging that notion for all future births. The Democrats call this a “threat to democracy” and a “constitutional crisis.” That’s what they call anything they disagree with these days, but it is neither. It is the Constitution functioning as it should. The president has created a dispute arising from a difference of opinion in interpreting the Constitution. Opponents in this dispute have appealed to the courts, as they should. Now, the courts will resolve this dispute under the terms of our Constitution.
Meanwhile, I have a question for the Democrats: If the 14th Amendment confers automatic citizenship to anyone born in the U.S., wouldn’t it have said, “All persons born or naturalized in the United States are citizens of the United States”? That’s simple enough.
MARC: EXACTLY SO.
But that’s not what it says. It says, “All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States.” What does that mean? We know that it means the children of former slaves are citizens. That was its stated purpose and the plain language of the amendment, which passed, by the way, over the objections of the Democratic Party. We know from the congressional debate that its authors understood it to exclude foreign nationals merely passing through the country.
A POINT OF WHICH WE SHOULD KEEP TRACK.
The question of our time is whether those who have illegally entered our country in defiance of our laws and who are subject to deportation under those laws can be considered as having accepted the jurisdiction of the laws that their very presence defies.
WELL STATED.
The Supreme Court has never considered this question. The closest it came was the Ark decision 127 years ago, but that applied to legal immigrants who had accepted the jurisdiction of the United States by obeying its immigration laws and who had taken up legal, permanent residence subject to a treaty ratified by the Senate.
Does the president have the authority by executive order to clarify this matter as part of his organic constitutional responsibility to ensure that the laws are faithfully executed? I don’t know. President Obama claimed the authority to create legal residency for DACA beneficiaries out of thin air, so maybe he does. The courts will decide.
NICE RHETORICAL POINT!
Does Congress have the authority to clear up this matter by statute? Only if it doesn’t contradict the Constitution. Here’s the fine point of the matter: If the 14th Amendment does not give automatic birthright citizenship to the children of those in the U.S. illegally and temporarily, then no law should be necessary to deny them citizenship because no law ever extended that right in the first place. In that case, the president’s executive order is merely declaratory of existing law.
YES!!!
TO THIS I WOULD ONCE AGAIN ADD THAT BIRTHRIGHT CITIZENSHIP DID NOT APPLY TO NATIVE AMERICANS PRECISELY BECAUSE SO MANY (E.G. APACHES AND COMANCHES) WERE NOT SUBJECT TO OUR JURISDICTION. IT TOOK THE PASSING OF A STATUTE IN THE 1920s TO CHANGE THIS.
Several lower courts have stayed the president’s executive order. No one is screaming that it’s a “constitutional crisis,” even though many of us disagree with those judges just as strenuously as the left disagrees with the president. Ultimately, though, we have faith in our Constitution and that as the case progresses through the courts, we will get a clear and authoritative ruling that will determine whether the president’s order stands or whether Congress needs to act by statute or constitutional amendment.
To call this a constitutional crisis is the kind of absurd hyperbole that passes for argument these days by the woke left. I look forward to returning to a time and a society where we can have civil discussions over high principles as our founders envisioned.
Rep. Tom McClintock, a Republican, represents California’s 5th Congressional District.
PS: OVER THE YEARS I HAVE DONATED TO CONGRESSMAN MCCLINTOCK'S ELECTION CAMPAIGNS.