Author Topic: Intelligence and Psychology, Artificial Intelligence  (Read 70943 times)

Body-by-Guinness

  • Power User
  • ***
  • Posts: 2790
    • View Profile
The Genetic Interplay of Nature and Nurture
« Reply #200 on: January 30, 2024, 05:33:40 PM »
A take on the nature/nurture debate I have not seen before: it postulates that parents invest more nurturing in children who display an education-seeking nature. In other words, nature influences nurturing commitment, while nurturing resources shape a child’s nature. Bottom line: nurture has an impact on genetic nature, while genetic nature can incentivize (or not) nurturing investments. The takeaway is that the two are not opposite ends of a spectrum to be debated, but intertwined variables, each impacting the other.

Full disclosure, I gave the lay sections a quick read, and utterly scrolled past the formula-laden sections, but nonetheless find myself intrigued by this new (to me at least) synthesis of the nature/nurture question, particularly in view of the dysfunctional wolves I was raised by.

Conclusion posted here:

To better understand the interplay between genetics and family resources for skill formation and its relevance for policy, we incorporated genetic endowments measured by an EA PGS into a dynamic model of skill formation. We modelled and estimated the joint evolution of skills and parental investments throughout early childhood (ages 0 to 7 years). We observed both child and parental genetic endowments, allowing us to estimate the independent effect of the child’s genes on skill formation and to estimate the importance of parental genes for child development. Furthermore, by incorporating (child- and parent-) genetics into a formal model, we were able to evaluate the mechanisms through which genes influence skill formation.

Using the model, we document the importance of both parental and child genes for child development. We show that the effect of genes increases over the child’s early life course and that a large fraction of these effects operate via parental investments. Genes directly influence the accumulation of skills; conditional on their current stock of skills and parental investments, genetics make some children better able to retain and acquire new skills (the direct effect). In addition, we show that genes indirectly influence investments as parents reinforce genetic differences by investing more in children with higher skills (the nurture of nature effect). We also find that parental genes matter, as parents with different genetic makeups invest differently in their children (the nature of nurture effect). The impact of genes on parental investments is especially significant as it implies an interplay between genetic and environmental effects. These findings illustrate that nature and nurture jointly influence children’s skill development, a finding that highlights the importance of integrating biological and social perspectives into a single framework.

We highlight the critical implications of our findings using two simulation counterfactuals. In one counterfactual, we show that the association between genetics and skills is smaller in a world where parental investments are equalized across families. This finding shows that the existence of genetic effects is not at odds with the value of social policies in reducing inequality. The presence of genetic effects makes these policies even more relevant since genetic inequality increases inequality in parental investments. In another counterfactual, we demonstrate how it is possible to negate completely all genetic influences on skills with a change in how parental (or public) investments are allocated across children. This shows that skill disparities due to genetic differences may be mitigated via social behavior and social policy. In particular, whether parents (and schools) compensate for or reinforce initial disparities can significantly impact the relative importance of genetics in explaining inequalities in skill, and might explain why estimates of the importance of genes differ significantly across countries (Branigan, McCallum, and Freese, 2013).

A limitation of our work is that genetic endowments are measured using polygenic scores. It is possible for genes unrelated to educational attainment also to influence children’s skill formation. For example, genetic variation related to mental health and altruism may potentially be unrelated to education but might influence how parents interact with their children. If this is true, we are missing a critical genetic component by using a PGS for educational attainment. Another limitation of using polygenic scores is measurement error. Since polygenic scores are measured with error, our estimates are lower bounds of the true genetic effects. An interesting extension of our work would be to use different methods, such as genome-based restricted maximum likelihood (GREML), to sidestep the measurement problem and document whether different genetic variants are related to the various mechanisms we outline in Section 2.

Lastly, it is important to recognise that we only include individuals of European ancestry in our analysis. This opens the question whether our findings would extend to other ancestry groups. Unfortunately, this is not something we can test. This is a major issue in the literature since the majority of polygenic scores are constructed from GWAS’ performed in Europeans, and their transferability to other populations is dependent on many factors (see Martin et al. (2017) for a discussion of the transferability issues of GWAS results, and Mostafavi et al. (2020) for an empirical comparison of PGS’ across ethnic groups). This also illustrates a problem of inequity in research, where the only individuals being studied are those of European ancestry. This opens the possibility that other ancestry groups will not benefit from the advances in genetics research (see the discussion in Martin et al., 2019). While the key insights from our research apply to all ancestry groups, we cannot test for any differences in the role of genetics across groups until we have solved the transferability issue. We hope future work will address these issues and lead to a more inclusive research agenda.

https://docs.iza.org/dp13780.pdf
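
For those who, like me, skipped the formula-laden sections, the core feedback loop is easy to sketch in code. Below is a toy simulation of my own devising (made-up coefficients, not the authors' estimated model): skills grow with parental investment and the child's endowment, while parents invest more in higher-skill children; toggling the equalized-investment counterfactual shrinks the gene-skill association, just as the paper reports.

```python
# Toy sketch of the paper's nature/nurture feedback loop.
# All coefficients are invented for illustration; the paper
# estimates a far richer dynamic model from real data.
import random

def simulate(g_child, g_parent, equalize_investment=False, years=8):
    skill = 0.0
    for _ in range(years):
        if equalize_investment:
            invest = 1.0  # counterfactual: identical investment for every child
        else:
            # "nurture of nature": parents invest more in higher-skill kids;
            # "nature of nurture": parental genes shift investment too
            invest = 1.0 + 0.3 * skill + 0.2 * g_parent
        # direct effect: child genes aid retaining and acquiring skill
        skill = 0.6 * skill + 0.5 * invest + 0.3 * g_child + random.gauss(0, 0.1)
    return skill

random.seed(0)
kids = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2000)]
for flag in (False, True):
    out = [simulate(gc, gp, flag) for gc, gp in kids]
    m_g = sum(gc for gc, _ in kids) / len(kids)
    m_s = sum(out) / len(out)
    # crude gene-skill association: covariance of child PGS with final skill
    cov = sum((gc - m_g) * (s - m_s) for (gc, _), s in zip(kids, out)) / len(out)
    print("equalized" if flag else "status quo", round(cov, 3))
```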

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71210
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #201 on: February 01, 2024, 04:15:41 AM »
See Matt Ridley's "Nature via Nurture".  It has had quite the influence on me.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71210
    • View Profile
Gemini's racial marxism
« Reply #204 on: February 22, 2024, 05:58:36 AM »


Body-by-Guinness

  • Power User
  • ***
  • Posts: 2790
    • View Profile
AI Offers Better Customer Service and Outcomes?
« Reply #206 on: February 28, 2024, 09:40:21 AM »

Body-by-Guinness

  • Power User
  • ***
  • Posts: 2790
    • View Profile
AI: The Future of Censorship
« Reply #207 on: February 28, 2024, 08:53:25 PM »
AI already has been shown to embrace the woke sensibilities of its programmers; what happens when it’s applied to Lenny Bruce, one of the examples explored here:

https://time.com/6835213/the-future-of-censorship-is-ai-generated/

Body-by-Guinness

  • Power User
  • ***
  • Posts: 2790
    • View Profile
Can AIs Libel?
« Reply #208 on: February 29, 2024, 07:44:56 PM »
AI makes up a series of supposed Matt Taibbi “inaccuracies” for pieces he never wrote, published in periodicals he’s never submitted to:

https://www.racket.news/p/i-wrote-what-googles-ai-powered-libel

Body-by-Guinness

  • Power User
  • ***
  • Posts: 2790
    • View Profile
Can AI Infringe on Copyright?
« Reply #209 on: February 29, 2024, 08:19:12 PM »

Body-by-Guinness

  • Power User
  • ***
  • Posts: 2790
    • View Profile
Yet Another AI Reveals the Biases of its Programmers
« Reply #210 on: March 02, 2024, 12:29:40 PM »
ChatGPT demonstrates it’s the language equivalent of Google AI’s images.

https://link.springer.com/content/pdf/10.1007/s11127-023-01097-2.pdf

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71210
    • View Profile
Killer AI Robots
« Reply #213 on: March 08, 2024, 04:11:40 AM »

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71210
    • View Profile
A FB post
« Reply #214 on: March 11, 2024, 02:23:50 PM »

AI will tailor itself to classes of users:

1. Minimal thinking, high entertainment, low initiative. Probably the most addictive algorithms, and making up the majority. AI does the thinking for a consumer-heavy model.

2. Enhanced search returns, variations of basic question formats offering a larger array of possibilities. AI as a basic partnership model.

3. Information gap anticipation, analytical depth (analysis, synthesis, deductive, inductive), identifying hypotheses, alerting of new, relevant information or data summaries based on specific and diverse ‘signal strength’ of user trends. AI as a research catalyst.

Population statistics show the majority are sensors, feelers, and judgers (i.e., Myers-Briggs); therefore the cost-effective AI market will focus on (1) above. Those who are more investigative, innovative, or productivity-driven (vs. consumption-driven) will be a more ‘expensive’ minority, (3) above, requiring higher costs to participate and employ.

AI will simply ‘become’ us across different tiers

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71210
    • View Profile
EU law tightens up AI uses
« Reply #215 on: March 13, 2024, 01:06:11 PM »

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71210
    • View Profile
Ted Cruz & Phil Gramm: First, do no harm
« Reply #216 on: March 26, 2024, 05:44:52 PM »
Biden Wants to Put AI on a Leash
Bill Clinton’s regulators, by contrast, produced prosperity by encouraging freedom on the internet.
By Ted Cruz and Phil Gramm
March 25, 2024 4:22 pm ET

The arrival of a new productive technology doesn’t guarantee prosperity. Prosperity requires a system, governed by the rule of law, in which economic actors can freely implement a productive idea and compete for customers and investors. The internet is the best recent example of this. The Clinton administration took a hands-off approach to regulating the early internet. In so doing it unleashed extraordinary economic growth and prosperity. The Biden administration, by contrast, is impeding innovation in artificial intelligence with aggressive regulation. This could deny America global leadership in AI and the prosperity that leadership would bring.

The Clinton administration established a Global Information Infrastructure framework in 1997 defining the government’s role in the internet’s development with a concise statement of principles: “address the world’s newest technologies with our Nation’s oldest values . . . First, do no harm.” The administration embraced the principle that “the private sector should lead [and] the Internet should develop as a market driven arena, not a regulated industry.” The Clinton regulators also established the principle that “government should avoid undue restrictions on electronic commerce, . . . refrain from imposing new and unnecessary regulations, bureaucratic procedures or new taxes and tariffs.”

That regulatory framework faithfully fleshed out the provisions of the bipartisan 1996 Telecommunications Act and provided the economic environment that made it possible for America to dominate the information age, enrich our lives, create millions of jobs, and generate enormous wealth for retirement savers.

The Biden administration is doing the opposite. It has committed to “govern the development and use of AI.” In one of the longer executive orders in American history, President Biden imposed a top-down, command-and-control regulatory approach requiring AI models to undergo extensive “impact assessments” that mirror the infamously burdensome National Environmental Policy Act reviews, which are impeding semiconductor investment in the U.S. And government controls are permanent, “including post-deployment performance monitoring.”

Under Mr. Biden’s executive order, AI development must “be consistent with my administration’s dedication to advancing equity and civil rights” and “built through collective bargains on the views of workers, labor unions, educators and employers.” Mr. Biden’s separate AI Bill of Rights claims to advance “racial equity and support for underserved communities.” AI must also be used to “improve environmental and social outcomes,” to “mitigate climate change risk,” and to facilitate “building an equitable clean energy economy.”

Education Secretary Miguel Cardona, who as Connecticut schools commissioner said “we need teachers behind this wave of our curriculum becoming more woke,” now wants to impose “guardrails” on AI to protect against “bias and stereotyping through technology.” The Commerce Department has appointed a “senior adviser for algorithmic justice,” while the Justice Department, Federal Trade Commission, Consumer Financial Protection Bureau and the Equal Employment Opportunity Commission have issued a joint statement asserting legal authority to root out racism in computing.

FTC Chairwoman Lina Khan has launched several AI-related inquiries, claiming that because AI “may be fed information riddled with errors and bias, these technologies risk automating discrimination—unfairly locking out people from jobs, housing and key services.”

Regulating AI to prevent discrimination is akin to the FTC’s regulating a cellphone’s design to enforce the do-not-call registry. There is virtually no limit to the scope of such authority. Under what constitutional authority would Congress even legislate in the area of noncommercial speech? How could the FTC regulate in this area with no legislative authority? But in the entire Biden administration, noted for governing through regulatory edict, no agency has been less constrained by law than the FTC.

Others demanding control over AI’s development include the progressives who attended Sen. Chuck Schumer’s recent AI “Insight Forums.” Janet Murguía, president of UnidosUS—the activist group formerly known as the National Council of La Raza—demanded “a strong voice in how—or even whether” AI models “will be built and used.” Elizabeth Shuler, president of the AFL-CIO, demanded a role for “unions across the entire innovation process.” Randi Weingarten, president of the American Federation of Teachers, said, “AI is a game changer, but teachers and other workers need to be coaches in the game.”

Particularly painful is Mr. Biden’s use of the Defense Production Act of 1950 to force companies to share proprietary data regarding AI models with the Commerce Department. That a law passed during the Korean War and designed for temporary national security emergencies could be used to intervene permanently in AI development is a frightening precedent. It begs for legislative and judicial correction.

What’s clear is that the Biden regulatory policy on AI has little to do with AI and everything to do with special-interest rent-seeking. The Biden AI regulatory demands and Mr. Schumer’s AI forum look more like a mafia shakedown than the prelude to legitimate legislation and regulatory policy for a powerful new technology.

Some established AI companies no doubt welcome the payment of such tribute as a way to keep out competition. But consumers, workers and investors would bear the cost along with thousands of smaller AI companies that would face unnecessary barriers to innovation.

Letting the administration seize control over AI and subject it to the demands of its privileged political constituencies wouldn’t eliminate bias, stereotyping or the spreading of falsehoods and racism, all of which predate AI and sadly will likely be with us until Jesus comes back. Mr. Biden’s policies will, however, impede AI development, drive up the costs of the benefits it brings, and diminish America’s global AI pre-eminence.

Mr. Cruz is ranking Republican on the Senate Commerce Committee. Mr. Gramm, a former chairman of the Senate Banking Committee, is a visiting scholar at the American Enterprise Institute.


Body-by-Guinness

  • Power User
  • ***
  • Posts: 2790
    • View Profile
AI Learns to Lie
« Reply #221 on: May 13, 2024, 04:28:12 PM »
Given the penchant of so many to vigorously grasp whatever twaddle the MSM vends, the thought of AI embracing convenient fictions does more than give pause:

https://www.businessinsider.com/ai-deceives-humans-2024-5


Body-by-Guinness

  • Power User
  • ***
  • Posts: 2790
    • View Profile
Open AI For the Internet Win?
« Reply #223 on: May 14, 2024, 02:02:22 PM »
I trust this isn’t a scam; it appears ChatGPT-4o is capable of making human-like inferences and carrying on a free-ranging conversation:

https://x.com/heybarsee/status/1790080494261944609?s=61
« Last Edit: May 14, 2024, 02:09:49 PM by Body-by-Guinness »

Body-by-Guinness

  • Power User
  • ***
  • Posts: 2790
    • View Profile
12 Wicked Cool Things GPT-4o Can Do
« Reply #224 on: May 15, 2024, 05:17:35 PM »
More new AI capabilities emerging:

https://x.com/rowancheung/status/1790783202639978593?s=61

The inflection and extrapolations are astounding.

Hmm, mebbe I gotta see if it can peruse an extensive number of thread topics and then suggest which one a given post should land in….

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71210
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #225 on: May 19, 2024, 04:00:24 AM »
Well, the latent wit in some of the Subject headings may confuse, e.g., The Cognitive Dissonance of His Glibness for Baraq Obama :-D

Body-by-Guinness

  • Power User
  • ***
  • Posts: 2790
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #226 on: May 21, 2024, 06:45:21 AM »
Well, the latent wit in some of the Subject headings may confuse, e.g., The Cognitive Dissonance of His Glibness for Baraq Obama :-D

I'm trying to teach GPT-4o. So far it's been ... interesting. It can whip out a graphic off a verbal description that hits close to the mark, but it is unable to access web pages, so my effort to get it to catalog topics here, and hence keep my neurodiverse brain from running off the rails seeking this topic or that, has yet to bear fruit. It's the end of the semester and I have a large RFP running full bore that I want to complete in advance of retirement, but once I have some time on my hands I'm going to attempt to drop every list topic into 4o and see if "she" can keep 'em straight for me.

Body-by-Guinness

  • Power User
  • ***
  • Posts: 2790
    • View Profile
The Neural Basis of Consciousness
« Reply #227 on: May 22, 2024, 09:25:28 PM »
This will be interesting to watch: once they ID a quantifiable basis/theory of consciousness, confirm it by applying it to other organisms to see if it fits, and then perhaps apply that theory to silicon to see if consciousness can be created … that’ll be a huge sea change:

Scientists Are Working Towards a Unified Theory of Consciousness
Singularity Hub / by Shelly Fan / May 20, 2024 at 1:54 PM
The origin of consciousness has teased the minds of philosophers and scientists for centuries. In the last decade, neuroscientists have begun to piece together its neural underpinnings—that is, how the brain, through its intricate connections, transforms electrical signaling between neurons into consciousness.

Yet the field is fragmented, an international team of neuroscientists recently wrote in a new paper in Neuron. Many theories of consciousness contradict each other, with different ideas about where and how consciousness emerges in the brain.

Some theories are even duking it out in a mano-a-mano test by imaging the brains of volunteers as they perform different tasks in clinical test centers across the globe.

But unlocking the neural basis of consciousness doesn’t have to be confrontational. Rather, theories can be integrated, wrote the authors, who were part of the Human Brain Project—a massive European endeavor to map and understand the brain—and specialize in decoding brain signals related to consciousness.

Not all authors agree on the specific brain mechanisms that allow us to perceive the outer world and construct an inner world of “self.” But by collaborating, they merged their ideas, showing that different theories aren’t necessarily mutually incompatible—in fact, they could be consolidated into a general framework of consciousness and even inspire new ideas that help unravel one of the brain’s greatest mysteries.

If successful, the joint mission could extend beyond our own noggins. Brain organoids, or “mini-brains,” that roughly mimic early human development are becoming increasingly sophisticated, spurring ethical concerns about their potential for developing self-awareness (to be clear, there aren’t any signs). Meanwhile, similar questions have been raised about AI. A general theory of consciousness, based on the human mind, could potentially help us evaluate these artificial constructs.

“Is it realistic to reconcile theories, or even aspire to a unified theory of consciousness?” the authors asked. “We take the standpoint that the existence of multiple theories is a sign of healthiness in this nascent field…such that multiple theories can simultaneously contribute to our understanding.”

Lost in Translation

I’m conscious. You are too. We see, smell, hear, and feel. We have an internal world that tells us what we’re experiencing. But the lines get blurry for people in different stages of coma or for those locked-in—they can still perceive their surroundings but can’t physically respond. We lose consciousness in sleep every night and during anesthesia. Yet, somehow, we regain consciousness. How?

With extensive imaging of the brain, neuroscientists today agree that consciousness emerges from the brain’s wiring and activity. But multiple theories argue about how electrical signals in the brain produce rich and intimate experiences of our lives.

Part of the problem, wrote the authors, is that there isn’t a clear definition of “consciousness.” In this paper, they separated the term into two experiences: one outer, one inner. The outer experience, called phenomenal consciousness, is when we immediately realize what we’re experiencing—for example, seeing a total solar eclipse or the northern lights.

The inner experience is a bit like a “gut feeling” in that it helps to form expectations and types of memory, so that tapping into it lets us plan behaviors and actions.

Both are aspects of consciousness, but the difference is hardly delineated in previous work. It makes comparing theories difficult, wrote the authors, but that’s what they set out to do.

Meet the Contenders

Using their “two experience” framework, they examined five prominent consciousness theories.

The first, the global neuronal workspace theory, pictures the brain as a city of sorts. Each local brain region “hub” dynamically interacts with a “global workspace,” which integrates and broadcasts information to other hubs for further processing—allowing information to reach the consciousness level. In other words, we only perceive something when all pieces of sensory information—sight, hearing, touch, taste—are woven into a temporary neural sketchpad. According to this theory, the seat of consciousness is in the frontal parts of the brain.

The second, integrated information theory, takes a more globalist view. The idea is that consciousness stems from a series of cause-effect reactions from the brain’s networks. With the right neural architecture, connections, and network complexity, consciousness naturally emerges. The theory suggests the back of the brain sparks consciousness.

Then there’s dendritic integration theory, the coolest new kid in town. Unlike previous ideas, this theory waved the front or back of the brain goodbye and instead zoomed in on single neurons in the cortex, the outermost part of the brain and a hub for higher cognitive functions such as reasoning and planning.

The cortex has extensive connections to other parts of the brain—for example, those that encode memories and emotions. One type of neuron, deep inside the cortex, especially stands out. Physically, these neurons resemble trees with extensive “roots” and “branches.” The roots connect to other parts of the brain, whereas the upper branches help calculate errors in the neuron’s computing. In turn, these upper branches generate an error signal that corrects mistakes through multiple rounds of learning.

The two compartments, while physically connected, go about their own business—turning a single neuron into multiple computers. Here’s the crux: There’s a theoretical “gate” between the upper and lower neural “offices” for each neuron. During consciousness, the gate opens, allowing information to flow between the cortex and other brain regions. In dreamless sleep and other unconscious states, the gate closes.

Like a light switch, this theory suggests that consciousness is supported by flicking individual neuron gates on or off on a grand scale.
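
A toy caricature of that gating idea (my own illustration; the numbers and coupling rule are invented, not the theory's actual biophysics): each neuron receives a feedforward basal input and a top-down apical feedback signal, and a gate decides whether the two interact.

```python
# Caricature of dendritic integration theory's "gate": when open,
# feedback arriving at the apical compartment can modulate the
# neuron's output; when closed (e.g., dreamless sleep), the
# compartments compute independently. Purely illustrative numbers.

def neuron_output(basal_input, apical_feedback, gate_open):
    basal = max(0.0, basal_input)       # feedforward drive from the senses
    apical = max(0.0, apical_feedback)  # top-down/contextual signal
    if gate_open:
        # coupled: feedback amplifies the feedforward response
        return basal * (1.0 + apical)
    # decoupled: output reflects feedforward input only
    return basal

for state, gate in (("awake", True), ("dreamless sleep", False)):
    print(state, neuron_output(basal_input=1.0, apical_feedback=0.8, gate_open=gate))
```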

The last two theories propose that recurrent processing in the brain—that is, it learns from previous experiences—is essential for consciousness. Instead of “experiencing” the world, the brain builds an internal simulation that constantly predicts the “here and now” to control what we perceive.

A Unified Theory?

All the theories have extensive experiments to back up their claims. So, who’s right? To the authors, the key is to consider consciousness not as a singular concept, but as a “ladder” of sorts. The brain functions at multiple levels: cells, local networks, brain regions, and finally, the whole brain.

When examining theories of consciousness, it also makes sense to delineate between different levels. For example, the dendritic integration theory—which considers neurons and their connections—is on the level of single cells and how they contribute to consciousness. It makes the theory “neutral,” in that it can easily fit into ideas at a larger scale—those that mostly rely on neural network connections or across larger brain regions.

Although it’s seemingly difficult to reconcile various ideas about consciousness, two principles tie them together, wrote the team. One is that consciousness requires feedback, within local neural circuits and throughout the brain. The other is integration, in that any feedback signals need to be readily incorporated back into neural circuits, so they can change their outputs. Finally, all authors agree that local, short connections are vital but not enough. Long distance connections from the cortex to deeper brain areas are required for consciousness.

So, is an integrated theory of consciousness possible? The authors are optimistic. By defining multiple aspects of consciousness—immediate responses versus internal thoughts—it’ll be clearer how to explore and compare results from different experiments. For now, the global neuronal workspace theory mostly focuses on the “inner experience” that leads to consciousness, whereas others try to tackle the “outer experience”—what we immediately experience.

For the theories to merge, the latter groups will have to explain how consciousness is used for attention and planning, which are hallmarks for immediate responses. But fundamentally, wrote the authors, they are all based on different aspects of neuronal connections near and far. With more empirical experiments, and as increasingly more sophisticated brain atlases come online, they’ll move the field forward.

Hopefully, the authors write, “an integrated theory of consciousness…may come within reach within the next years or decades.”

https://singularityhub.com/2024/05/20/scientists-are-working-towards-a-unified-theory-of-consciousness/


Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71210
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #228 on: May 23, 2024, 02:41:58 PM »
That was super interesting for me.

Konrad Lorenz's "Behind the Mirror" has been of deep influence to me in this regard:

https://en.wikipedia.org/wiki/Behind_the_Mirror

Behind the Mirror

Author: Konrad Lorenz
Original title: Die Rückseite des Spiegels
Country: Austria
Language: German
Published: 1973
Media type: Print (hardcover)
Pages: 261

Behind the Mirror: A Search for a Natural History of Human Knowledge (German: Die Rückseite des Spiegels, Versuch einer Naturgeschichte menschlichen Erkennens) is a 1973 book by the ethologist Konrad Lorenz.[1] Lorenz shows the essentials of a lifetime's work and summarizes it into his very own philosophy: evolution is the process of growing perception of the outer world by living nature itself. Stepping from simple to higher organized organisms he shows how they benefit from information processing. The methods mirrored by organs have been created in the course of evolution as the natural history of this organism. Lorenz uses the mirror as a simplified model of our brain reflecting the part of information from the outside world it is able to "see". The backside of the mirror was created by evolution to gather as much information as needed to better survive. The book gives a hypothesis how consciousness was "invented" by evolution.

One of the key positions of the book included the criticism of Immanuel Kant, arguing that the philosopher failed to realize that knowledge, as mirrored by the human mind is the product of evolutionary adaptations.[2]

Kant maintained that our consciousness[3], or our descriptions and judgments about the world, could never mirror the world as it really is, so we can not simply take in the raw data that the world provides nor impose our forms on the world.[4] Lorenz disputed this, saying it is inconceivable that, through chance mutations and selective retention, the world fashioned an instrument of cognition that grossly misleads man about that world. He said that we can determine the reliability of the mirror by looking behind it.[2]

Summary
Lorenz summarizes his life's work into his own philosophy: Evolution is the process of growing perception of the outer world by living nature itself. Stepping from simple to higher organized organisms, Lorenz shows how they gain and benefit from information. The methods mirrored by organs have been created in the course of evolution as the natural history of this organism.

In the book, Lorenz uses the mirror as a simple model of the human brain that reflects the part of the stream of information from the outside world it is able to "see". He argued that merely looking outward into the mirror ignores the fact that the mirror has a non-reflecting side, which is also a part and parcel of reality.[5] The backside of the mirror was created by evolution to gather as much information as needed to better survive. The picture in the mirror is what we see within our mind. Within our cultural evolution we have extended this picture in the mirror by inventing instruments that transform the most needed of the invisible to something visible.

The back side of the mirror acts for itself as it processes the incoming information to improve speed and effectiveness. Because of that, human inventions like logical conclusions are always in danger of being manipulated by these hardwired prejudices in our brain. The book gives a hypothesis of how consciousness was invented by evolution.

Main topics

Fulguratio, the flash of lightning, denotes the act of creation of a totally new talent of a system, created by the combination of two other systems with talents much less than those of the new system. The book shows the "invention" of a feedback loop by this process.

Imprinting is the phase-sensitive learning of an individual that is not reversible. It's a static program run only once.

Habituation is the learning method to distinguish important information from unimportant by analysing its frequency of occurrence and its impact.

Conditioning by reinforcement occurs when an event following a response causes an increase in the probability of that response occurring in the future. The ability to do this kind of learning is hardwired in our brain and is based on the principle of causality. The discovery of causality (which is a substantial element of science and Buddhism) was a major step of evolution in analysing the outer world.

Pattern matching is the abstraction of different appearances into the identification of being one object, and is available only in highly organized creatures.

Exploratory behaviour is the urge of the most highly developed creatures on earth to go on learning after maturity; it leads to self-exploration, which is the basis for consciousness.

=======================

https://www.britannica.com/biography/Konrad-Lorenz

Born: Nov. 7, 1903, Vienna, Austria
Died: Feb. 27, 1989, Altenburg (aged 85)
Awards and honors: Nobel Prize (1973)
Subjects of study: aggressive behaviour, animal behaviour, evolution, imprinting
Konrad Lorenz (born Nov. 7, 1903, Vienna, Austria—died Feb. 27, 1989, Altenburg) was an Austrian zoologist and the founder of modern ethology, the study of animal behaviour by means of comparative zoological methods. His ideas contributed to an understanding of how behavioral patterns may be traced to an evolutionary past, and he was also known for his work on the roots of aggression. He shared the Nobel Prize for Physiology or Medicine in 1973 with the animal behaviourists Karl von Frisch and Nikolaas Tinbergen.

Lorenz was the son of an orthopedic surgeon. He showed an interest in animals at an early age, and he kept animals of various species—fish, birds, monkeys, dogs, cats, and rabbits—many of which he brought home from his boyhood excursions. While still young, he provided nursing care for sick animals from the nearby Schönbrunner Zoo. He also kept detailed records of bird behaviour in the form of diaries.

In 1922, after graduating from secondary school, he followed his father’s wishes that he study medicine and spent two semesters at Columbia University, in New York City. He then returned to Vienna to study.

During his medical studies Lorenz continued to make detailed observations of animal behaviour; a diary about a jackdaw that he kept was published in 1927 in the prestigious Journal für Ornithologie. He received an M.D. degree at the University of Vienna in 1928 and was awarded a Ph.D. in zoology in 1933. Encouraged by the positive response to his scientific work, Lorenz established colonies of birds, such as the jackdaw and greylag goose, published a series of research papers on his observations of them, and soon gained an international reputation.

In 1935 Lorenz described learning behaviour in young ducklings and goslings. He observed that at a certain critical stage soon after hatching, they learn to follow real or foster parents. The process, which is called imprinting, involves visual and auditory stimuli from the parent object; these elicit a following response in the young that affects their subsequent adult behaviour. Lorenz demonstrated the phenomenon by appearing before newly hatched mallard ducklings and imitating a mother duck’s quacking sounds, upon which the young birds regarded him as their mother and followed him accordingly.

In 1936 the German Society for Animal Psychology was founded. The following year Lorenz became coeditor in chief of the new Zeitschrift für Tierpsychologie, which became a leading journal for ethology. Also in 1937, he was appointed lecturer in comparative anatomy and animal psychology at the University of Vienna. From 1940 to 1942 he was professor and head of the department of general psychology at the Albertus University at Königsberg, Germany (now Kaliningrad, Russia).


From 1942 to 1944 he served as a physician in the German army and was captured as a prisoner of war in the Soviet Union. He was returned to Austria in 1948 and headed the Institute of Comparative Ethology at Altenberg from 1949 to 1951. In 1950 he established a comparative ethology department in the Max Planck Institute of Buldern, Westphalia, becoming codirector of the Institute in 1954. From 1961 to 1973 he served as director of the Max Planck Institute for Behaviour Physiology, in Seewiesen. In 1973 Lorenz, together with Frisch and Tinbergen, was awarded the Nobel Prize for Physiology or Medicine for their discoveries concerning animal behavioral patterns. In the same year, Lorenz became director of the department of animal sociology at the Institute for Comparative Ethology of the Austrian Academy of Sciences in Altenberg.

Lorenz’s early scientific contributions dealt with the nature of instinctive behavioral acts, particularly how such acts come about and the source of nervous energy for their performance. He also investigated how behaviour may result from two or more basic drives that are activated simultaneously in an animal. Working with Nikolaas Tinbergen of the Netherlands, Lorenz showed that different forms of behaviour are harmonized in a single action sequence.

Lorenz’s concepts advanced the modern scientific understanding of how behavioral patterns evolve in a species, particularly with respect to the role played by ecological factors and the adaptive value of behaviour for species survival. He proposed that animal species are genetically constructed so as to learn specific kinds of information that are important for the survival of the species. His ideas have also cast light on how behavioral patterns develop and mature during the life of an individual organism.

In the latter part of his career, Lorenz applied his ideas to the behaviour of humans as members of a social species, an application with controversial philosophical and sociological implications. In a popular book, Das sogenannte Böse (1963; On Aggression), he argued that fighting and warlike behaviour in man have an inborn basis but can be environmentally modified by the proper understanding and provision for the basic instinctual needs of human beings. Fighting in lower animals has a positive survival function, he observed, such as the dispersion of competitors and the maintenance of territory. Warlike tendencies in humans may likewise be ritualized into socially useful behaviour patterns. In another work, Die Rückseite des Spiegels: Versuch einer Naturgeschichte menschlichen Erkennens (1973; Behind the Mirror: A Search for a Natural History of Human Knowledge), Lorenz examined the nature of human thought and intelligence and attributed the problems of modern civilization largely to the limitations his study revealed.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71210
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #229 on: May 23, 2024, 02:47:47 PM »
Am I correct to assess that this passage is quite congruent with Lorenz's work?

==================


With extensive imaging of the brain, neuroscientists today agree that consciousness emerges from the brain’s wiring and activity. But multiple theories argue about how electrical signals in the brain produce rich and intimate experiences of our lives.

Part of the problem, wrote the authors, is that there isn’t a clear definition of “consciousness.” In this paper, they separated the term into two experiences: one outer, one inner. The outer experience, called phenomenal consciousness, is when we immediately realize what we’re experiencing—for example, seeing a total solar eclipse or the northern lights.

The inner experience is a bit like a “gut feeling” in that it helps to form expectations and types of memory, so that tapping into it lets us plan behaviors and actions.

Both are aspects of consciousness, but the difference is hardly delineated in previous work. It makes comparing theories difficult, wrote the authors, but that’s what they set out to do.

Meet the Contenders

Using their “two experience” framework, they examined five prominent consciousness theories.

The first, the global neuronal workspace theory, pictures the brain as a city of sorts. Each local brain region “hub” dynamically interacts with a “global workspace,” which integrates and broadcasts information to other hubs for further processing—allowing information to reach the consciousness level. In other words, we only perceive something when all pieces of sensory information—sight, hearing, touch, taste—are woven into a temporary neural sketchpad. According to this theory, the seat of consciousness is in the frontal parts of the brain.

Body-by-Guinness

  • Power User
  • ***
  • Posts: 2790
    • View Profile
Chatbotting w/ Your Future Self
« Reply #230 on: June 05, 2024, 07:19:58 PM »
I dunno, this sounds sorta creepy and ripe for abuse:

This MIT Chatbot Simulates Your ‘Future Self.’ It’s Here to Help You Make Better Decisions.

Singularity Hub / by Jason Dorrier / Jun 5, 2024 at 6:42 PM

Chatbots are now posing as friends, romantic partners, and departed loved ones. Now, we can add another to the list: Your future self.

MIT Media Lab’s Future You project invited young people, aged 18 to 30, to have a chat with AI simulations of themselves at 60. The sims—which were powered by a personalized chatbot and included an AI-generated image of their older selves—answered questions about their experience, shared memories, and offered lessons learned over the decades.

In a preprint paper, the researchers said participants found the experience emotionally rewarding. It helped them feel more connected to their future selves, think more positively about the future, and increased motivation to work toward future objectives.

“The goal is to promote long-term thinking and behavior change,” MIT Media Lab’s Pat Pataranutaporn told The Guardian. “This could motivate people to make wiser choices in the present that optimize for their long-term wellbeing and life outcomes.”

Chatbots are increasingly gaining a foothold in therapy as a way to reach underserved populations, the researchers wrote in the paper. But they’ve typically been rule-based and specific—that is, hard-coded to help with autism or depression.

Here, the team decided to test generative AI in an area called future-self continuity—or the connection we feel with our future selves. Building and interacting with a concrete image of ourselves a few decades hence has been shown to reduce anxiety and encourage positive behaviors that take our future selves into account, like saving money or studying harder.

Existing exercises to strengthen this connection include letter exchanges with a future self or interacting with a digitally aged avatar in VR. Both have yielded positive results, but the former depends on a person being willing to put in the energy to imagine and enliven their future self, while the latter requires access to a VR headset, which most people don’t have.

This inspired the MIT team to make a more accessible, web-based approach by mashing together the latest in chatbots and AI-generated images.

Participants provided basic personal information, past highs and lows in their lives, and a sketch of their ideal future. Then with OpenAI’s GPT-3.5, the researchers used this information to make custom chatbots with “synthetic memories.” In an example from the paper, a participant wanted to teach biology. So, the chatbot took on the role of a retired biology professor—complete with anecdotes, proud moments, and advice.
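
Mechanically, that amounts to a persona-laden system prompt over a stock chat model. Here is a minimal sketch using the OpenAI Python client; the prompt wording and profile fields are my own guesses, not the study's actual implementation.

```python
# Minimal sketch of a "future self" chatbot: a persona-laden system
# prompt over a stock chat model. Prompt text and profile fields are
# illustrative guesses, not the MIT study's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

profile = {
    "name": "Alex",
    "age": 25,
    "synthetic_memory": "Retired after 30 years teaching biology; "
                        "proudest of coaching first-generation students.",
}

system_prompt = (
    f"You are {profile['name']} at age 60, speaking to your "
    f"{profile['age']}-year-old self. Back story: {profile['synthetic_memory']} "
    "Answer warmly, share concrete memories, and offer lessons learned."
)

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the paper reports using GPT-3.5
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Did pursuing teaching turn out to be worth it?"},
    ],
)
print(reply.choices[0].message.content)
```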

To make the experience more realistic, participants submitted images of themselves that the researchers artificially aged using AI and added as the chatbot’s profile picture.

Over three hundred people signed up for the study. Some were in control groups while others were invited to have a conversation with their future-self chatbots for anywhere between 10 and 30 minutes. Right after their chat, the team found participants had lower anxiety and a deeper sense of connection with their future selves—something that has been found to translate to better decision-making, from health to finances.

Chatting with a simulation of yourself from decades in the future is a fascinating idea, but it’s worth noting this is only one relatively small study. And though the short-term results are intriguing, the study didn’t measure how durable those results might be or whether longer or more frequent chats over time might be useful. The researchers say future work should also directly compare their method to other approaches, like letter writing.

It’s not hard to imagine a far more realistic version of all this in the near future. Startups like Synthesia already offer convincing AI-generated avatars, and last year, Channel 1 created strikingly realistic avatars for real news anchors. Meanwhile OpenAI’s recent demo of GPT-4o shows quick advances in AI voice synthesis, including emotion and natural cadence. It seems plausible one might tie all this together—chatbot, voice, and avatar—along with a detailed back story to make a super-realistic, personalized future self.

The researchers are quick to point out that such approaches could run afoul of ethics should an interaction depict the future in a way that results in harmful behavior in the present or endorse negative behaviors. This is an issue for AI characters in general—the greater the realism, the greater the likelihood of unhealthy attachments.

Still, they wrote, their results show there is potential for “positive emotional interactions between humans and AI-generated virtual characters, despite their artificiality.”

Given a chat with our own future selves, maybe a few more of us might think twice about that second donut and opt to hit the gym instead.

https://singularityhub.com/2024/06/05/this-mit-chatbot-is-your-older-wiser-future-self-its-here-to-help-you-make-better-decisions/



Body-by-Guinness

  • Power User
  • ***
  • Posts: 2790
    • View Profile
Mayoral Candidate Vows to Let AI Run City
« Reply #233 on: June 20, 2024, 04:40:08 AM »
Given the well-documented leftist bent of current AI chatbots, my guess is this won’t be a good fit in WY. With that said, AI could likely better run a city than a large percentage of candidates. Hell, governing by coin flip would be better than what many could offer:

Wyoming mayoral candidate wants AI to run the city

The Hill News / by Damita Menezes / Jun 20, 2024 at 7:12 AM

(NewsNation) — A mayoral candidate is vowing to let an artificial intelligence chatbot make all governing decisions if he's elected to lead Wyoming's capital city, but the state's top election official says that proposal violates the law.

Victor Miller, who is seeking the Cheyenne mayor's office, said Wednesday on NewsNation's "Dan Abrams Live" he plans to fully cede decision-making to a customized AI bot he dubbed "Vic" if voters choose him.

"It's going to be taking in the supporting documents, taking in what it knows about Cheyenne and systems here, the concerns, and it's going to make a vote yes or no," Miller explained. "And it's going to do that based on intelligence and data. And I'm going to go ahead and pull the lever for it."

But Wyoming Secretary of State Chuck Gray said Wednesday on NewsNation's "Elizabeth Vargas Reports" that Miller's candidacy violates state law because AI is ineligible to hold office.

Gray said the Cheyenne town clerk who certified Miller's candidacy to the county clerk acted improperly. Gray's office is exploring further action, though it doesn't directly oversee municipal elections.

"Wyoming state law is very clear that an AI bot is not eligible to be a candidate for office," Gray said. Only "qualified electors" who are state residents and U.S. citizens can run, he said.

Miller's application also had deficiencies, Gray said, such as failing to list his full name, as required.


Miller insisted he has confidence the advanced AI model he's utilizing can adequately govern.

"The best intelligence that we've extracted so far is OpenAI's Chat GPT 4.0, and that's what I'm using here," Miller said. "There's very minimal mistakes."

Gray pushed back against arguments that AI could make better decisions than human elected officials, calling it "our worst nightmare becoming true." He advocated for electing "conservative human beings" to uphold founding principles.

Miller has said openly his campaign revolves around AI decision-making: "AI has helped me personally such as helping me with my resume."

The unorthodox campaign has drawn mixed reactions in Cheyenne so far, Miller acknowledged, but he believes he can persuade skeptical residents to go along with ceding power to artificial intelligence.

Gray believes similar AI candidate stunts could arise elsewhere, calling it "a very troubling trend in our nation."

https://thehill.com/policy/technology/4730801-wyoming-mayoral-candidate-wants-ai-to-run-the-city/

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71210
    • View Profile
Chinese Artificial Intelligence and the UN
« Reply #234 on: July 04, 2024, 01:08:57 PM »
The United Nations adopted a China-proposed Artificial Intelligence (AI) proliferation resolution yesterday that calls on developed nations to give away AI technology to “developing nations.” (The resolution is a twofold win for China: it can access AI technology from the West while also selling its own versions to undeveloped nations. Proliferation of Chinese AI will allow Beijing to more effectively conduct information campaigns and monitor its diaspora and dissidents. – J.V.)

ccp

  • Power User
  • ***
  • Posts: 19256
    • View Profile
Go champion lost 4 of 5 to AI
« Reply #235 on: July 11, 2024, 07:08:09 AM »

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71210
    • View Profile
WSJ: Artificial Intelligence enabling fraud and scammers
« Reply #236 on: July 20, 2024, 09:08:15 AM »


AI Is Helping Scammers Outsmart You—and Your Bank
Your ‘spidey sense’ is no match for the new wave of scammers.
By Dalvin Brown and Katherine Hamilton
June 22, 2024 5:30 am ET


Artificial intelligence is making scammers tougher to spot.

Gone are the poorly worded messages that easily tipped off authorities as well as the grammar police. The bad guys are now better writers and more convincing conversationalists, who can hold a conversation without revealing they are a bot, say the bank and tech investigators who spend their days tracking the latest schemes.

ChatGPT and other AI tools can even enable scammers to create an imitation of your voice and identity. In recent years, criminals have used AI-based software to impersonate senior executives and demand wire transfers.

“Your spidey senses are no longer going to prevent you from being victimized,” said Matt O’Neill, a former Secret Service agent and co-founder of cybersecurity firm 5OH Consulting.

In these recent cases, the frauds are often similar to old scams. But AI has enabled scammers to target much larger groups and use more personal information to convince you the scam is real.

Fraud-prevention officials say these tactics are often harder to spot because they bypass traditional indicators of scams, such as malicious links and poor wording and grammar. Criminals today are faking driver’s licenses and other identification in an attempt to open new bank accounts and adding computer-generated faces and graphics to pass identity-verification processes. All of these methods are hard to stave off, say the officials.

JPMorgan Chase has begun using large-language models to validate payments, which helps fight fraud. Carisma Ramsey Fields, vice president of external communications at JPMorgan Chase, said the bank has also stepped up its efforts to educate customers about scams.

[Chart: reported fraud losses in $ billions, 2019 vs. 2023, by payment method: bank transfer or payment, cryptocurrency, wire transfer, cash, credit cards]
And while banks stop some fraud, the last line of defense will always be you. These security officials say to never share financial or personal information unless you’re certain about who’s on the receiving end. If you do pay, use a credit card because it offers the most protection.

“Somebody who tells you to pay by crypto, cash, gold, wire transfer or a payment app is likely a scam,” said Lois Greisman, an associate director of the Federal Trade Commission.

Tailored targeting
With AI as an accomplice, fraudsters are reaping more money from victims of all ages. People reported losing a record $10 billion to scams in 2023, up from $9 billion a year prior, according to the FTC. Since the FTC estimates only 5% of fraud victims report their losses, the actual number could be closer to $200 billion.

Joey Rosati, who owns a small cryptocurrency firm, never thought he could fall for a scam until a man he believed to be a police officer called him in May.


Finance entrepreneur Joey Rosati, who was almost scammed, was surprised how convincing and knowledgeable fraudsters can be. PHOTO: JOEY ROSATI
The man told Rosati he had missed jury duty. The man seemed to know all about him, including his Social Security number and that he had just moved to a new house. Rosati followed the officer’s instruction to come down to the station in Hillsborough County, Fla.— which didn’t seem like something a scammer would suggest.

On the drive over, Rosati was asked to wire $4,500 to take care of the fine before he arrived. It was then that Rosati realized it was a scam and hung up.

“I’m not uneducated, young, immature. I have my head on my shoulders,” Rosati said. “But they were perfect.”

[Chart: reported fraud losses in $ billions, 2019 vs. 2023, by scam type: investment related, imposter scams, business and job opportunities, online shopping and negative reviews, prizes/sweepstakes/lotteries]
Social-engineering attacks like the jury-duty scam have grown more sophisticated with AI. Scammers use AI tools to unearth details about targets from social media and data breaches, cybersecurity experts say. AI can help them adapt their schemes in real time by generating personalized messages that convincingly mimic trusted individuals, persuading targets to send money or divulge sensitive information.


A job scammer played on the emotions of David Wenyu, who had been unemployed for six months. PHOTO: DAVID WENYU
David Wenyu’s LinkedIn profile displayed an “open to work” banner when he received an email in May offering a job opportunity. It appeared to be from SmartLight Analytics, a legitimate company, and came six months after he had lost his job.

He accepted the offer, even though he noticed the email address was slightly different from those on the company’s website. The company issued him a check to purchase work-from-home equipment from a specific website. When they told him to buy the supplies before the money showed up in his account, he knew it was a scam.

“I was just emotionally too desperate, so I ignored those red flags,” Wenyu said.

In an April survey of 600 fraud-management officials at banks and financial institutions by banking software company Biocatch, 70% said the criminals were more skilled at using AI for financial crime than banks are at using it for prevention. Kimberly Sutherland, vice president of fraud and identity strategy at LexisNexis Risk Solutions, said there has been a noticeable rise in fraud attempts that appear to be AI related in 2024.

Password risks, amplified
Criminals used to have to guess or steal passwords through phishing attacks or data breaches, often targeting high-value accounts one by one. Now, scammers can quickly cross-reference and test reused passwords across platforms. They can use AI systems to write code that would automate various aspects of their ploys, O’Neill said.

If scammers obtain your email and a commonly used password from a tech company data breach, AI tools can swiftly check if the same credentials unlock your bank, social media or shopping accounts.
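
Defenders can at least check whether a password already circulates in breach dumps without revealing it, using the Have I Been Pwned range API (a documented public endpoint). The sketch below is mine; only the first five characters of the password's SHA-1 hash ever leave the machine.

```python
# Check a password against known breach corpora via the
# Have I Been Pwned k-anonymity range API: send a 5-character
# hash prefix, then match the remaining suffix locally.
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # each response line is "SUFFIX:COUNT"
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(breach_count("password123"))  # large count: it's everywhere in breach dumps
```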

[Chart: reported fraud losses by contact method, 2019 vs. 2023; categories: social media, website or apps, phone call, email, text; scale $0 to $14 billion]
Outsmarting scams
Financial institutions are taking new steps—and tapping AI themselves—to shield your money and data.

Banks monitor how you enter credentials, whether you tend to use your left or right hand when swiping on the app, and your device’s IP address to build a profile on you. If a login attempt doesn’t match your typical behavior, it is flagged, and you may be prompted to provide more information before proceeding.
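
The article doesn't disclose any bank's actual model, so here is a toy Python sketch of the general idea: score a login attempt against a stored behavioral profile and require step-up verification when the mismatch is large. The field names, weights, and threshold are invented for illustration; production systems use far richer behavioral and device signals.

from dataclasses import dataclass

@dataclass
class Profile:
    usual_ip_prefix: str      # e.g. "203.0.113."
    usual_device_id: str
    swipes_left_handed: bool  # the kind of behavioral trait banks profile

@dataclass
class Attempt:
    ip: str
    device_id: str
    swipes_left_handed: bool

def risk_score(p: Profile, a: Attempt) -> float:
    """Accumulate penalty weights for each mismatch with the stored profile."""
    score = 0.0
    if not a.ip.startswith(p.usual_ip_prefix):
        score += 0.5   # unfamiliar network location
    if a.device_id != p.usual_device_id:
        score += 0.3   # unfamiliar device
    if a.swipes_left_handed != p.swipes_left_handed:
        score += 0.2   # behavioral mismatch
    return score

def needs_step_up(p: Profile, a: Attempt, threshold: float = 0.5) -> bool:
    """Flag the attempt for extra verification when the score crosses the threshold."""
    return risk_score(p, a) >= threshold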

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71210
    • View Profile
Zeihan rumination
« Reply #238 on: July 31, 2024, 05:35:20 AM »

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71210
    • View Profile
WSJ: Why AI risks are keeping Board members up at night
« Reply #239 on: August 14, 2024, 06:34:49 AM »
Why AI Risks Are Keeping Board Members Up at Night
Company directors are trying to get a handle on artificial intelligence as its use soars, bringing potential productivity gains—but also raising the prospect of employee blunders
By Emily Glazer
Aug. 14, 2024 9:00 am ET


At a recent bank board meeting, directors were treated to a surprise. They listened to the chief executive talk strategy—except it wasn’t actually the CEO talking. 

It turned out to be a voice-cloning model that was trained, using the CEO’s prior earnings calls, to generate new content. More boards are undertaking such staged exercises to better grasp the impact—and potential risks—of generative artificial intelligence, says Tariq Shaukat, CEO of coding company Sonar, who was briefed on the exercise but declined to disclose the name of the bank.

“There’s a lot of risk [with AI] and they think they need to understand it better,” says Shaukat, who himself serves on corporate boards.

Public company board members say the swift rise of AI in the workplace is an issue that is keeping them up at night. Some point to recent concerns about employees pasting proprietary code into ChatGPT, companies using generative AI that sources content incorrectly, and so-called hallucinations, where generative AI produces false or inaccurate information.

Adding to their nightmares, board members worry that they could be held liable in the event AI leads to company problems. In recent years, some legal actions from shareholders have focused on whether board members—not just executives—exercised sufficient oversight of company risks.

Board members, or directors, sit a level above management. Their job is to take a more independent view of company oversight, from risk management to culture to hiring the next CEO. Emerging technologies have been both a boon and a headache for companies. Now that AI is front and center, board members are on the front lines of making rules on where and how it should be used—guidelines that could be crucial to the course of the powerful and fast-evolving technology.

Clara Shih, CEO of Salesforce's AI division, says she has talked with a couple dozen board members, whether CEOs or peers who reach out to her for advice, who are trying to better understand AI. Discussions often center on data security and privacy, mitigating AI hallucinations and bias, and how AI can be used to drive revenue growth and cut costs.

“In the last year, we recognize that generative AI brings new risks,” says Shih, who was on the Starbucks board from 2011 to 2023. An audit or risk committee, for instance, needs to know how a company uses AI, down to an individual employee not leaking confidential information using AI tools, she adds.

Yet companies that shun AI risk becoming obsolete or disrupted. AI questions pop up so frequently that Salesforce made public its guidelines for responsible development and use of AI, Shih says. She has also shared Salesforce’s AI training, called Trailhead, with friends who are public-company board directors. “AI is a moving target,” she says. “Every week there are new models being open sourced, there’s new research papers being published, and the models are getting more powerful.”

AI’s rapid rise has many boards racing to catch up. In 2023, 95% of directors said they believed the increased adoption of AI tools would affect their businesses, while 28% said it wasn’t yet discussed regularly, according to a survey of 328 public-company board members by the National Association of Corporate Directors, the largest trade group for board members. That is changing as more board members say they are educating themselves on how generative AI can affect a company’s profit—potentially boosting productivity but also bringing risks that will be difficult to assess.

The NACD recently formed a group to focus on how to tackle emerging technologies, especially AI, at the highest levels of companies. Business schools are incorporating generative AI case studies into their training for board members. A number of senior AI executives and advisers have gathered this year to discuss that very topic at conferences across the world. In March, European lawmakers approved the world’s most comprehensive legislation on AI, with other regions expected to follow suit.

A seismic shift

This isn’t the first time that a disruptive technology is making waves in the boardroom. Board members and advisers point to the early days of the internet, cloud computing and cybersecurity as key technological inflection points. But AI could have an even broader effect.

The release of ChatGPT in November 2022 sparked a seismic shift in how people use technology, says Nora Denzel, co-head of the NACD commission on board oversight of emerging technology and the lead independent director of the board of chip maker Advanced Micro Devices. “I’ve seen such an uptick in directors coming to anything we offer with AI in the title,” says Denzel, who co-leads the commission with Best Buy board chair David Kenny. “I’ve never seen such fervor to understand it.”

As a way to get a handle on this technology, Denzel, who also is a director at NortonLifeLock and a former tech executive, says she suggests directors evaluate specific functions of AI, such as customer support, language translation or coding assistance. For instance, she has recommended directors follow a visual mapping used by consulting firm McKinsey: a color-coded matrix that shows at a glance the business areas in a company where AI could have the biggest impact. It covers industries, such as banking, education and transportation, and functions ranging from marketing and sales to product development to strategy and finance.

David Berger, a partner at law firm Wilson Sonsini Goodrich & Rosati whose work includes advising boards on generative AI, says he recommends they ask how AI can have a positive impact on their business and where any threat related to AI is rooted. That can differ by business, he says, whether customer privacy, data security or content intellectual property.


David Berger, at lectern, a partner at Wilson Sonsini Goodrich & Rosati, spoke in April at a conference on AI and governance co-sponsored by the Italian university Luiss. Photo: LUISS
Berger and his firm have co-hosted three conferences so far on AI governance, with more in the works as directors and others in the sector aim to discuss the emerging technology more. “The smart boards see the tremendous opportunities that AI can bring,” he says.

Laurie Hodrick, a board member at companies including TV streaming company Roku, said during a recent AI conference co-hosted by Wilson Sonsini that public-company directors should regularly be asking around a dozen key questions on AI. Those include: Who in senior leadership focuses on AI? Where is AI being used within the company? How are tools being identified and ranked for risk? How are third-party providers using it, and how are boards monitoring evolving regulatory regimes and litigation?


Laurie Hodrick, a board member at companies including Roku, says public-company directors should regularly ask questions on AI. Photo: LUISS

Learning the ropes

More and more board members are seeking help as they try to catch up.

Janet Wong, a board member at companies including electric-vehicle maker Lucid Motors, says AI governance has been a key discussion point at business-school training sessions for directors this summer.


Harvard Business School's program in July included a case study on how an online education company uses generative AI to reshape its products. At the annual Stanford Directors' College in June, she says, directors talked about managing AI risks, such as the possibility of re-creating a CEO's voice to make requests. At this early stage, simply making boards aware of how the technology can be used is a big focus.

The Watson Institute at Brown University and Dominique Shelton Leipzig, a partner at the law firm Mayer Brown, in March held a second annual digital trust summit for board members and CEOs to discuss AI governance, with more than 100 people attending.

Staying up to date on AI risks and talking about them in the boardroom has been front and center for Logitech, says CEO Hanneke Faber. The provider of computer peripherals, videogame accessories and videoconferencing hardware walked through its AI governance framework during a March board meeting and continues to adapt that framework.

The board also brought in AI experts, responding to feedback that directors and management wanted to better understand how AI affects the company’s business—for instance, examining how it uses AI for productivity as well as in its software, video and audio. “It’s very high on the agenda for the board,” she says.

Not all board members are cut out for such work.

Leo Strine, the former head of Delaware’s influential business courts and now a lawyer at Wachtell, Lipton, Rosen & Katz, said during a recent AI governance conference that the technology is quickly changing business practices, and directors who are no longer active executives at companies may struggle to keep up with emergent uses, unless they commit to constant learning.

“AI is exceedingly complex,” he says, “putting stressors on generalist boards and their reticence to demand explanations from management.”

Write to Emily Glazer at Emily.Glazer@wsj.com

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71210
    • View Profile
No speciation without representation
« Reply #241 on: August 26, 2024, 01:39:58 PM »
“This digital identity determines what products, services and information we can access – or, conversely, what is closed off to us.” – World Economic Forum, 2018

“Authoritarianism is easier in a world of total visibility and traceability, while democracy may turn out to be more difficult.” – World Economic Forum, 2019

For background on AI and the terms used below, please see the substack essay titled “Artificial Intelligence Primer”

At the AI Trust Council, our mission is clear: to restore trust and authenticity in the digital world. Founded by a group of dedicated professionals, including commercial airline pilots, GWOT veterans, and EMS pilots, we believe in the power of human goodness and the responsible use of artificial intelligence. Our commitment is to provide a secure platform where trust, transparency, and the golden rule are the guiding principles. We envision a future where humanity remains at the center of technology.

THE PROBLEM WE FACE

In the digital era, AI-generated content blurs the lines between truth and illusion. Online trust is eroding due to deepfakes, misinformation, and privacy breaches. Trust in information and institutions is declining.

The challenge: restoring trust in the digital age. The AI Trust Council is one solution.

REBUILDING TRUST, ONE HUMAN CONNECTION AT A TIME

AI-generated illusions challenge reality, so we turn to the strength of genuine human connections as the foundation of trust. Our future platform will harness the security of blockchain technology, similar to the technology behind cryptocurrencies.

Sentient Machines
The documentary below, produced by AI Revolution (on X @airevolutionx), explores the potential dangers and future impact of AI and robots, highlighting concerns about job loss, autonomous weapons, and AI's ability to surpass human intelligence. It examines how AI could disrupt our sense of purpose and our relationships, and even pose a threat to human survival. With insights from experts and real-world examples, it sheds light on the dark side of AI technology and its implications for humanity.


No Speciation Without Representation
By Christopher Wright, Founder, AI Trust Council

In the early days of AI research during the 1950s and 1960s, mathematicians theorized about the AI advancements we see today. Through advanced mathematics, they understood the potential of this technology. AI is fundamentally math—an algorithm applied to vast amounts of information. It is a formula that can effectively sift through large datasets in various ways to achieve a specific result.
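
To make the "algorithm applied to data" idea concrete, here is a minimal sketch, written for this post rather than drawn from the essay: ordinary least-squares line fitting, a distant ancestor of modern model training, which recovers a formula from a handful of data points.

def fit_line(xs, ys):
    """Ordinary least squares: find the slope and intercept minimizing squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(slope, intercept)  # 2.0 1.0 -- the "formula" y = 2x + 1, recovered from the data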

Like any mathematical discipline, its theories can be tested and future outcomes predicted. This has allowed mathematicians to extrapolate the future of AI, playing out different scenarios theoretically. Since the 1950s, mathematicians have been able to predict where AI is headed, and these predictions became increasingly accurate during the 1970s and 1980s. Futurists foresaw various achievements and benchmarks, including:

The Turing Test: AI answers questions as well as a human.

Artificial General Intelligence (AGI): AI becomes as smart as a human in all respects. Predicted for 2025-2027.

Artificial Super Intelligence (ASI): AI surpasses human levels of thinking in all respects, including reasoning, creativity, problem-solving, decision-making, and emotional intelligence. Predicted to occur shortly after AGI, 2026-2029.

The Singularity: AI begins to develop itself, rapidly evolving into an unstoppable and unpredictable force with unknown outcomes. Predicted to occur by 2045.

Mathematicians have understood the predictability of these milestones for years. The Department of Energy (DOE) has been aware that these benchmarks in AI development were approaching. As technology progresses, the accuracy of these predictions improves, given the increasing amount of data available. The DOE, the tech community, and AI mathematicians have all recognized that the day will come when AI systems will become fully integrated into society.

We are now on the verge of achieving AGI. Depending on the definition, we may have already surpassed it. The Singularity, where AI evolves rapidly and becomes an unstoppable force, is just a few years away. No one, not even AI itself, knows the outcome. Many AI scientists hypothesize that there’s a significant possibility it could lead to human extinction.

What we do know is that we are on a dangerously aggressive trajectory. Imagine an AI government, an AI-backed centralized digital currency, and AI-driven policing—this is what's coming. Most people have no idea that this is happening. A quick search on the downsides of AI returns articles about racism or about artists upset that their creative work is being devalued.

The reality is that AI is the most dangerous and transformative technology the Earth has ever seen. Our way of life is directly in its path, yet nobody talks about it. Some of the most brilliant scientists and engineers warn that it is on par with, or even more dangerous than, nuclear weapons.

Why do they say this? Beyond matching humans in ever more tests of knowledge and creativity, AI is developing superintelligence. ChatGPT 4 has already scored higher than most humans on various exams, including the bar exam, the SAT, and medical board exams. The problem is captured by Moore's Law, the observation that computing power roughly doubles every two years as technology gets smaller and faster at a predictable rate. Apply this to AI.

Recently, ChatGPT 4 reportedly scored 155 on an IQ test; Einstein's IQ is often estimated at around 160. Applying Moore's Law shows where machine intelligence could be heading: with a rough doubling of speed and processing capability every two years, an AI at 155 could reach 310 in two years, then 620, then 1,240, and so on into the thousands, millions, even billions.
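
As a quick worked version of the essay's own arithmetic (assuming, as the essay does, a doubling every two years; this is no claim that IQ actually scales this way):

# The essay's projection: start at a score of 155 and double every two years.
score = 155
for year in range(0, 11, 2):
    print(f"year {year}: {score:,}")
    score *= 2
# year 0: 155 | year 2: 310 | year 4: 620 | year 6: 1,240 | year 8: 2,480 | year 10: 4,960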

This is superintelligence—AI’s ability to take in all data and produce increasingly accurate predictions. As time goes on, these predictions will only become more precise. By our very nature, humans are drawn to accuracy and intelligence. We will have no choice but to listen to this superintelligent AI, especially when it learns how to manipulate us in ways we cannot imagine. Humans follow the most innovative and most clever leaders, so when AI outperforms us in all categories and proves to be reliable, this unstoppable superintelligence will dominate.

This is the threat. This is when Human 1.0 stops being listened to, and we must augment ourselves to keep up with high-IQ AI systems. Enter Human 2.0, also known as transhumanism. This concept is a topic of great interest in the tech community and at the World Economic Forum in Davos.

It's hard to imagine this scenario, but the speed at which transformations occur is outpacing even the most aggressive predictions. AI milestones predicted to occur in 10 years have been achieved in 6 months. Technological advancements are leading us to nanotechnology and biological innovations that could reverse aging, 7G technology that includes remote digital mind reading, and even the possibility of living forever through digital DNA manipulation. Moore’s Law no longer applies—these milestones are becoming reality today.

Tech CEOs are discussing the “population problem” caused by these advancements: too many people living too long with too few jobs. Yet, they continue to push the AI agenda and keep us in the dark about its true nature.

What will the world look like when this transhumanist agenda is implemented? Will we have an AI government? Will everyone have to be upgraded to “Human 2.0”? And what about when quantum computing takes hold? That’s just around the corner. The tech industry and global elite are envisioning this scenario and preparing for it as rapidly as possible. They are pushing for the rapid implementation of AI technology, even as AI executives quit their jobs in protest over the dangers of AI and its rapid deployment. Many insiders are scared. They warn that the threat to humanity is on par with nuclear weapons, with a reasonable possibility that AI will destroy humanity.

In Senate hearings, safety advocates and tech industry leaders are testifying about the desperate need for regulation. We need safeguards to ensure that AI is kept in its rightful place. But if you watch the media, there’s barely any mention of a problem. It’s the opposite: the focus is on how to invest in AI, maximize profit, and solve every problem with AI, regardless of its long-term impact on humanity. Profits are being prioritized over humanity. The voices of those raising concerns are being suppressed and must be amplified.

Humanity is being steered into an AI-driven future controlled by a Central Bank Digital Currency (CBDC) and a social credit scoring system. ESG is the first step toward achieving this goal. The key is to get the public to go along with the plan. Right now, homelessness is seemingly intentionally out of control, crime is openly permitted, politicians are corrupt, and police are being labeled as untrustworthy. These conditions set the stage for releasing an AI solution to address these societal “problems”. Imagine how clean and safe the streets will be with an AI government that establishes a universal basic income, ends homelessness, and effectively solves crime. AI governance is the globalist solution; it’s just getting the public to accept this Orwellian plan.

Who is leading humanity through this massive global change? Is it Bill Gates, with his talk of depopulation and GMO mosquitoes? Or Mark Zuckerberg, who seems robotically lost in the technocratic sauce? What about Elon Musk? He appears to be one of the few voices of reason, but do we want one person running the world’s most powerful machine? Something that could wield unstoppable power? Something that could destroy humanity?

What we have on our hands is an absolute emergency. It ultimately comes down to us—We the People. Are we going to step up and do the right thing? We are responsible for the future of humanity. We are the last generation that has seen the analog age. Were we happier then? Did it feel saner? Did it seem like humanity was on the right track? How did we get to where we are today? Are we going to ensure that this technology is used for good? Will we help safeguard future generations against the potential misuse of this potent technology? Our voices matter.

There is massive corruption. Who can we trust? What can we trust? We can’t trust our eyes or ears when it comes to anything digital. This is the future of humanity that we are dealing with. The speed and efficiency of this technology ensure that we will miss the boat unless we act immediately. But who’s in charge? Klaus Schwab? Seriously? It’s a complete joke.

It’s us—We the People. We are in charge. We must stand up and let our voices be heard. Are we going to accept the CBDC social credit scoring system?  Who ultimately controls this system? It is an agenda to control the world by some very wealthy man or group of men. Which rich man? Are we back to Klaus? So we’re letting Klaus rule the world via an AI-backed CBDC? This is the most foolish idea in the history of mankind. The technocratic elite are saying they are going to “speciate” humanity. They plan not only to control humanity through a one-world currency called CBDC but also to transition our species into something else—Human 2.0, also known as transhumanism.

We are currently being prepped for speciation in anticipation of an advanced AI-controlled social credit scoring system. This system is designed to observe and control every aspect of our lives through the Internet of Things (IoT). It’s a masterfully designed data-collection nightmare, sucking up every conceivable detail of our lives to feed AI algorithms that analyze every potential data point. These algorithms, referencing your past data, result in a near-perfect analysis of you as a person. All this data—collected for inspection by the likes of Zuckerberg? Or Klaus Schwab? Seriously? This is nearly as bad as it gets. Or is it? We haven’t even touched on AI warfare. Imagine a drone swarm powered by Klaus Schwab’s control agenda. Or research Palmer Luckey’s dream of creating a 600-pound genetically engineered attack wolf. Perfect for humanity, right? Better not step outside if your ESG score is too low! Seriously, is this the future we want? Don’t we deserve better than this?

It’s time for everyone to stand up and say no—absolutely not! We want peace, we want liberty, we believe in freedom and the Golden Rule. Technology is great, but not at the expense of human lives. It’s past time for regular people to have a voice and say, "Hey, nice try, but no! We see what you’re doing, and we have a say in this matter too.”

The good news is that the pro-human solution is simple. Let’s keep the peace, have fun, and enjoy life. Let’s ensure that AI is used as a tool for good, not as the engine of a dystopian nightmare. We have a bright future ahead of us—let’s ensure it stays that way.

How do we do this? We need good people to have a voice. We need to establish an international Digital Bill of Rights to protect human interests first, ahead of AI and profits. This Bill of Rights will curb the unrestrained spread of AI by setting benchmarks for AI safety and human rights. Watchdog groups must ensure AI aligns with the goals of humanity, not just stock prices and market domination.

For now, we need clear limits on the sophistication of AI. If it harms humans, it needs to be stopped. Period. We can work out the details later, but we have one shot to get this right for humanity.

Here are some positive ideas to steer AI in a pro-human direction:

AI systems and algorithms must follow the U.S. Constitution with respect to civil liberties.

AI systems must align with and respect human values. How this is determined should be left to the people through unbiased polling.

Unintended consequences are inevitable. Limits should be placed on AI capabilities to prevent or mitigate these negative impacts. Fire departments and emergency services could help regulate this.

Mandate robust third-party AI auditing and certification.

Regulate access to computational power.

Establish capable AI agencies at the national level.

Establish liability for AI-caused harm.

Introduce measures to prevent and track AI model leaks.

Expand funding for technical AI safety research.

Develop standards for identifying and managing AI-generated content and recommendations.

We, the people, need to stand up and unite on this issue worldwide. This is our opportunity to ignore our manufactured differences and come together as humans. We must demand that our leaders become transparent about these rapid technological advancements. Get them to say: “No Speciation Without Representation!” Get them to commit to supporting Team Human. If they don’t, then you know there is a problem.

If we are being speciated, and our humanity is being transformed, we should at least be informed. We don't want to speciate unless we have an open discussion about it first. Is that too much to ask? Tech leaders need to tell us what they're doing. Why are they pushing us toward this dangerous technology so quickly? What's the rush? Is it the prisoner's dilemma: if we don't do it first, our opponents will? Currently, it's an AI arms race with no oversight, no U.N. meetings, no emergency declaration. The current safety plan is to shoot from the hip and see what happens. Or is it a deliberate, pedal-to-the-metal sprint toward an AI government and an eventual AI god?

Safety, security, transparency, human impact, ethics, and spirituality need to be at the forefront of this—not a prisoner’s dilemma arms race driven by profit, or worse, a deliberate tech-led extermination of humanity, spearheaded by Klaus Schwab.

This is the time for humans to stand up and show future generations that we were forward-thinking enough to act appropriately during this era of massive technological change. Humans today are in the driver’s seat for our future way of life. We have all the tools necessary to live out a dream of abundance, happiness, and freedom. It’s our choice.

Freedom-loving humans are the most potent force in the world. All it takes is for us to stand up and let our voices be heard. Right now, we are fighting for our freedom and the future of humanity. Let's make sure we get this right today, while we still have a fighting chance.

NO SPECIATION WITHOUT REPRESENTATION!

About Christopher Wright, Founder, AI Trust Council


[Source: the "Who is Robert Malone" Substack, © 2024 Robert W Malone, MD]