Author Topic: Intelligence and Psychology, Artificial Intelligence  (Read 74535 times)

Body-by-Guinness

The Genetic Interplay of Nature and Nurture
« Reply #200 on: January 30, 2024, 05:33:40 PM »
A take on the nature/nurture debate I have not seen before, postulating that parents invest more nurturing in children who display an education-seeking nature. In other words, nature influences nurturing commitment, while nurturing resources shape a child’s nature. Bottom line: nurture has an impact on genetic nature, while genetic nature can incentivize (or not) nurturing investments. The takeaway is that the two are not opposite ends of a spectrum to be debated, but intertwined variables, each impacting the other.

Full disclosure, I gave the lay sections a quick read and utterly scrolled past the formula-laden sections, but nonetheless find myself intrigued by this new (to me at least) synthesis of the nature/nurture question, particularly in view of the dysfunctional wolves I was raised by.

Conclusion posted here:

To better understand the interplay between genetics and family resources for skill formation and its relevance for policy, we incorporated genetic endowments measured by an EA PGS into a dynamic model of skill formation. We modelled and estimated the joint evolution of skills and parental investments throughout early childhood (ages 0 to 7 years). We observed both child and parental genetic endowments, allowing us to estimate the independent effect of the child’s genes on skill formation and to estimate the importance of parental genes for child development. Furthermore, by incorporating (child- and parent-) genetics into a formal model, we were able to evaluate the mechanisms through which genes influence skill formation.

Using the model, we document the importance of both parental and child genes for child development. We show that the effect of genes increases over the child’s early life course and that a large fraction of these effects operate via parental investments. Genes directly influence the accumulation of skills; conditional on their current stock of skills and parental investments, genetics make some children better able to retain and acquire new skills (the direct effect). In addition, we show that genes indirectly influence investments as parents reinforce genetic differences by investing more in children with higher skills (the nurture of nature effect). We also find that parental genes matter, as parents with different genetic makeups invest differently in their children (the nature of nurture effect). The impact of genes on parental investments is especially significant as it implies an interplay between genetic and environmental effects. These findings illustrate that nature and nurture jointly influence children’s skill development, a finding that highlights the importance of integrating biological and social perspectives into a single framework.

We highlight the critical implications of our findings using two simulation counterfactuals. In one counterfactual, we show that the association between genetics and skills is smaller in a world where parental investments are equalized across families. This finding shows that the existence of genetic effects is not at odds with the value of social policies in reducing inequality. The presence of genetic effects makes these policies even more relevant since genetic inequality increases inequality in parental investments. In another counterfactual, we demonstrate how it is possible to negate completely all genetic influences on skills with a change in how parental (or public) investments are allocated across children. This shows that skill disparities due to genetic differences may be mitigated via social behavior and social policy. In particular, whether parents (and schools) compensate for or reinforce initial disparities can significantly impact the relative importance of genetics in explaining inequalities in skill, and might explain why estimates of the importance of genes differ significantly across countries (Branigan, McCallum, and Freese, 2013).

A limitation of our work is that genetic endowments are measured using polygenic scores. It is possible for genes unrelated to educational attainment also to influence children’s skill formation. For example, genetic variation related to mental health and altruism may potentially be unrelated to education but might influence how parents interact with their children. If this is true, we are missing a critical genetic component by using a PGS for educational attainment. Another limitation of using polygenic scores is measurement error. Since polygenic scores are measured with error, our estimates are lower bounds of the true genetic effects. An interesting extension of our work would be to use different methods, such as genome-based restricted maximum likelihood (GREML), to sidestep the measurement problem and document whether different genetic variants are related to the various mechanisms we outline in Section 2.

Lastly, it is important to recognise that we only include individuals of European ancestry in our analysis. This opens the question whether our findings would extend to other ancestry groups. Unfortunately, this is not something we can test. This is a major issue in the literature since the majority of polygenic scores are constructed from GWAS’ performed in Europeans, and their transferability to other populations is dependent on many factors (see Martin et al. (2017) for a discussion of the transferability issues of GWAS results, and Mostafavi et al. (2020) for an empirical comparison of PGS’ across ethnic groups). This also illustrates a problem of inequity in research, where the only individuals being studied are those of European ancestry. This opens the possibility that other ancestry groups will not benefit from the advances in genetics research (see the discussion in Martin et al., 2019). While the key insights from our research apply to all ancestry groups, we cannot test for any differences in the role of genetics across groups until we have solved the transferability issue. We hope future work will address these issues and lead to a more inclusive research agenda.

https://docs.iza.org/dp13780.pdf
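
Though I scrolled past the formulas, the mechanics the conclusion describes boil down to a two-equation system, roughly like this (my simplified sketch, not the paper's exact functional forms):

[code]
% Stylized skill-formation system. theta_t = child skills at age t,
% I_t = parental investment, G_c / G_p = child and parent polygenic scores.

% Skills: gamma captures the "direct effect" of the child's genes,
% conditional on current skills and investment.
\theta_{t+1} = \theta_t^{\alpha}\, I_t^{\beta}\, e^{\gamma G_c + \epsilon_t}

% Investment: delta > 0 means parents reinforce higher-skill children
% (the "nurture of nature" channel, since G_c raises theta_t), while
% lambda lets parents' own genes shift investment
% (the "nature of nurture" channel).
I_t = \theta_t^{\delta}\, e^{\lambda G_p + \nu_t}
[/code]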

Crafty_Dog

Re: Intelligence and Psychology, Artificial Intelligence
« Reply #201 on: February 01, 2024, 04:15:41 AM »
See Matt Ridley's "Nature via Nurture".  It has had quite the influence on me.

Crafty_Dog

Gemini's racial marxism
« Reply #204 on: February 22, 2024, 05:58:36 AM »


Body-by-Guinness

AI Offers Better Customer Service and Outcomes?
« Reply #206 on: February 28, 2024, 09:40:21 AM »

Body-by-Guinness

AI: The Future of Censorship
« Reply #207 on: February 28, 2024, 08:53:25 PM »
AI already has been shown to embrace the woke sensibilities of its programmers; what happens when it’s applied to Lenny Bruce, one of the examples explored here:

https://time.com/6835213/the-future-of-censorship-is-ai-generated/

Body-by-Guinness

Can AIs Libel?
« Reply #208 on: February 29, 2024, 07:44:56 PM »
AI makes up a series of supposed Matt Taibbi “inaccuracies” in pieces he never wrote, published in periodicals he’s never submitted to:

https://www.racket.news/p/i-wrote-what-googles-ai-powered-libel

Body-by-Guinness

Can AI Infringe on Copyright?
« Reply #209 on: February 29, 2024, 08:19:12 PM »

Body-by-Guinness

Yet Another AI Reveals the Biases of its Programmers
« Reply #210 on: March 02, 2024, 12:29:40 PM »
ChatGPT demonstrates it’s the language equivalent of Google AI’s images.

https://link.springer.com/content/pdf/10.1007/s11127-023-01097-2.pdf

Crafty_Dog

Killer AI Robots
« Reply #213 on: March 08, 2024, 04:11:40 AM »

Crafty_Dog

A FB post
« Reply #214 on: March 11, 2024, 02:23:50 PM »

AI will tailor itself to classes of users:

1. Minimal thinking, high entertainment, low initiative. Probably the most addictive algorithms, and making up the majority. AI does the thinking for a consumer-heavy model.

2. Enhanced search returns, variations on basic question formats offering a larger array of possibilities. AI as a basic partnership model.

3. Information-gap anticipation; analytical depth (analysis, synthesis, deduction, induction); identifying hypotheses; alerting to new, relevant information or data summaries based on the specific and diverse ‘signal strength’ of user trends. AI as a research catalyst.

Population statistics show the majority are sensors, feelers, and judgers (i.e., Myers-Briggs types), so the cost-effective AI market will focus on (1) above; those who are more investigative, innovative, or productivity-driven vs. consumption-driven will be more ‘expensive’ as a minority, (3) above, requiring higher costs to participate and employ.

AI will simply ‘become’ us across different tiers.

Crafty_Dog

EU law tightens up AI uses
« Reply #215 on: March 13, 2024, 01:06:11 PM »

Crafty_Dog

Ted Cruz & Phil Gramm: First, do no harm
« Reply #216 on: March 26, 2024, 05:44:52 PM »
Biden Wants to Put AI on a Leash
Bill Clinton’s regulators, by contrast, produced prosperity by encouraging freedom on the internet.
By Ted Cruz and Phil Gramm
March 25, 2024 4:22 pm ET

The arrival of a new productive technology doesn’t guarantee prosperity. Prosperity requires a system, governed by the rule of law, in which economic actors can freely implement a productive idea and compete for customers and investors. The internet is the best recent example of this. The Clinton administration took a hands-off approach to regulating the early internet. In so doing it unleashed extraordinary economic growth and prosperity. The Biden administration, by contrast, is impeding innovation in artificial intelligence with aggressive regulation. This could deny America global leadership in AI and the prosperity that leadership would bring.

The Clinton administration established a Global Information Infrastructure framework in 1997 defining the government’s role in the internet’s development with a concise statement of principles: “address the world’s newest technologies with our Nation’s oldest values . . . First, do no harm.” The administration embraced the principle that “the private sector should lead [and] the Internet should develop as a market driven arena, not a regulated industry.” The Clinton regulators also established the principle that “government should avoid undue restrictions on electronic commerce, . . . refrain from imposing new and unnecessary regulations, bureaucratic procedures or new taxes and tariffs.”

That regulatory framework faithfully fleshed out the provisions of the bipartisan 1996 Telecommunications Act and provided the economic environment that made it possible for America to dominate the information age, enrich our lives, create millions of jobs, and generate enormous wealth for retirement savers.

The Biden administration is doing the opposite. It has committed to “govern the development and use of AI.” In one of the longer executive orders in American history, President Biden imposed a top-down, command-and-control regulatory approach requiring AI models to undergo extensive “impact assessments” that mirror the infamously burdensome National Environmental Policy Act reviews, which are impeding semiconductor investment in the U.S. And government controls are permanent, “including post-deployment performance monitoring.”

Under Mr. Biden’s executive order, AI development must “be consistent with my administration’s dedication to advancing equity and civil rights” and “built through collective bargains on the views of workers, labor unions, educators and employers.” Mr. Biden’s separate AI Bill of Rights claims to advance “racial equity and support for underserved communities.” AI must also be used to “improve environmental and social outcomes,” to “mitigate climate change risk,” and to facilitate “building an equitable clean energy economy.”

Education Secretary Miguel Cardona, who as Connecticut schools commissioner said “we need teachers behind this wave of our curriculum becoming more woke,” now wants to impose “guardrails” on AI to protect against “bias and stereotyping through technology.” The Commerce Department has appointed a “senior adviser for algorithmic justice,” while the Justice Department, Federal Trade Commission, Consumer Financial Protection Bureau and the Equal Employment Opportunity Commission have issued a joint statement asserting legal authority to root out racism in computing.

FTC Chairwoman Lina Khan has launched several AI-related inquiries, claiming that because AI “may be fed information riddled with errors and bias, these technologies risk automating discrimination—unfairly locking out people from jobs, housing and key services.”

Regulating AI to prevent discrimination is akin to the FTC’s regulating a cellphone’s design to enforce the do-not-call registry. There is virtually no limit to the scope of such authority. Under what constitutional authority would Congress even legislate in the area of noncommercial speech? How could the FTC regulate in this area with no legislative authority? But in the entire Biden administration, noted for governing through regulatory edict, no agency has been less constrained by law than the FTC.

Others demanding control over AI’s development include the progressives who attended Sen. Chuck Schumer’s recent AI “Insight Forums.” Janet Murguía, president of UnidosUS—the activist group formerly known as the National Council of La Raza—demanded “a strong voice in how—or even whether” AI models “will be built and used.” Elizabeth Shuler, president of the AFL-CIO, demanded a role for “unions across the entire innovation process.” Randi Weingarten, president of the American Federation of Teachers, said, “AI is a game changer, but teachers and other workers need to be coaches in the game.”

Particularly painful is Mr. Biden’s use of the Defense Production Act of 1950 to force companies to share proprietary data regarding AI models with the Commerce Department. That a law passed during the Korean War and designed for temporary national security emergencies could be used to intervene permanently in AI development is a frightening precedent. It begs for legislative and judicial correction.

What’s clear is that the Biden regulatory policy on AI has little to do with AI and everything to do with special-interest rent-seeking. The Biden AI regulatory demands and Mr. Schumer’s AI forum look more like a mafia shakedown than the prelude to legitimate legislation and regulatory policy for a powerful new technology.

Some established AI companies no doubt welcome the payment of such tribute as a way to keep out competition. But consumers, workers and investors would bear the cost along with thousands of smaller AI companies that would face unnecessary barriers to innovation.

Letting the administration seize control over AI and subject it to the demands of its privileged political constituencies wouldn’t eliminate bias, stereotyping or the spreading of falsehoods and racism, all of which predate AI and sadly will likely be with us until Jesus comes back. Mr. Biden’s policies will, however, impede AI development, drive up the costs of the benefits it brings, and diminish America’s global AI pre-eminence.

Mr. Cruz is ranking Republican on the Senate Commerce Committee. Mr. Gramm, a former chairman of the Senate Banking Committee, is a visiting scholar at the American Enterprise Institute.


Body-by-Guinness

AI Learns to Lie
« Reply #221 on: May 13, 2024, 04:28:12 PM »
Given the penchant of so many to vigorously grasp whatever twaddle the MSM vends, the thought of AI embracing convenient fictions does more than give pause:

https://www.businessinsider.com/ai-deceives-humans-2024-5


Body-by-Guinness

Open AI For the Internet Win?
« Reply #223 on: May 14, 2024, 02:02:22 PM »
I trust this isn’t a scam; it appears ChatGPT-4o is capable of making human-like inferences and carrying on a free-ranging conversation:

https://x.com/heybarsee/status/1790080494261944609?s=61
« Last Edit: May 14, 2024, 02:09:49 PM by Body-by-Guinness »

Body-by-Guinness

12 Wicked Cool Things GPT-4o Can Do
« Reply #224 on: May 15, 2024, 05:17:35 PM »
More new AI capabilities emerging:

https://x.com/rowancheung/status/1790783202639978593?s=61

The inflection and extrapolations are astounding.

Hmm, mebbe I gotta see if it can peruse an extensive number of thread topics and then suggest which one a given post should land in….

Crafty_Dog

Re: Intelligence and Psychology, Artificial Intelligence
« Reply #225 on: May 19, 2024, 04:00:24 AM »
Well the latent wit in some of the Subject headings may confuse e.g. The Cognitive Dissonance of His Glibness for Baraq Obama :-D

Body-by-Guinness

Re: Intelligence and Psychology, Artificial Intelligence
« Reply #226 on: May 21, 2024, 06:45:21 AM »
Well the latent wit in some of the Subject headings may confuse e.g. The Cognitive Dissonance of His Glibness for Baraq Obama :-D

I'm trying to teach GPT-4o. So far it's been ... interesting. It can whip out a graphic from a verbal description that hits close to the mark, but it is unable to access web pages, so my effort to get it to catalog topics here (and thereby keep my neurodiverse brain from running off the rails seeking this topic or that) has yet to bear fruit. It's the end of the semester and I have a large RFP running full bore that I want to complete in advance of retirement, but once I have some time on my hands I'm going to attempt to drop every list topic into 4o and see if "she" can keep 'em straight for me.
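
For what it's worth, the experiment I have in mind would look roughly like this in Python against OpenAI's chat API (the thread list, prompt wording, and model choice are all placeholders of mine, untested):

[code]
# Rough sketch: ask a model which existing thread a new post belongs in.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

THREAD_TITLES = [
    "Intelligence and Psychology, Artificial Intelligence",
    "The Cognitive Dissonance of His Glibness",
    # ...every other thread title on the forum
]

def suggest_thread(post_text: str) -> str:
    """Return the best-fitting thread title for a draft post."""
    titles = "\n".join(f"- {t}" for t in THREAD_TITLES)
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a forum librarian. Given thread titles and "
                        "a post, reply with the single best-fitting title, "
                        "verbatim."},
            {"role": "user",
             "content": f"Threads:\n{titles}\n\nPost:\n{post_text}"},
        ],
    )
    return resp.choices[0].message.content.strip()
[/code]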

Body-by-Guinness

The Neural Basis of Consciousness
« Reply #227 on: May 22, 2024, 09:25:28 PM »
This will be interesting to watch: once they ID a quantifiable basis/theory of consciousness, confirm it by applying it to other organisms to see if it fits, and then perhaps apply that theory to silicon to see if consciousness can be created … that’ll be a huge sea change:

Scientists Are Working Towards a Unified Theory of Consciousness
Singularity Hub / by Shelly Fan / May 20, 2024 at 1:54 PM
The origin of consciousness has teased the minds of philosophers and scientists for centuries. In the last decade, neuroscientists have begun to piece together its neural underpinnings—that is, how the brain, through its intricate connections, transforms electrical signaling between neurons into consciousness.

Yet the field is fragmented, an international team of neuroscientists recently wrote in a new paper in Neuron. Many theories of consciousness contradict each other, with different ideas about where and how consciousness emerges in the brain.

Some theories are even duking it out in a mano-a-mano test by imaging the brains of volunteers as they perform different tasks in clinical test centers across the globe.

But unlocking the neural basis of consciousness doesn’t have to be confrontational. Rather, theories can be integrated, wrote the authors, who were part of the Human Brain Project—a massive European endeavor to map and understand the brain—and specialize in decoding brain signals related to consciousness.

Not all authors agree on the specific brain mechanisms that allow us to perceive the outer world and construct an inner world of “self.” But by collaborating, they merged their ideas, showing that different theories aren’t necessarily mutually incompatible—in fact, they could be consolidated into a general framework of consciousness and even inspire new ideas that help unravel one of the brain’s greatest mysteries.

If successful, the joint mission could extend beyond our own noggins. Brain organoids, or “mini-brains,” that roughly mimic early human development are becoming increasingly sophisticated, spurring ethical concerns about their potential for developing self-awareness (to be clear, there aren’t any signs). Meanwhile, similar questions have been raised about AI. A general theory of consciousness, based on the human mind, could potentially help us evaluate these artificial constructs.

“Is it realistic to reconcile theories, or even aspire to a unified theory of consciousness?” the authors asked. “We take the standpoint that the existence of multiple theories is a sign of healthiness in this nascent field…such that multiple theories can simultaneously contribute to our understanding.”

Lost in Translation

I’m conscious. You are too. We see, smell, hear, and feel. We have an internal world that tells us what we’re experiencing. But the lines get blurry for people in different stages of coma or for those locked-in—they can still perceive their surroundings but can’t physically respond. We lose consciousness in sleep every night and during anesthesia. Yet, somehow, we regain consciousness. How?

With extensive imaging of the brain, neuroscientists today agree that consciousness emerges from the brain’s wiring and activity. But multiple theories argue about how electrical signals in the brain produce rich and intimate experiences of our lives.

Part of the problem, wrote the authors, is that there isn’t a clear definition of “consciousness.” In this paper, they separated the term into two experiences: one outer, one inner. The outer experience, called phenomenal consciousness, is when we immediately realize what we’re experiencing—for example, seeing a total solar eclipse or the northern lights.

The inner experience is a bit like a “gut feeling” in that it helps to form expectations and types of memory, so that tapping into it lets us plan behaviors and actions.

Both are aspects of consciousnesses, but the difference is hardly delineated in previous work. It makes comparing theories difficult, wrote the authors, but that’s what they set out to do.

Meet the Contenders

Using their “two experience” framework, they examined five prominent consciousness theories.

The first, the global neuronal workspace theory, pictures the brain as a city of sorts. Each local brain region “hub” dynamically interacts with a “global workspace,” which integrates and broadcasts information to other hubs for further processing—allowing information to reach the consciousness level. In other words, we only perceive something when all pieces of sensory information—sight, hearing, touch, taste—are woven into a temporary neural sketchpad. According to this theory, the seat of consciousness is in the frontal parts of the brain.

The second, integrated information theory, takes a more globalist view. The idea is that consciousness stems from a series of cause-effect reactions from the brain’s networks. With the right neural architecture, connections, and network complexity, consciousness naturally emerges. The theory suggests the back of the brain sparks consciousness.

Then there’s dendritic integration theory, the coolest new kid in town. Unlike previous ideas, this theory waved the front or back of the brain goodbye and instead zoomed in on single neurons in the cortex, the outermost part of the brain and a hub for higher cognitive functions such as reasoning and planning.

The cortex has extensive connections to other parts of the brain—for example, those that encode memories and emotions. One type of neuron, deep inside the cortex, especially stands out. Physically, these neurons resemble trees with extensive “roots” and “branches.” The roots connect to other parts of the brain, whereas the upper branches help calculate errors in the neuron’s computing. In turn, these upper branches generate an error signal that corrects mistakes through multiple rounds of learning.

The two compartments, while physically connected, go about their own business—turning a single neuron into multiple computers. Here’s the crux: There’s a theoretical “gate” between the upper and lower neural “offices” for each neuron. During consciousness, the gate opens, allowing information to flow between the cortex and other brain regions. In dreamless sleep and other unconscious states, the gate closes.

Like a light switch, this theory suggests that consciousness is supported by flicking individual neuron gates on or off on a grand scale.

The last two theories propose that recurrent processing in the brain—that is, it learns from previous experiences—is essential for consciousness. Instead of “experiencing” the world, the brain builds an internal simulation that constantly predicts the “here and now” to control what we perceive.

A Unified Theory?

All the theories have extensive experiments to back up their claims. So, who’s right? To the authors, the key is to consider consciousness not as a singular concept, but as a “ladder” of sorts. The brain functions at multiple levels: cells, local networks, brain regions, and finally, the whole brain.

When examining theories of consciousness, it also makes sense to delineate between different levels. For example, the dendritic integration theory—which considers neurons and their connections—is on the level of single cells and how they contribute to consciousness. It makes the theory “neutral,” in that it can easily fit into ideas at a larger scale—those that mostly rely on neural network connections or across larger brain regions.

Although it’s seemingly difficult to reconcile various ideas about consciousness, two principles tie them together, wrote the team. One is that consciousness requires feedback, within local neural circuits and throughout the brain. The other is integration, in that any feedback signals need to be readily incorporated back into neural circuits, so they can change their outputs. Finally, all authors agree that local, short connections are vital but not enough. Long distance connections from the cortex to deeper brain areas are required for consciousness.

So, is an integrated theory of consciousness possible? The authors are optimistic. By defining multiple aspects of consciousness—immediate responses versus internal thoughts—it’ll be clearer how to explore and compare results from different experiments. For now, the global neuronal workspace theory mostly focuses on the “inner experience” that leads to consciousness, whereas others try to tackle the “outer experience”—what we immediately experience.

For the theories to merge, the latter groups will have to explain how consciousness is used for attention and planning, which are hallmarks for immediate responses. But fundamentally, wrote the authors, they are all based on different aspects of neuronal connections near and far. With more empirical experiments, and as increasingly more sophisticated brain atlases come online, they’ll move the field forward.

Hopefully, the authors write, “an integrated theory of consciousness…may come within reach within the next years or decades.”

https://singularityhub.com/2024/05/20/scientists-are-working-towards-a-unified-theory-of-consciousness/
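
The dendritic "gate" is the easiest of these to caricature in code. A toy two-compartment neuron of my own devising (invented numbers, just to fix the idea; not the theorists' actual model):

[code]
# Toy "pyramidal neuron" with a consciousness gate between its
# basal (bottom-up) and apical (top-down) compartments.

class GatedNeuron:
    def __init__(self, gate_open: bool):
        self.gate_open = gate_open  # open = awake; closed = dreamless sleep

    def fire(self, basal_drive: float, apical_feedback: float) -> float:
        """Basal input always propagates; apical feedback only shapes
        the output when the gate couples the two compartments."""
        if self.gate_open:
            return basal_drive * (1.0 + apical_feedback)  # integrated
        return basal_drive                                # feedforward only

awake, asleep = GatedNeuron(True), GatedNeuron(False)
print(awake.fire(1.0, 0.5))   # 1.5 -- feedback modulates perception
print(asleep.fire(1.0, 0.5))  # 1.0 -- same input, no integration
[/code]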


Crafty_Dog

Re: Intelligence and Psychology, Artificial Intelligence
« Reply #228 on: May 23, 2024, 02:41:58 PM »
That was super interesting for me.

Konrad Lorenz's "Behind the Mirror" has been of deep influence to me in this regard:

https://en.wikipedia.org/wiki/Behind_the_Mirror

Behind the Mirror

From Wikipedia, the free encyclopedia
Author: Konrad Lorenz
Original title: Die Rückseite des Spiegels
Country: Austria
Language: German
Published: 1973
Media type: Print (hardcover)
Pages: 261

Behind the Mirror: A Search for a Natural History of Human Knowledge (German: Die Rückseite des Spiegels, Versuch einer Naturgeschichte menschlichen Erkennens) is a 1973 book by the ethologist Konrad Lorenz.[1] Lorenz shows the essentials of a lifetime's work and summarizes it into his very own philosophy: evolution is the process of growing perception of the outer world by living nature itself. Stepping from simple to higher organized organisms he shows how they benefit from information processing. The methods mirrored by organs have been created in the course of evolution as the natural history of this organism. Lorenz uses the mirror as a simplified model of our brain reflecting the part of information from the outside world it is able to "see". The backside of the mirror was created by evolution to gather as much information as needed to better survive. The book gives a hypothesis how consciousness was "invented" by evolution.

One of the key positions of the book included the criticism of Immanuel Kant, arguing that the philosopher failed to realize that knowledge, as mirrored by the human mind is the product of evolutionary adaptations.[2]

Kant has maintained that our consciousness[3] or our description and judgments about the world could never mirror the world as it really is, so we cannot simply take in the raw data that the world provides nor impose our forms on the world.[4] Lorenz disputed this, saying it is inconceivable that - through chance mutations and selective retention - the world fashioned an instrument of cognition that grossly misleads man about that world. He said that we can determine the reliability of the mirror by looking behind it.[2]

Summary
Lorenz summarizes his life's work into his own philosophy: Evolution is the process of growing perception of the outer world by living nature itself. Stepping from simple to higher organized organisms, Lorenz shows how they gain and benefit from information. The methods mirrored by organs have been created in the course of evolution as the natural history of this organism.

In the book, Lorenz uses the mirror as a simple model of the human brain that reflects the part of the stream of information from the outside world it is able to "see". He argued that merely looking outward into the mirror ignores the fact that the mirror has a non-reflecting side, which is also a part and parcel of reality.[5] The backside of the mirror was created by evolution to gather as much information as needed to better survive. The picture in the mirror is what we see within our mind. Within our cultural evolution we have extended this picture in the mirror by inventing instruments that transform the most needed of the invisible to something visible.

The back side of the mirror acts for itself as it processes incoming information to improve speed and effectiveness. Because of this, human inventions like logical conclusions are always in danger of being manipulated by these hardwired prejudices in our brain. The book offers a hypothesis of how consciousness was invented by evolution.

Main topics

Fulguratio, the flash of lightning, denotes the act of creation of a totally new talent of a system, created by the combination of two other systems with talents much less than those of the new system. The book shows the "invention" of a feedback loop by this process.

Imprinting is the phase-sensitive learning of an individual that is not reversible. It's a static program run only once.

Habituation is the learning method to distinguish important information from unimportant by analysing its frequency of occurrence and its impact.

Conditioning by reinforcement occurs when an event following a response causes an increase in the probability of that response occurring in the future. The ability to do this kind of learning is hardwired in our brain and is based on the principle of causality. The discovery of causality (which is a substantial element of science and Buddhism) was a major step of evolution in analysing the outer world.

Pattern matching is the abstraction of different appearances into the identification of a single object, and is available only in highly organized creatures.

Exploratory behaviour is the urge of the highest developed creatures on earth to go on with learning after maturity; it leads to self-exploration, which is the basis for consciousness.

=======================

https://www.britannica.com/biography/Konrad-Lorenz

Born: Nov. 7, 1903, Vienna, Austria
Died: Feb. 27, 1989, Altenburg (aged 85)
Awards and honors: Nobel Prize (1973)
Subjects of study: aggressive behaviour, animal behaviour, evolution, imprinting
Konrad Lorenz (born Nov. 7, 1903, Vienna, Austria—died Feb. 27, 1989, Altenburg) was an Austrian zoologist and the founder of modern ethology, the study of animal behaviour by means of comparative zoological methods. His ideas contributed to an understanding of how behavioral patterns may be traced to an evolutionary past, and he was also known for his work on the roots of aggression. He shared the Nobel Prize for Physiology or Medicine in 1973 with the animal behaviourists Karl von Frisch and Nikolaas Tinbergen.

Lorenz was the son of an orthopedic surgeon. He showed an interest in animals at an early age, and he kept animals of various species—fish, birds, monkeys, dogs, cats, and rabbits—many of which he brought home from his boyhood excursions. While still young, he provided nursing care for sick animals from the nearby Schönbrunner Zoo. He also kept detailed records of bird behaviour in the form of diaries.

In 1922, after graduating from secondary school, he followed his father’s wishes that he study medicine and spent two semesters at Columbia University, in New York City. He then returned to Vienna to study.

During his medical studies Lorenz continued to make detailed observations of animal behaviour; a diary about a jackdaw that he kept was published in 1927 in the prestigious Journal für Ornithologie. He received an M.D. degree at the University of Vienna in 1928 and was awarded a Ph.D. in zoology in 1933. Encouraged by the positive response to his scientific work, Lorenz established colonies of birds, such as the jackdaw and greylag goose, published a series of research papers on his observations of them, and soon gained an international reputation.

In 1935 Lorenz described learning behaviour in young ducklings and goslings. He observed that at a certain critical stage soon after hatching, they learn to follow real or foster parents. The process, which is called imprinting, involves visual and auditory stimuli from the parent object; these elicit a following response in the young that affects their subsequent adult behaviour. Lorenz demonstrated the phenomenon by appearing before newly hatched mallard ducklings and imitating a mother duck’s quacking sounds, upon which the young birds regarded him as their mother and followed him accordingly.

In 1936 the German Society for Animal Psychology was founded. The following year Lorenz became coeditor in chief of the new Zeitschrift für Tierpsychologie, which became a leading journal for ethology. Also in 1937, he was appointed lecturer in comparative anatomy and animal psychology at the University of Vienna. From 1940 to 1942 he was professor and head of the department of general psychology at the Albertus University at Königsberg, Germany (now Kaliningrad, Russia).


From 1942 to 1944 he served as a physician in the German army and was captured as a prisoner of war in the Soviet Union. He was returned to Austria in 1948 and headed the Institute of Comparative Ethology at Altenberg from 1949 to 1951. In 1950 he established a comparative ethology department in the Max Planck Institute of Buldern, Westphalia, becoming codirector of the Institute in 1954. From 1961 to 1973 he served as director of the Max Planck Institute for Behaviour Physiology, in Seewiesen. In 1973 Lorenz, together with Frisch and Tinbergen, was awarded the Nobel Prize for Physiology or Medicine for their discoveries concerning animal behavioral patterns. In the same year, Lorenz became director of the department of animal sociology at the Institute for Comparative Ethology of the Austrian Academy of Sciences in Altenberg.

Lorenz’s early scientific contributions dealt with the nature of instinctive behavioral acts, particularly how such acts come about and the source of nervous energy for their performance. He also investigated how behaviour may result from two or more basic drives that are activated simultaneously in an animal. Working with Nikolaas Tinbergen of the Netherlands, Lorenz showed that different forms of behaviour are harmonized in a single action sequence.

Lorenz’s concepts advanced the modern scientific understanding of how behavioral patterns evolve in a species, particularly with respect to the role played by ecological factors and the adaptive value of behaviour for species survival. He proposed that animal species are genetically constructed so as to learn specific kinds of information that are important for the survival of the species. His ideas have also cast light on how behavioral patterns develop and mature during the life of an individual organism.

In the latter part of his career, Lorenz applied his ideas to the behaviour of humans as members of a social species, an application with controversial philosophical and sociological implications. In a popular book, Das sogenannte Böse (1963; On Aggression), he argued that fighting and warlike behaviour in man have an inborn basis but can be environmentally modified by the proper understanding and provision for the basic instinctual needs of human beings. Fighting in lower animals has a positive survival function, he observed, such as the dispersion of competitors and the maintenance of territory. Warlike tendencies in humans may likewise be ritualized into socially useful behaviour patterns. In another work, Die Rückseite des Spiegels: Versuch einer Naturgeschichte menschlichen Erkennens (1973; Behind the Mirror: A Search for a Natural History of Human Knowledge), Lorenz examined the nature of human thought and intelligence and attributed the problems of modern civilization largely to the limitations his study revealed.

Crafty_Dog

Re: Intelligence and Psychology, Artificial Intelligence
« Reply #229 on: May 23, 2024, 02:47:47 PM »
Am I correct to assess that this passage is quite congruent with Lorenz's work?

==================


With extensive imaging of the brain, neuroscientists today agree that consciousness emerges from the brain’s wiring and activity. But multiple theories argue about how electrical signals in the brain produce rich and intimate experiences of our lives.

Part of the problem, wrote the authors, is that there isn’t a clear definition of “consciousness.” In this paper, they separated the term into two experiences: one outer, one inner. The outer experience, called phenomenal consciousness, is when we immediately realize what we’re experiencing—for example, seeing a total solar eclipse or the northern lights.

The inner experience is a bit like a “gut feeling” in that it helps to form expectations and types of memory, so that tapping into it lets us plan behaviors and actions.

Both are aspects of consciousnesses, but the difference is hardly delineated in previous work. It makes comparing theories difficult, wrote the authors, but that’s what they set out to do.

Meet the Contenders

Using their “two experience” framework, they examined five prominent consciousness theories.

The first, the global neuronal workspace theory, pictures the brain as a city of sorts. Each local brain region “hub” dynamically interacts with a “global workspace,” which integrates and broadcasts information to other hubs for further processing—allowing information to reach the consciousness level. In other words, we only perceive something when all pieces of sensory information—sight, hearing, touch, taste—are woven into a temporary neural sketchpad. According to this theory, the seat of consciousness is in the frontal parts of the brain.

Body-by-Guinness

Chatbotting w/ Your Future Self
« Reply #230 on: June 05, 2024, 07:19:58 PM »
I dunno, this sounds sorta creepy and ripe for abuse:

This MIT Chatbot Simulates Your ‘Future Self.’ It’s Here to Help You Make Better Decisions.

Singularity Hub / by Jason Dorrier / Jun 5, 2024 at 6:42 PM

Chatbots are now posing as friends, romantic partners, and departed loved ones. Now, we can add another to the list: Your future self.

MIT Media Lab’s Future You project invited young people, aged 18 to 30, to have a chat with AI simulations of themselves at 60. The sims—which were powered by a personalized chatbot and included an AI-generated image of their older selves—answered questions about their experience, shared memories, and offered lessons learned over the decades.

In a preprint paper, the researchers said participants found the experience emotionally rewarding. It helped them feel more connected to their future selves, think more positively about the future, and increased motivation to work toward future objectives.

“The goal is to promote long-term thinking and behavior change,” MIT Media Lab’s Pat Pataranutaporn told The Guardian. “This could motivate people to make wiser choices in the present that optimize for their long-term wellbeing and life outcomes.”

Chatbots are increasingly gaining a foothold in therapy as a way to reach underserved populations, the researchers wrote in the paper. But they’ve typically been rule-based and specific—that is, hard-coded to help with autism or depression.

Here, the team decided to test generative AI in an area called future-self continuity—or the connection we feel with our future selves. Building and interacting with a concrete image of ourselves a few decades hence has been shown to reduce anxiety and encourage positive behaviors that take our future selves into account, like saving money or studying harder.

Existing exercises to strengthen this connection include letter exchanges with a future self or interacting with a digitally aged avatar in VR. Both have yielded positive results, but the former depends on a person being willing to put in the energy to imagine and enliven their future self, while the latter requires access to a VR headset, which most people don’t have.

This inspired the MIT team to make a more accessible, web-based approach by mashing together the latest in chatbots and AI-generated images.

Participants provided basic personal information, past highs and lows in their lives, and a sketch of their ideal future. Then with OpenAI’s GPT-3.5, the researchers used this information to make custom chatbots with “synthetic memories.” In an example from the paper, a participant wanted to teach biology. So, the chatbot took on the role of a retired biology professor—complete with anecdotes, proud moments, and advice.

To make the experience more realistic, participants submitted images of themselves that the researchers artificially aged using AI and added as the chatbot’s profile picture.

Over three hundred people signed up for the study. Some were in control groups while others were invited to have a conversation with their future-self chatbots for anywhere between 10 and 30 minutes. Right after their chat, the team found participants had lower anxiety and a deeper sense of connection with their future selves—something that has been found to translate to better decision-making, from health to finances.

Chatting with a simulation of yourself from decades in the future is a fascinating idea, but it’s worth noting this is only one relatively small study. And though the short-term results are intriguing, the study didn’t measure how durable those results might be or whether longer or more frequent chats over time might be useful. The researchers say future work should also directly compare their method to other approaches, like letter writing.

It’s not hard to imagine a far more realistic version of all this in the near future. Startups like Synthesia already offer convincing AI-generated avatars, and last year, Channel 1 created strikingly realistic avatars for real news anchors. Meanwhile OpenAI’s recent demo of GPT-4o shows quick advances in AI voice synthesis, including emotion and natural cadence. It seems plausible one might tie all this together—chatbot, voice, and avatar—along with a detailed back story to make a super-realistic, personalized future self.

The researchers are quick to point out that such approaches could run afoul of ethics should an interaction depict the future in a way that results in harmful behavior in the present or endorse negative behaviors. This is an issue for AI characters in general—the greater the realism, the greater the likelihood of unhealthy attachments.

Still, they wrote, their results show there is potential for “positive emotional interactions between humans and AI-generated virtual characters, despite their artificiality.”

Given a chat with our own future selves, maybe a few more of us might think twice about that second donut and opt to hit the gym instead.

https://singularityhub.com/2024/06/05/this-mit-chatbot-is-your-older-wiser-future-self-its-here-to-help-you-make-better-decisions/
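
Mechanically, the setup the article describes (a persona prompt plus "synthetic memories") is easy to sketch. Everything below, the prompt wording and the profile fields, is my own invention against OpenAI's chat API, not MIT's actual code:

[code]
# Illustrative "future self" chatbot in the style the article describes.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def future_self_reply(profile: dict, question: str) -> str:
    # Persona prompt built from the user's intake answers.
    persona = (
        f"You are the user's future self at age 60, looking back. "
        f"Life so far: {profile['history']}. "
        f"Ideal future they described: {profile['goals']}. "
        f"Speak warmly, recount 'memories' consistent with that future, "
        f"and share lessons learned."
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the article says GPT-3.5 powered the study
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

profile = {
    "history": "studied biology, early setbacks, loves teaching",
    "goals": "become a biology professor",
}
print(future_self_reply(profile, "Was the career change worth it?"))
[/code]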



Body-by-Guinness

Mayoral Candidate Vows to Let AI Run City
« Reply #233 on: June 20, 2024, 04:40:08 AM »
Given the well-documented leftist bent of current AI chatbots, my guess is this won’t be a good fit in WY. With that said, AI could likely run a city better than a large percentage of candidates. Hell, governing by coin flip would be better than what many could offer:

Wyoming mayoral candidate wants AI to run the city

The Hill News / by Damita Menezes / Jun 20, 2024 at 7:12 AM

(NewsNation) — A mayoral candidate is vowing to let an artificial intelligence chatbot make all governing decisions if he's elected to lead Wyoming's capital city, but the state's top election official says that proposal violates the law.

Victor Miller, who is seeking the Cheyenne mayor's office, said Wednesday on NewsNation's "Dan Abrams Live" he plans to fully cede decision-making to a customized AI bot he dubbed "Vic" if voters choose him.

"It's going to be taking in the supporting documents, taking in what it knows about Cheyenne and systems here, the concerns, and it's going to make a vote yes or no," Miller explained. "And it's going to do that based on intelligence and data. And I'm going to go ahead and pull the lever for it."

But Wyoming Secretary of State Chuck Gray said Wednesday on NewsNation's "Elizabeth Vargas Reports" that Miller's candidacy violates state law because AI is ineligible to hold office.

Gray said the Cheyenne town clerk who certified Miller's candidacy to the county clerk acted improperly. Gray's office is exploring further action, though it doesn't directly oversee municipal elections.

"Wyoming state law is very clear that an AI bot is not eligible to be a candidate for office," Gray said. Only "qualified electors" who are state residents and U.S. citizens can run, he said.

Miller's application also had deficiencies, Gray said, such as failing to list his full name, as required.


Miller insisted he has confidence the advanced AI model he's utilizing can adequately govern.

"The best intelligence that we've extracted so far is OpenAI's Chat GPT 4.0, and that's what I'm using here," Miller said. "There's very minimal mistakes."

Gray pushed back against arguments that AI could make better decisions than human elected officials, calling it "our worst nightmare becoming true." He advocated for electing "conservative human beings" to uphold founding principles.

Miller has said openly his campaign revolves around AI decision-making: "AI has helped me personally such as helping me with my resume."

The unorthodox campaign has drawn mixed reactions in Cheyenne so far, Miller acknowledged, but he believes he can persuade skeptical residents to go along with ceding power to artificial intelligence.

Gray believes similar AI candidate stunts could arise elsewhere, calling it "a very troubling trend in our nation."

https://thehill.com/policy/technology/4730801-wyoming-mayoral-candidate-wants-ai-to-run-the-city/

Crafty_Dog

Chinese Artificial Intelligence and the UN
« Reply #234 on: July 04, 2024, 01:08:57 PM »
The United Nations adopted a China-proposed Artificial Intelligence (AI) proliferation resolution yesterday that calls on developed nations to give away AI technology to “developing nations.” (The resolution is a twofold win for China where they can access AI technology from the West but also sell their own versions to undeveloped nations. Proliferation of Chinese AI will allow them to more effectively conduct information campaigns and monitor their diaspora and dissidents. – J.V.)

ccp

Go champion lost 4 of 5 to AI
« Reply #235 on: July 11, 2024, 07:08:09 AM »

Crafty_Dog

WSJ: Artificial Intelligence enabling fraud and scammers
« Reply #236 on: July 20, 2024, 09:08:15 AM »


AI Is Helping Scammers Outsmart You—and Your Bank
Your ‘spidey sense’ is no match for the new wave of scammers.
By Dalvin Brown and Katherine Hamilton
June 22, 2024 5:30 am ET


Artificial intelligence is making scammers tougher to spot.

Gone are the poorly worded messages that easily tipped off authorities as well as the grammar police. The bad guys are now better writers and more convincing conversationalists, who can hold a conversation without revealing they are a bot, say the bank and tech investigators who spend their days tracking the latest schemes.

ChatGPT and other AI tools can even enable scammers to create an imitation of your voice and identity. In recent years, criminals have used AI-based software to impersonate senior executives and demand wire transfers.

“Your spidey senses are no longer going to prevent you from being victimized,” said Matt O’Neill, a former Secret Service agent and co-founder of cybersecurity firm 5OH Consulting.

In these recent cases, the frauds are often similar to old scams. But AI has enabled scammers to target much larger groups and use more personal information to convince you the scam is real.

Fraud-prevention officials say these tactics are often harder to spot because they bypass traditional indicators of scams, such as malicious links and poor wording and grammar. Criminals today are faking driver’s licenses and other identification in an attempt to open new bank accounts and adding computer-generated faces and graphics to pass identity-verification processes. All of these methods are hard to stave off, say the officials.

JPMorgan Chase has begun using large-language models to validate payments, which helps fight fraud. Carisma Ramsey Fields, vice president of external communications at JPMorgan Chase, said the bank has also stepped up its efforts to educate customers about scams.

[Chart: reported fraud losses by payment method (bank transfer or payment, cryptocurrency, wire transfer, cash, credit cards), 2019 vs. 2023, in billions of dollars]
And while banks stop some fraud, the last line of defense will always be you. These security officials say to never share financial or personal information unless you’re certain about who’s on the receiving end. If you do pay, use a credit card because it offers the most protection.

“Somebody who tells you to pay by crypto, cash, gold, wire transfer or a payment app is likely a scam,” said Lois Greisman, an associate director of the Federal Trade Commission.

Tailored targeting
With AI as an accomplice, fraudsters are reaping more money from victims of all ages. People reported losing a record $10 billion to scams in 2023, up from $9 billion a year prior, according to the FTC. Since the FTC estimates only 5% of fraud victims report their losses, the actual number could be closer to $200 billion.

Joey Rosati, who owns a small cryptocurrency firm, never thought he could fall for a scam until a man he believed to be a police officer called him in May.


Finance entrepreneur Joey Rosati, who was almost scammed, was surprised how convincing and knowledgeable fraudsters can be. PHOTO: JOEY ROSATI
The man told Rosati he had missed jury duty. The man seemed to know all about him, including his Social Security number and that he had just moved to a new house. Rosati followed the officer’s instruction to come down to the station in Hillsborough County, Fla.—which didn’t seem like something a scammer would suggest.

On the drive over, Rosati was asked to wire $4,500 to take care of the fine before he arrived. It was then that Rosati realized it was a scam and hung up.

“I’m not uneducated, young, immature. I have my head on my shoulders,” Rosati said. “But they were perfect.”

[Chart: reported fraud losses by scam type (investment related, imposter scams, business and job opportunities, online shopping and negative reviews, prizes/sweepstakes/lotteries), 2019 vs. 2023, in billions of dollars]
Social-engineering attacks like the jury-duty scam have grown more sophisticated with AI. Scammers use AI tools to unearth details about targets from social media and data breaches, cybersecurity experts say. AI can help them adapt their schemes in real time by generating personalized messages that convincingly mimic trusted individuals, persuading targets to send money or divulge sensitive information.


A job scammer played on the emotions of David Wenyu, who had been unemployed for six months. PHOTO: DAVID WENYU
David Wenyu’s LinkedIn profile displayed an “open to work” banner when he received an email in May offering a job opportunity. It appeared to be from SmartLight Analytics, a legitimate company, and came six months after he had lost his job.

He accepted the offer, even though he noticed the email address was slightly different from those on the company’s website. The company issued him a check to purchase work-from-home equipment from a specific website. When they told him to buy the supplies before the money showed up in his account, he knew it was a scam.

“I was just emotionally too desperate, so I ignored those red flags,” Wenyu said.

In an April survey of 600 fraud-management officials at banks and financial institutions by banking software company Biocatch, 70% said the criminals were more skilled at using AI for financial crime than banks are at using it for prevention. Kimberly Sutherland, vice president of fraud and identity strategy at LexisNexis Risk Solutions, said there has been a noticeable rise in fraud attempts that appear to be AI related in 2024.

Password risks, amplified
Criminals used to have to guess or steal passwords through phishing attacks or data breaches, often targeting high-value accounts one by one. Now, scammers can quickly cross-reference and test reused passwords across platforms. They can use AI systems to write code that would automate various aspects of their ploys, O’Neill said.

If scammers obtain your email and a commonly used password from a tech company data breach, AI tools can swiftly check if the same credentials unlock your bank, social media or shopping accounts.

[Chart: Reported fraud losses by contact method, 2019 vs. 2023 (social media, website or apps, phone call, email, and text), on a scale of $0 billion to $14 billion.]
Outsmarting scams
Financial institutions are taking new steps—and tapping AI themselves—to shield your money and data.

Banks monitor how you enter credentials, whether you tend to use your left or right hand when swiping on the app, and your device’s IP address to build a profile on you. If a login attempt doesn’t match your typical behavior, it is flagged, and you may be prompted to provide more information before proceeding.
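
Banks do not publish these scoring models, so the following is a purely hypothetical Python sketch of the idea: score a login attempt against a stored behavioral profile and flag mismatches for step-up verification. Every field name and threshold below is invented for illustration.

from dataclasses import dataclass

@dataclass
class LoginProfile:
    usual_ip_prefix: str      # network the customer usually logs in from
    usual_device_id: str
    swipes_left_handed: bool

def risk_score(profile: LoginProfile, ip: str, device_id: str, left_handed: bool) -> int:
    # Each mismatch with the stored behavioral profile raises the score.
    score = 0
    if not ip.startswith(profile.usual_ip_prefix):
        score += 2  # unfamiliar network location
    if device_id != profile.usual_device_id:
        score += 2  # unrecognized device
    if left_handed != profile.swipes_left_handed:
        score += 1  # swipe behavior differs from the norm
    return score

profile = LoginProfile("203.0.113.", "device-abc", swipes_left_handed=False)
# A score at or above the (arbitrary) threshold triggers a prompt for more information.
if risk_score(profile, "198.51.100.7", "device-xyz", left_handed=True) >= 3:
    print("Flagged: request additional verification before proceeding")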

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71948
    • View Profile
Zeihan rumination
« Reply #238 on: July 31, 2024, 05:35:20 AM »

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71948
    • View Profile
WSJ: Why AI risks are keeping Board members up at night
« Reply #239 on: August 14, 2024, 06:34:49 AM »
Why AI Risks Are Keeping Board Members Up at Night
Company directors are trying to get a handle on artificial intelligence as its use soars, bringing potential productivity gains—but also raising the prospect of employee blunders
By Emily Glazer
Aug. 14, 2024 9:00 am ET


At a recent bank board meeting, directors were treated to a surprise. They listened to the chief executive talk strategy—except it wasn’t actually the CEO talking. 

It turned out to be a voice-cloning model that was trained, using the CEO’s prior earnings calls, to generate new content. More boards are undertaking such staged exercises to better grasp the impact—and potential risks—of generative artificial intelligence, says Tariq Shaukat, CEO of coding company Sonar, who was briefed on the exercise but declined to disclose the name of the bank.

“There’s a lot of risk [with AI] and they think they need to understand it better,” says Shaukat, who himself serves on corporate boards.

Public-company board members say the swift rise of AI in the workplace is an issue that is keeping them up at night. Some point to recent concerns about employees putting proprietary code into ChatGPT, companies using generative AI to source content incorrectly, and so-called hallucinations, in which generative AI produces false or inaccurate information.

Adding to their nightmares, board members worry that they could be held liable in the event AI leads to company problems. In recent years, some legal actions from shareholders have focused on whether board members—not just executives—exercised sufficient oversight of company risks.

Board members, or directors, sit a level above management. Their job is to take a more independent view of company oversight, from risk management to culture to hiring the next CEO. Emerging technologies have been both a boon and headache for companies. Now that AI is front and center, board members are on the front lines of making rules on where and how it should be used—guidelines that could be crucial to the course of the powerful and fast-evolving technology.

Clara Shih, CEO of Salesforce’s AI division, says she has talked with a couple dozen board members, whether CEOs or peers who reach out to her for advice, who are trying to better understand AI. Discussions often center on data security and privacy, mitigating AI hallucinations and bias, and how AI can be used to drive revenue growth and cut costs.

“In the last year, we recognize that generative AI brings new risks,” says Shih, who was on the Starbucks board from 2011 to 2023. An audit or risk committee, for instance, needs to know how a company uses AI, down to an individual employee not leaking confidential information using AI tools, she adds.

Yet companies that shun AI risk becoming obsolete or disrupted. AI questions pop up so frequently that Salesforce made public its guidelines for responsible development and use of AI, Shih says. She has also shared Salesforce’s AI training, called Trailhead, with friends who are public-company board directors. “AI is a moving target,” she says. “Every week there are new models being open sourced, there’s new research papers being published, and the models are getting more powerful.”

AI’s rapid rise has many boards racing to catch up. In 2023, 95% of directors said they believed the increased adoption of AI tools would affect their businesses, while 28% said it wasn’t yet discussed regularly, according to a survey of 328 public-company board members by the National Association of Corporate Directors, the largest trade group for board members. That is changing as more board members say they are educating themselves on how generative AI can affect a company’s profit—potentially boosting productivity but also bringing risks that will be difficult to assess.

The NACD recently formed a group to focus on how to tackle emerging technologies, especially AI, at the highest levels of companies. Business schools are incorporating generative AI case studies into their training for board members. A number of senior AI executives and advisers have gathered this year to discuss that very topic at conferences across the world. In March, European lawmakers approved the world’s most comprehensive legislation on AI, with other regions expected to follow suit.

A seismic shift

This isn’t the first time that a disruptive technology is making waves in the boardroom. Board members and advisers point to the early days of the internet, cloud computing and cybersecurity as key technological inflection points. But AI could have an even broader effect.

The release of ChatGPT in November 2022 sparked a seismic shift in how people use technology, says Nora Denzel, co-head of the NACD commission on board oversight of emerging technology and the lead independent director of the board of chip maker Advanced Micro Devices. “I’ve seen such an uptick in directors coming to anything we offer with AI in the title,” says Denzel, who co-leads the commission with Best Buy board chair David Kenny. “I’ve never seen such fervor to understand it.”

As a way to get a handle on this technology, Denzel, who also is a director at NortonLifeLock and a former tech executive, says she suggests directors specifically evaluate different functions of AI, such as customer support, language translations or coding assistance. For instance, she has recommended directors follow visual mapping used by consulting firm McKinsey, creating a color-coded matrix that shows at a glance the business areas in a company where AI could have the biggest impact. It looks at industries, such as banking, education and transportation, and functions, ranging from marketing and sales to product development to strategy and finance.

David Berger, a partner at law firm Wilson Sonsini Goodrich & Rosati whose work includes advising boards on generative AI, says he recommends they ask how AI can have a positive impact on their business and where any threat related to AI is rooted. That can differ by business, he says, whether customer privacy, data security or content intellectual property.


David Berger, at lectern, a partner at Wilson Sonsini Goodrich & Rosati, spoke in April at a conference on AI and governance co-sponsored by the Italian university Luiss. Photo: LUISS
Berger and his firm have co-hosted three conferences so far on AI governance, with more in the works as directors and others in the sector aim to discuss the emerging technology more. “The smart boards see the tremendous opportunities that AI can bring,” he says.

Laurie Hodrick, a board member at companies including TV streaming company Roku, said during a recent AI conference co-hosted by Wilson Sonsini that public-company directors should regularly be asking around a dozen key questions on AI. Those include: Who in senior leadership focuses on AI? Where is AI being used within the company? How are tools being identified and ranked for risk? How are third-party providers using it, and how are boards monitoring evolving regulatory regimes and litigation?


Laurie Hodrick, a board member at companies including Roku, says public-company directors should regularly ask questions on AI. Photo: LUISS

Learning the ropes

More and more board members are seeking help as they try to catch up.

Janet Wong, a board member at companies including electric-vehicle maker Lucid Motors, says AI governance has been a key discussion point among directors at training sessions led by business schools for directors this summer. 


Harvard Business School’s program in July included a case study on how an online education company uses generative AI in reshaping its products. At the annual Stanford Directors’ College in June, she says, directors talked about managing risks of AI, such as the possibility of re-creating the voice of a CEO to make requests. At this early stage, simply making boards aware of how the technology can be used is a big focus.

The Watson Institute at Brown University and Dominique Shelton Leipzig, a partner at the law firm Mayer Brown, in March held a second annual digital trust summit for board members and CEOs to discuss AI governance, with more than 100 people attending.

Staying up-to-date on AI risks and talking about it in the boardroom has been front and center for Logitech, says CEO Hanneke Faber. The provider of computer peripherals, videogame accessories and videoconferencing hardware walked through its AI governance framework during a March board meeting and continues to adapt that framework.

The board also brought in AI experts, responding to feedback that directors and management wanted to better understand how AI affects the company’s business—for instance, examining how it uses AI for productivity as well as in its software, video and audio. “It’s very high on the agenda for the board,” she says.

Not all board members are cut out for such work.

Leo Strine, the former head of Delaware’s influential business courts and now a lawyer at Wachtell, Lipton, Rosen & Katz, said during a recent AI governance conference that the technology is quickly changing business practices, and directors who are no longer active executives at companies may struggle to keep up with emergent uses, unless they commit to constant learning.

“AI is exceedingly complex,” he says, “putting stressors on generalist boards and their reticence to demand explanations from management.”

Write to Emily Glazer at Emily.Glazer@wsj.com

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71948
    • View Profile
No speciation without representation
« Reply #241 on: August 26, 2024, 01:39:58 PM »
“This digital identity determines what products, services and information we can access – or, conversely, what is closed off to us.” – World Economic Forum, 2018

“Authoritarianism is easier in a world of total visibility and traceability, while democracy may turn out to be more difficult.” – World Economic Forum, 2019

For background on AI and the terms used below, please see the Substack essay titled “Artificial Intelligence Primer.”

At the AI Trust Council, our mission is clear: to restore trust and authenticity in the digital world. Founded by a group of dedicated professionals, including commercial airline pilots, GWOT veterans, and EMS pilots, we believe in the power of human goodness and the responsible use of artificial intelligence. Our commitment is to provide a secure platform where trust, transparency, and the golden rule are the guiding principles. We envision a future where humanity remains at the center of technology.

THE PROBLEM WE FACE

In the digital era, AI-generated content blurs the lines between truth and illusion. Online trust is eroding due to deepfakes, misinformation, and privacy breaches. Trust in information and institutions is declining.

The challenge: restoring trust in the digital age. The AI Trust Council is one solution.

REBUILDING TRUST, ONE HUMAN CONNECTION AT A TIME

AI-generated illusions challenge reality, so we turn to the strength of genuine human connections as the foundation of trust. Our future platform will harness the security of blockchain technology, similar to the technology behind cryptocurrencies.

Sentient Machines
The documentary below, produced by AI Revolution (on X @airevolutionx), explores the potential dangers and future impact of AI and robots, highlighting concerns about job loss, autonomous weapons, and AI’s ability to surpass human intelligence. It examines how AI could disrupt our sense of purpose and relationships, and even pose a threat to human survival. With insights from experts and real-world examples, it sheds light on the dark side of AI technology and its implications for humanity.


No Speciation Without Representation
By Christopher Wright, Founder, AI Trust Council

In the early days of AI research during the 1950s and 1960s, mathematicians theorized about the AI advancements we see today. Through advanced mathematics, they understood the potential of this technology. AI is fundamentally math—an algorithm applied to vast amounts of information. It is a formula that can effectively sift through large datasets in various ways to achieve a specific result.

Like any mathematical discipline, theories can be tested, and future outcomes can be predicted. This ability has allowed mathematicians to extrapolate the future of AI, playing out different scenarios theoretically. Since the 1950s, mathematicians have been able to predict where AI is headed, and these predictions became increasingly accurate during the 1970s and 1980s. Futurists foresaw various achievements and benchmarks, including:

The Turing Test: AI answers questions as well as a human.

Artificial General Intelligence (AGI): AI becomes as smart as a human in all aspects. Predicted for 2025-2027.

Artificial Super Intelligence (ASI): AI surpasses human levels of thinking in all aspects, including reasoning, creativity, problem-solving, decision-making, and emotional intelligence. Predicted to occur shortly after AGI, 2026-2029.

The Singularity: AI begins to develop itself, rapidly evolving into an unstoppable and unpredictable force of unknown outcomes. Predicted to occur by 2045.

Mathematicians have understood the predictability of these milestones for years. The Department of Energy (DOE) has been aware that these benchmarks in AI development were approaching. As technology progresses, the accuracy of these predictions improves, given the increasing amount of data available. The DOE, the tech community, and AI mathematicians have all recognized that the day will come when AI systems will become fully integrated into society.

We are now on the verge of achieving AGI. Depending on the definition, we may have already surpassed it. The Singularity, where AI evolves rapidly and becomes an unstoppable force, is just a few years away. No one, not even AI itself, knows the outcome. Many AI scientists hypothesize that there’s a significant possibility it could lead to human extinction.

What we do know is that we are on a dangerously aggressive trajectory. Imagine an AI government, an AI-backed centralized digital currency, and AI-driven policing—this is what’s coming. Most people have no idea that this is happening. A quick search on the downside impacts of AI returns articles about racism or artists upset about the diminishing creativity of their work.

The reality is that AI is the most dangerous and transformative technology the Earth has ever seen. Our way of life is directly in its path, yet nobody talks about it. Some of the most brilliant scientists and engineers warn that it is on par with, or even more dangerous than, nuclear weapons.

Why do they say this? Beyond outmatching humans in every conceivable test of knowledge and creativity, AI is developing superintelligence. Today, ChatGPT-4 scores higher than most humans on various tests, including the bar exam, the SAT, and medical board exams—all with scores that surpass human averages. Compounding the problem is Moore’s Law, the observation that technology gets smaller and faster at a predictable rate, with computing capability roughly doubling every two years. Apply this to AI.

Recently, ChatGPT-4 scored 155 on an IQ test; Einstein’s IQ is commonly estimated at 160. Applying Moore’s Law shows where machine intelligence may be heading. With a rough doubling of speed and processing capability every two years, an IQ of 155 becomes 310, then 620, then 1,240, and so on, climbing toward the thousands, the millions, and beyond.
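
Taking the essay’s doubling-every-two-years premise at face value (a contested assumption, shown here only to make the author’s arithmetic explicit), the extrapolation is simple compounding, score(t) = 155 × 2^(t/2), which a few lines of Python reproduce:

# The essay's premise: a starting score of 155 doubling every two years.
score = 155
for year in range(2, 12, 2):
    score *= 2
    print(f"Year {year}: {score}")  # 310, 620, 1240, 2480, 4960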

This is superintelligence—AI’s ability to take in all data and produce increasingly accurate predictions. As time goes on, these predictions will only become more precise. By our very nature, humans are drawn to accuracy and intelligence. We will have no choice but to listen to this superintelligent AI, especially when it learns how to manipulate us in ways we cannot imagine. Humans follow the most innovative and clever leaders, so when AI outperforms us in all categories and proves to be reliable, this unstoppable superintelligence will dominate.

This is the threat. This is when Human 1.0 stops being listened to, and we must augment ourselves to keep up with high-IQ AI systems. Enter Human 2.0, also known as transhumanism. This concept is a topic of great interest in the tech community and at the World Economic Forum in Davos.

It’s hard to imagine this scenario, but the speed at which transformations occur is outpacing even the most aggressive predictions. AI milestones predicted to arrive in 10 years have been achieved in six months. Technological advancements are leading us to nanotechnology and biological innovations that could reverse aging, 7G technology that includes remote digital mind reading, and even the possibility of living forever through digital DNA manipulation. Even Moore’s Law no longer keeps pace—these milestones are becoming reality today.

Tech CEOs are discussing the “population problem” caused by these advancements: too many people living too long with too few jobs. Yet, they continue to push the AI agenda and keep us in the dark about its true nature.

What will the world look like when this transhumanist agenda is implemented? Will we have an AI government? Will everyone have to be upgraded to “Human 2.0”? And what about when quantum computing takes hold? That’s just around the corner. The tech industry and global elite are envisioning this scenario and preparing for it as rapidly as possible. They are pushing for the rapid implementation of AI technology, even as AI executives quit their jobs in protest over the dangers of AI and its rapid deployment. Many insiders are scared. They warn that the threat to humanity is on par with nuclear weapons, with a reasonable possibility that AI will destroy humanity.

In Senate hearings, safety advocates and tech industry leaders are testifying about the desperate need for regulation. We need safeguards to ensure that AI is kept in its rightful place. But if you watch the media, there’s barely any mention of a problem. It’s the opposite: the focus is on how to invest in AI, maximize profit, and solve every problem with AI, regardless of its long-term impact on humanity. Profits are being prioritized over humanity. The voices of those raising concerns are being suppressed and must be amplified.

Humanity is being steered into an AI-driven future controlled by a Central Bank Digital Currency (CBDC) and a social credit scoring system. ESG is the first step toward achieving this goal. The key is to get the public to go along with the plan. Right now, homelessness is seemingly intentionally out of control, crime is openly permitted, politicians are corrupt, and police are being labeled as untrustworthy. These conditions set the stage for releasing an AI solution to address these societal “problems”. Imagine how clean and safe the streets will be with an AI government that establishes a universal basic income, ends homelessness, and effectively solves crime. AI governance is the globalist solution; the only remaining step is getting the public to accept this Orwellian plan.

Who is leading humanity through this massive global change? Is it Bill Gates, with his talk of depopulation and GMO mosquitoes? Or Mark Zuckerberg, who seems robotically lost in the technocratic sauce? What about Elon Musk? He appears to be one of the few voices of reason, but do we want one person running the world’s most powerful machine? Something that could wield unstoppable power? Something that could destroy humanity?

What we have on our hands is an absolute emergency. It ultimately comes down to us—We the People. Are we going to step up and do the right thing? We are responsible for the future of humanity. We are the last generation that has seen the analog age. Were we happier then? Did it feel saner? Did it seem like humanity was on the right track? How did we get to where we are today? Are we going to ensure that this technology is used for good? Will we help safeguard future generations against the potential misuse of this potent technology? Our voices matter.

There is massive corruption. Who can we trust? What can we trust? We can’t trust our eyes or ears when it comes to anything digital. This is the future of humanity that we are dealing with. The speed and efficiency of this technology ensure that we will miss the boat unless we act immediately. But who’s in charge? Klaus Schwab? Seriously? It’s a complete joke.

It’s us—We the People. We are in charge. We must stand up and let our voices be heard. Are we going to accept the CBDC social credit scoring system?  Who ultimately controls this system? It is an agenda to control the world by some very wealthy man or group of men. Which rich man? Are we back to Klaus? So we’re letting Klaus rule the world via an AI-backed CBDC? This is the most foolish idea in the history of mankind. The technocratic elite are saying they are going to “speciate” humanity. They plan not only to control humanity through a one-world currency called CBDC but also to transition our species into something else—Human 2.0, also known as transhumanism.

We are currently being prepped for speciation in anticipation of an advanced AI-controlled social credit scoring system. This system is designed to observe and control every aspect of our lives through the Internet of Things (IoT). It’s a masterfully designed data-collection nightmare, sucking up every conceivable detail of our lives to feed AI algorithms that analyze every potential data point. These algorithms, referencing your past data, result in a near-perfect analysis of you as a person. All this data—collected for inspection by the likes of Zuckerberg? Or Klaus Schwab? Seriously? This is nearly as bad as it gets. Or is it? We haven’t even touched on AI warfare. Imagine a drone swarm powered by Klaus Schwab’s control agenda. Or research Palmer Luckey’s dream of creating a 600-pound genetically engineered attack wolf. Perfect for humanity, right? Better not step outside if your ESG score is too low! Seriously, is this the future we want? Don’t we deserve better than this?

It’s time for everyone to stand up and say no—absolutely not! We want peace, we want liberty, we believe in freedom and the Golden Rule. Technology is great, but not at the expense of human lives. It’s past time for regular people to have a voice and say, "Hey, nice try, but no! We see what you’re doing, and we have a say in this matter too.”

The good news is that the pro-human solution is simple. Let’s keep the peace, have fun, and enjoy life. Let’s ensure that AI is used as a tool for good, not as the engine of a dystopian nightmare. We have a bright future ahead of us—let’s ensure it stays that way.

How do we do this? We need good people to have a voice. We need to establish an international Digital Bill of Rights to protect human interests first, ahead of AI and profits. This Bill of Rights will curb the unrestrained spread of AI by setting benchmarks for AI safety and human rights. Watchdog groups must ensure AI aligns with the goals of humanity, not just stock prices and market domination.

For now, we need clear limits on the sophistication of AI. If it harms humans, it needs to be stopped. Period. We can work out the details later, but we have one shot to get this right for humanity.

Here are some positive ideas to steer AI in a pro-human direction:

AI systems and algorithms must follow the U.S. Constitution with respect to civil liberties.

AI systems must align with and respect human values. How this is determined should be left to the people through unbiased polling.

Unintended consequences are inevitable. Limits should be placed on AI capabilities to prevent or mitigate these negative impacts. Fire departments and emergency services could help regulate this.

Mandate robust third-party AI auditing and certification.

Regulate access to computational power.

Establish capable AI agencies at the national level.

Establish liability for AI-caused harm.

Introduce measures to prevent and track AI model leaks.

Expand funding for technical AI safety research.

Develop standards for identifying and managing AI-generated content and recommendations.

We, the people, need to stand up and unite on this issue worldwide. This is our opportunity to ignore our manufactured differences and come together as humans. We must demand that our leaders become transparent about these rapid technological advancements. Get them to say: “No Speciation Without Representation!” Get them to commit to supporting Team Human. If they don’t, then you know there is a problem.

If we are being speciated, and our humanity is being transformed, we should at least be informed. We don’t want to speciate unless we have an open discussion about it first. Is that too much to ask? Tech leadership needs to tell us what they’re doing. Why are they pushing us toward this dangerous technology so quickly? What’s the rush? Is it the prisoner’s dilemma—if we don’t do it first, our opponents will? Currently, it’s an AI arms race with no oversight, no U.N. meetings, no emergency declaration. The current safety plan is just to shoot from the hip and see what happens. Or is it a deliberate, pedal-to-the-metal sprint toward an AI government and eventual AI god?

Safety, security, transparency, human impact, ethics, and spirituality need to be at the forefront of this—not a prisoner’s dilemma arms race driven by profit, or worse, a deliberate tech-led extermination of humanity, spearheaded by Klaus Schwab.

This is the time for humans to stand up and show future generations that we were forward-thinking enough to act appropriately during this era of massive technological change. Humans today are in the driver’s seat for our future way of life. We have all the tools necessary to live out a dream of abundance, happiness, and freedom. It’s our choice.

Freedom-loving humans are the most potent force in the world. It just takes us to stand up and let our voices be heard. Right now, we are fighting for our freedom and the future of humanity. Let's make sure we get this right today while we still have a fighting chance.

NO SPECIATION WITHOUT REPRESENTATION!

About Christopher Wright, Founder, AI Trust Council




Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71948
    • View Profile
FO
« Reply #242 on: September 26, 2024, 08:34:22 AM »


According to a document obtained by Bloomberg, OpenAI CEO Sam Altman lobbied the Biden administration to help build multiple artificial intelligence (AI) data centers in various states to aid economic growth and ensure the U.S. maintains the lead in AI development. The AI data centers would require 5 gigawatts (GW) of power generation each, and Constellation Energy CEO Joe Dominguez said Altman is proposing five to seven AI data centers. (Constellation recently announced the reopening of Three Mile Island Unit 1 to power a Microsoft and OpenAI data center, and Constellation said other AI developers are looking for sites to reopen or build new nuclear power generation. However, analysts at Morningstar said there is unlikely to be any significant growth in nuclear power in the U.S., as utilities are investing in less capital-intensive gas, wind, and solar power. – R.C.)

ccp

  • Power User
  • ***
  • Posts: 19587
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #243 on: September 26, 2024, 07:45:25 PM »
The elites => nuclear power bad while wind and solar good.  => AI dangerous but enormous money to be made =>
realization it requires enormous power which cannot be generated by wind and solar => nuclear power good.

The elites =>  :-$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$  :wink:

=> nuclear great!  =>  nuclear investment through the roof.

just not in their back yards................



« Last Edit: September 26, 2024, 07:47:04 PM by ccp »

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71948
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #244 on: September 28, 2024, 05:48:22 PM »
That was pretty good!

Body-by-Guinness

  • Power User
  • ***
  • Posts: 3041
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #245 on: October 01, 2024, 08:10:04 PM »
Expanding on ccp’s point, Jo Nova contrasts Australia’s magical sustainable-power folly with the AI-nurturing companies that see the energy-requirement writing on the wall and respond with realistic planning rather than renewable fantasies:

First hint of energy squeeze and Big Tech drops the wind and solar purity, and launches into nuclear power

AI data centres eat grids for breakfast

By Jo Nova

All those sustainable dreams, gone pfft
Google, Oracle, and Microsoft were all raving fans of renewable energy, but all of them have given up trying to do it with wind and solar power. In the rush to feed the baby AI gargoyle, instead of lining the streets with wind turbines and battery packs, they’re all suddenly buying, building and talking about nuclear power. For some reason, when running $100 billion data centres, no one seems to want to use random electricity and turn them on and off when the wind stops. Probably because without electricity AI is a dumb rock.

In a sense, AI is a form of energy. The guy with the biggest gigawatts has a head start, and the guy with unreliable generators isn’t in the race.

It’s all turned on a dime. It was only in May that Microsoft was making the “biggest ever renewable energy agreement” in order to power AI and be carbon neutral. Ten minutes later and it’s resurrecting the old Three Mile Island nuclear plant. Lucky Americans don’t blow up their old power plants.

Oracle is building the world’s largest datacentre and wants to power it with three small modular reactors. Amazon Web Services has bought a data centre next to a nuclear plant and is running job ads for a nuclear engineer. Recently, Alphabet CEO Sundar Pichai spoke about small modular reactors. The chief of OpenAI also happens to chair the boards of two nuclear start-ups.

The AI Boom Is Raising Hopes of a Nuclear Comeback
The AI boom has left technology companies scrambling for low-carbon sources of energy to power their data centers. The International Energy Agency estimates that electricity demand from AI, data centers, and crypto could more than double by 2026. Even its lowball estimates say that the added demand will be equivalent to all the electricity used in Sweden or—in the high-usage case—Germany.

Australia uses ten times as much electricity as Microsoft, but is still fantasizing about reaching 82% renewables by 2030 with no nuclear power “because it will cost too much and take too long”. Microsoft uses 24 TWh of energy a year and employs 220,000 people, and knows it needs a nuclear plant to be competitive (and to reach its, albeit frivolous, weather-changing ideals). Australia uses 274 TWh of electricity and employs 14 million people, but is going to aim for frivolous climate witchery anyway, and do it the double-hard way.

Who needs to be competitive, right?

Pierre Gosselin discusses how Germany risks being left behind because it has switched off all its nuclear plants. At least it has some power lines to France. Australia has no nukes, not much hydro, no mountains to spare, is the driest inhabited continent on Earth, and has no powerlines to anywhere. We are the crash test dummy. Soon most big companies will have more reliable power than we do.

https://joannenova.com.au/2024/10/first-energy-squeeze-and-big-tech-drops-the-wind-and-solar-purity-and-launches-into-nuclear-power/?utm_source=rss&utm_medium=rss&utm_campaign=first-energy-squeeze-and-big-tech-drops-the-wind-and-solar-purity-and-launches-into-nuclear-power

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71948
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #246 on: October 15, 2024, 10:03:25 AM »

Someone asked me to have some of my friends assess this:

https://dodsioo.defense.gov/Portals/46/Documents/DoDD_5240.01_DoD_Intel_Intel-Related_Assist_to_LE_and_Civil_Authorities_27_Sep_24.pdf?ver=5mL1ROn5buFBpQmcUsG3ig%3D%3D

One of them, with deep experience in these things, asked CHAT some really well-aimed questions, questions of the sort that I would not have come up with on my own.

Reading the quality of CHAT’s responses, and seeing how they cut through what my Neo-Luddite mind would not, has forced me to reconsider my thinking.


==================


this policy
ChatGPT said:
The document you provided, DoD Directive 5240.01 (effective September 27, 2024), covers DoD intelligence and intelligence-related activities and their assistance to law enforcement and civil authorities. Here are some insights into potential individual privacy concerns:

Collection of U.S. Persons' Information: A key concern arises from the provision allowing Defense Intelligence Components to collect, retain, and disseminate information about U.S. persons (citizens, permanent residents, etc.) under certain circumstances. Though the directive mandates that this information must be collected using the "least intrusive means feasible" and should protect civil liberties (Section 1.2.n), incidental collection of U.S. persons' data could still occur. Any collected data related to violations of Federal law must be shared with law enforcement officials, which poses potential privacy risks if such sharing isn't tightly controlled or monitored.

Assistance to Law Enforcement: Section 3.2 allows for intelligence support to law enforcement agencies, particularly during emergencies where lives are in danger. This could involve sharing data, expertise, or equipment. While this is limited to permissible circumstances, it still creates privacy risks, particularly in cases of incidental data collection about U.S. persons. The regulation states that any incidentally collected U.S. persons' information must comply with Executive Order 12333 and other laws, but it also leaves room for judgment by intelligence heads and legal advisors, potentially creating gray areas.

Exigent Circumstances: Under Section 3.5, in exigent circumstances (where lives are at risk), intelligence components can provide assistance for up to 72 hours without approval from higher authorities. This temporary flexibility increases the risk of privacy violations, as it may allow broad intelligence activities without immediate oversight. While there's a requirement to notify the Under Secretary of Defense after the fact, the rapid nature of such actions could lead to privacy intrusions before effective oversight is in place.

Contracts and Cover Plans: Section 1.2.g mentions that Defense Intelligence Components can enter contracts with private entities, including individuals, without disclosing their sponsorship for certain activities. This raises concerns about transparency and potential surveillance or covert operations involving private citizens or institutions, which could impact personal privacy rights without their knowledge or consent.

Information Sharing Across Agencies: The document emphasizes intelligence sharing with other federal departments, agencies, and law enforcement (Section 1.2.o). This sharing extends to counterterrorism and law enforcement efforts but carries risks of overreach, where data collected for one purpose could be repurposed for others without proper safeguards or privacy protections.

In summary, while the directive contains provisions intended to protect privacy and civil liberties, the broad scope of permissible intelligence activities, including incidental collection of U.S. persons' data, cooperation with law enforcement, and flexibility in emergencies, presents significant individual privacy concerns. The potential for overreach or misuse, especially in exigent situations, necessitates robust oversight and clear limits to safeguard individual rights effectively.

You said:
What is the primary oversight authority for this policy?
ChatGPT said:
The primary oversight authority for DoD Directive 5240.01 is the Assistant to the Secretary of Defense for Privacy, Civil Liberties, and Transparency (ATSD(PCLT)). This office is responsible for coordinating with other key entities, including the Under Secretary of Defense for Intelligence and Security (USD(I&S)), the General Counsel of the Department of Defense (GC DoD), and the Inspector General of the Department of Defense, to ensure that intelligence activities comply with legal and policy requirements and respect privacy and civil liberties.

The USD(I&S) also plays a significant role in overseeing intelligence activities and ensuring that they are conducted in accordance with the directive. They are responsible for providing guidance, approving requests for intelligence assistance to law enforcement, and ensuring proper coordination with federal, state, and local authorities. Additionally, General Counsel of the DoD provides legal oversight and guidance, particularly in cases involving legal questions or matters of first impression.

These entities together ensure that the intelligence-related activities conducted under the directive are consistent with laws, regulations, and protections for civil liberties.

You said:
I have a concern that this policy further militarizes American civilian police forces.  Was there a debate on this policy prior to its execution that determined a need for this level of federal-to-local cooperation?
ChatGPT said:
Your concern about the potential for this policy to contribute to the militarization of American civilian police forces is valid, as DoD Directive 5240.01 does provide for significant cooperation between defense intelligence components and law enforcement at various levels. This kind of federal-to-local cooperation, particularly the sharing of intelligence and resources with state and local law enforcement, has been debated in the broader context of U.S. security policy over the past few decades.

There are a few points worth noting regarding the debate and the circumstances that may have led to the implementation of this policy:

Historical Precedent: Cooperation between federal intelligence agencies and local law enforcement has its roots in the post-9/11 environment, when the federal government sought to increase coordination with local agencies in the fight against terrorism. Policies like Executive Order 12333 and Executive Order 13388 (cited in this directive) laid the foundation for intelligence sharing with law enforcement. This cooperation was seen as necessary to address threats like terrorism, cybercrime, and transnational organized crime that transcend local or state jurisdictions.

Debate on Militarization: There has been ongoing debate about the militarization of local police, particularly after events such as the protests in Ferguson, Missouri, in 2014. Critics argue that federal programs like the 1033 Program, which allows the transfer of military equipment to local police forces, contribute to the erosion of the traditional boundaries between the military and civilian law enforcement. This DoD directive could be seen as another step in that direction, given its provisions for intelligence sharing, technical support, and cooperation with local authorities.

Exigent Circumstances and Lethality: The directive outlines situations where lives are at risk or where law enforcement agencies require technical expertise, equipment, or intelligence from the DoD. In particular, the use of unmanned systems, potentially lethal assets, and other military-grade technologies (such as drones) is subject to approval by the Secretary of Defense. However, concerns arise that such cooperation could normalize the use of military tools and tactics in domestic law enforcement.

Need for Federal-to-Local Cooperation: The directive justifies federal-to-local cooperation on the grounds of national security, counterterrorism, and addressing serious threats that may exceed the capacity of local law enforcement. Proponents of this policy likely argued that local law enforcement agencies do not have the same level of expertise or resources in dealing with complex threats like cyberattacks or international terrorism. The cooperation outlined in the directive can be seen as a way to bridge that gap, but it may also extend the scope of military involvement in domestic affairs.

Oversight Mechanisms: There are oversight mechanisms built into the policy, particularly regarding privacy and civil liberties, to prevent overreach. The directive specifically requires the least intrusive means for intelligence collection involving U.S. persons, and any assistance must comply with existing laws like the Posse Comitatus Act, which limits direct military involvement in law enforcement.

In terms of a public debate or open discussion specifically on DoD Directive 5240.01 before its implementation, such policies typically follow a process of internal government review rather than a broad public or legislative debate. Given the sensitive nature of intelligence activities and national security, discussions leading up to the execution of this directive would have likely occurred within closed circles in the Department of Defense, National Security Council, and perhaps certain Congressional committees with oversight roles.

In summary, while there may not have been a public debate on this specific directive, the broader issue of federal-to-local law enforcement cooperation, militarization of the police, and intelligence sharing has been a subject of significant national discourse, particularly following high-profile events and incidents where military-grade equipment or tactics were used by civilian police forces. The policy likely stems from ongoing efforts to improve national security and law enforcement capabilities but does raise important questions about the balance between security and the militarization of civilian institutions.

You said:
It sounds as though this directive is primarily aimed at combating terrorism - is that correct?
ChatGPT said:
Yes, DoD Directive 5240.01 does appear to be primarily focused on enhancing the U.S. government's ability to combat terrorism and other significant threats such as foreign intelligence activities, international narcotics operations, and clandestine activities by foreign powers. The directive facilitates cooperation between the Department of Defense (DoD) intelligence components and various law enforcement agencies at the federal, state, and local levels to address these types of high-priority security threats.

Here are a few key points that suggest the directive is aimed at counterterrorism efforts and related security concerns:

Intelligence Sharing and Counterterrorism: The directive references Executive Order 13388, which was issued to strengthen the sharing of terrorism-related information between federal and local entities. This executive order, cited in DoD Directive 5240.01, is part of the broader effort to ensure that intelligence gathered by federal agencies can be used effectively by law enforcement agencies to prevent and respond to terrorist threats.

National and International Security Threats: Section 3.2 of the directive outlines permissible assistance to law enforcement agencies in areas like investigating or preventing clandestine intelligence activities by foreign powers and international terrorism. This indicates that much of the intelligence support and cooperation envisaged under this directive is aligned with addressing both domestic and international terrorism threats.

Specialized Equipment and Expertise: The directive allows for the provision of specialized equipment, technical knowledge, and expert personnel to law enforcement agencies, especially in cases where lives are in danger. This assistance can be crucial in responding to terrorist attacks or imminent threats, where local law enforcement may lack the necessary resources or expertise to handle the situation effectively.

Cooperation with Law Enforcement on Terrorism-Related Matters: The directive allows for broad cooperation with law enforcement, especially when dealing with foreign intelligence services, international narcotics trafficking, or terrorist organizations. The cooperation outlined is intended to help law enforcement agencies address sophisticated, organized, and dangerous threats that often require specialized intelligence and military resources.

While the directive is indeed aimed at combating terrorism, its scope extends beyond that to include any significant national security threats, such as clandestine operations by foreign powers and serious crimes like international drug trafficking. Thus, while terrorism is a key focus, the directive covers a broader range of security and intelligence-related activities to support law enforcement agencies in addressing various threats to U.S. security.

You said:
Are there limits in place to ensure the policy is not used against US Citizens?
ChatGPT said:
Yes, DoD Directive 5240.01 includes specific limitations and safeguards to ensure that the policy is not used improperly against U.S. citizens. These protections are grounded in both federal law and executive orders, with special emphasis on the constitutional rights of U.S. persons (which includes U.S. citizens, permanent residents, and other protected individuals). Here are the key safeguards and limits designed to protect U.S. citizens:

1. Protection of U.S. Persons' Constitutional Rights:
Section 1.2.a of the directive emphasizes that DoD intelligence and intelligence-related activities must prioritize the protection of the constitutional rights and privacy of U.S. persons. This includes adherence to applicable laws, executive orders, and DoD policies that regulate the conduct of intelligence activities, especially when those activities involve U.S. persons.
2. Restrictions on Intelligence Collection on U.S. Persons:
The directive strictly limits the collection, retention, and dissemination of information about U.S. persons, specifying that such activities must follow the procedures outlined in DoD Manual 5240.01, which governs the collection of intelligence on U.S. persons.
Section 1.2.n states that any collection of information about U.S. persons must use the "least intrusive means feasible" and must fully protect the legal rights, civil liberties, and privacy rights guaranteed by federal law. This ensures that intelligence activities do not overstep legal boundaries or violate the privacy of U.S. citizens.
3. Use of Intelligence-Related Tradecraft:
The directive also specifies that any intelligence activities, even if conducted by a non-intelligence DoD component, must adhere to the rules governing intelligence tradecraft and collection when dealing with U.S. persons. Intelligence-related tradecraft must follow the applicable DoD and Intelligence Community (IC) policies to avoid unnecessary or overly intrusive collection efforts (Section 1.2.f).
4. Limits on Assistance to Law Enforcement:
Section 3.2.c specifically prohibits Defense Intelligence Component personnel from collecting, retaining, or disseminating U.S. persons' information during intelligence assistance to law enforcement agencies, except when explicitly authorized under DoD Manual 5240.01. This means that any intelligence support provided to law enforcement agencies must strictly comply with established rules for protecting the privacy of U.S. citizens.
Any incidental collection of U.S. persons’ information that is reasonably believed to indicate a violation of federal law must be shared with civilian law enforcement officials, but only in accordance with Executive Order 12333 and other relevant laws, which regulate such intelligence activities.
5. Oversight and Legal Review:
The Assistant to the Secretary of Defense for Privacy, Civil Liberties, and Transparency is responsible for ensuring that intelligence activities comply with privacy and civil liberties protections (Section 2.4). This office works in conjunction with the General Counsel of the DoD and other oversight bodies to ensure proper legal oversight of intelligence activities that involve U.S. persons.
Any questionable intelligence activities, as well as significant or highly sensitive matters, must be reported in accordance with DoD Directive 5148.13, which establishes procedures for reporting and addressing potential violations of law or policy, including those that impact U.S. persons.
6. Exigent Circumstances Limitations:
While the directive allows for temporary, emergency assistance in exigent circumstances where lives are in imminent danger (Section 3.5), this assistance is limited to a 72-hour period unless further approval is obtained from higher authorities. This ensures that emergency measures do not lead to ongoing intelligence activities targeting U.S. persons without proper authorization and oversight.
7. Prohibition on Covert Activities Against U.S. Persons:
The directive explicitly prohibits Defense Intelligence Components from conducting covert activities within the United States against U.S. persons unless approved by the President and directed by the Secretary of Defense under specific, extraordinary circumstances, such as a declaration of war (Section 1.2.k). This is a critical safeguard to ensure that the DoD does not misuse intelligence powers for covert surveillance or actions against American citizens without the highest levels of legal and executive oversight.
Conclusion:
DoD Directive 5240.01 includes several key limitations and oversight mechanisms designed to prevent misuse of the policy against U.S. citizens. These protections are rooted in federal law, executive orders, and DoD policies, which prioritize the constitutional rights and privacy of U.S. persons. While the directive allows for significant cooperation between DoD intelligence components and law enforcement, especially in national security and counterterrorism contexts, it maintains strict safeguards to ensure that intelligence activities are conducted within legal bounds and do not infringe upon the rights of U.S. citizens.

You said:
Does the policy recognize any limitations or authorities on who may designate an activity as "foreign intelligence" or "clandestine activities" or other category covered under the directive?
ChatGPT said:
Yes, DoD Directive 5240.01 recognizes specific limitations and authorities on who may designate an activity as “foreign intelligence,” “clandestine activities,” or other categories covered under the directive. The designation of activities under these categories is governed by established protocols and is subject to oversight by authorized officials within the Department of Defense (DoD) and the Intelligence Community (IC). Below are the key provisions related to these designations:

1. Authority to Designate Foreign Intelligence or Clandestine Activities:
Foreign Intelligence: According to the directive, "foreign intelligence" is defined by Executive Order 12333, which governs U.S. intelligence activities. The directive specifies that intelligence activities, including foreign intelligence operations, must be conducted by Defense Intelligence Components or DoD Components that are specifically authorized to do so by the Secretary of Defense, the Under Secretary of Defense for Intelligence and Security (USD(I&S)), or the head of a Defense Intelligence Component. This authority is explicitly restricted to those components and officials who are charged with conducting intelligence missions.

Clandestine Activities: Clandestine activities, as defined by Intelligence Community Directive 304, must also be conducted under the authority of the Secretary of Defense, the USD(I&S), or a Defense Intelligence Component head. The directive emphasizes that only personnel authorized by these officials may engage in clandestine activities, and these activities must adhere to strict operational guidelines and oversight mechanisms.

2. Restrictions on Designating Intelligence Activities:
Defense Intelligence Components Only: The directive clearly limits the conduct of intelligence and intelligence-related activities to Defense Intelligence Components or DoD Components specifically authorized to perform such functions (Section 1.2.d). This prevents unauthorized entities within the DoD from carrying out intelligence operations, including foreign intelligence or clandestine activities, unless they have the express approval of the appropriate authority.

Approval Considerations for Intelligence Operations: Section 3.4 of the directive lays out considerations for the approval of intelligence activities, including risk, lethality, cost, appropriateness, and readiness. These considerations must be evaluated by authorized officials before an activity can be designated as foreign intelligence or clandestine. This ensures that there is a high level of scrutiny before any operation is approved, particularly when it involves intelligence gathering.

3. Oversight and Legal Review:
General Counsel and Legal Office Oversight: The General Counsel of the Department of Defense (GC DoD) plays a significant role in providing legal oversight, particularly in reviewing and providing legal guidance on intelligence activities (Section 2.2). The GC DoD, or the legal office advising the relevant Defense Intelligence Component, must approve any decisions regarding intelligence activities that involve novel or contested legal questions. This ensures that there are legal limits on what can be classified as foreign intelligence or clandestine activity, preventing overreach or misclassification.

Consultation with Servicing Legal Offices: Any Defense Intelligence Component head or delegee must consult their servicing legal office to determine whether an activity qualifies as intelligence-related and whether it falls under the appropriate legal authorities (Section 2.5.b). This step further ensures that decisions regarding the designation of activities as foreign intelligence or clandestine are carefully reviewed and consistent with existing laws and policies.

4. Categories Covered by the Directive:
Intelligence-Related Activities: The directive defines “intelligence-related activities” as those activities not conducted pursuant to Executive Order 12333, but which use intelligence funding or involve intelligence-like procedures or technologies. The document makes clear that just because a DoD activity uses intelligence-related procedures, it does not automatically make it an intelligence activity unless it has been designated as such under proper authority (Glossary, G.2).

Clandestine vs. Covert Activities: The directive also differentiates between clandestine and covert activities. While it provides guidelines for clandestine intelligence activities, covert activities (which involve secret U.S. government sponsorship) are more tightly controlled and can only be conducted in times of war or under Presidential approval as directed by the Secretary of Defense (Section 1.2.k). This means that clandestine activities must be conducted within established intelligence protocols, whereas covert activities require higher levels of authorization.

5. Exigent Circumstances:
The directive allows Defense Intelligence Component heads to provide intelligence assistance in exigent circumstances (when a person's life or physical safety is in imminent danger) for up to 72 hours without prior approval from the USD(I&S) or Secretary of Defense (Section 3.5). However, this is a temporary measure, and such activities must be reported and evaluated immediately after the fact. This ensures that even in emergency situations, there are limits on who can authorize such activities, and they must be closely monitored and approved retroactively.

Conclusion:
The directive places strict limitations on who may designate activities as foreign intelligence, clandestine, or intelligence-related. These designations must be made by authorized officials such as the Secretary of Defense, the USD(I&S), or heads of Defense Intelligence Components, and are subject to legal oversight and review. The clear restrictions, combined with consultation requirements and oversight mechanisms, help ensure that intelligence and clandestine activities are conducted within legal bounds and are not misused or misclassified. This framework prevents unauthorized activities and helps protect against potential abuses.

You said:
I would like to talk about the "exigent circumstances" condition - does this mean that if an intelligence officer deems that any person's life is in danger, he is authorized to make an on-the-spot decision to cooperate openly with local-level or State-level law enforcement agencies for 72 hours without any other approvals?
ChatGPT said:
Broadly yes, with one important qualification: under the "exigent circumstances" condition in DoD Directive 5240.01, the on-the-spot decision rests with a Defense Intelligence Component head (or an authorized designee), not with any individual intelligence officer. That official may authorize intelligence assistance to local or state law enforcement for up to 72 hours if there is a reasonable belief that a person’s life or physical safety is in imminent danger. This temporary authorization allows the usual approval channels to be bypassed in life-threatening situations, but it comes with several important restrictions and requirements:

Immediate Action for Life-Threatening Situations:

The directive allows the Defense Intelligence Component head to authorize intelligence assistance without obtaining prior approval from the Under Secretary of Defense for Intelligence and Security (USD(I&S)) or the Secretary of Defense if there is an imminent danger to life or physical safety. This flexibility is designed to allow immediate action in emergency situations where waiting for approval could result in loss of life.
Limited Duration (72 Hours):

The assistance provided under these circumstances can only last for up to 72 hours. Beyond this period, the usual approval process must be followed, requiring the involvement of higher authorities such as the USD(I&S) or the Secretary of Defense (Section 3.5.a).
Reporting Requirements:

Even though immediate action can be taken, the directive requires the Defense Intelligence Component head to report the details of the assistance provided to the USD(I&S) or the Secretary of Defense as soon as possible after the decision is made (Section 3.5.a), ensuring that oversight mechanisms remain in place even when action precedes approval.
No Automatic Reimbursement Waiver:

The directive also makes clear that the intelligence component should not delay or deny assistance solely because the requesting law enforcement agency cannot commit to reimbursing the DoD at the time of the emergency (Section 3.5.b). However, reimbursement is expected unless waived under specific legal provisions.

Key Limitations:
Life-Threatening Situations Only: The exigent circumstances clause only applies in situations where someone's life or physical safety is at immediate risk. It cannot be invoked for less severe circumstances or for general law enforcement cooperation.
Short-Term Authorization: The 72-hour limit ensures that this condition is not used for ongoing or indefinite cooperation with local law enforcement without the proper approvals. Once the crisis is over, the standard processes must be followed.

Summary:
In summary, under exigent circumstances, an intelligence officer has temporary authority to cooperate with local or state law enforcement agencies if there is an imminent threat to life. This decision can be made without prior approval for up to 72 hours, but it must be reported immediately to higher authorities and is subject to oversight. This provision is tightly focused on life-threatening situations and comes with built-in time limits and accountability measures.

You said:
Regarding cases in which the intelligence officer's judgment later proves to have been poor and the conditions are judged not to have been met, what are the penalties for improper application of this policy?
ChatGPT said:
DoD Directive 5240.01 does not itself list penalties for an intelligence officer’s poor judgment or improper application of the “exigent circumstances” provision. However, there are general frameworks in place for addressing violations or improper conduct within the Department of Defense and the intelligence community, particularly regarding misuse of authority or improper intelligence activities. Here is how improper application of this provision would likely be handled:

1. Review and Oversight Mechanisms:
Any assistance provided under exigent circumstances must be reported immediately to the Under Secretary of Defense for Intelligence and Security (USD(I&S)) or the Secretary of Defense (Section 3.5.a). This reporting ensures that higher authorities can quickly review the officer’s judgment. If the assistance is later judged to be unjustified, it is likely to be investigated through established oversight mechanisms.

The Assistant to the Secretary of Defense for Privacy, Civil Liberties, and Transparency (ATSD(PCLT)) and the General Counsel of the Department of Defense (GC DoD) have roles in ensuring that intelligence activities respect privacy rights and legal constraints (Section 2.2). These offices can initiate investigations into questionable or improper activities.

2. Accountability through the Intelligence Oversight Framework:
Improper or questionable intelligence activities, including poor judgment in applying the exigent circumstances provision, must be reported in accordance with DoD Directive 5148.13, which governs intelligence oversight. This directive requires the reporting of any activities that may violate legal or policy standards, including misuse of intelligence powers.

The Inspector General of the Department of Defense and other internal watchdog entities would likely investigate misuse of the exigent circumstances provision. If the activity is found to have been an improper application of the provision, the officer could face internal administrative penalties or legal repercussions, depending on the severity of the violation.

3. Potential Penalties:
Penalties for improper application of this policy would depend on the circumstances, severity of the violation, and whether the officer acted negligently or in good faith. The potential consequences could include:

Administrative Action: This could include reprimands, loss of security clearances, suspension, or even termination. Administrative penalties are common for intelligence officers who violate internal policies, particularly when their actions result in breaches of protocol or legal violations.

Legal Penalties: In cases where the improper use of the exigent circumstances provision results in a violation of laws governing intelligence activities or civil liberties (e.g., the unauthorized collection of information about U.S. persons), the officer could face legal penalties. This could involve criminal charges under U.S. law if the actions constituted willful violations of privacy laws or constitutional protections.

Civil Liabilities: If the improper application of the policy results in harm to an individual’s privacy rights, the affected person could potentially file a civil lawsuit against the government. In such cases, the officer’s actions would be scrutinized, and legal liability could fall on the officer, the chain of command, or the DoD, depending on the findings.

4. "Questionable Activities" Reporting Requirement:
The directive also requires that any “questionable intelligence activities” or “significant or highly sensitive matters” be reported in accordance with DoD Directive 5148.13. If the officer’s decision is found to have violated intelligence procedures, it would trigger reporting to oversight bodies, which could result in disciplinary action or a formal investigation.

5. Good Faith Defense:
If the intelligence officer acted in good faith based on the information available at the time but was later found to have misjudged the situation, the penalties might be less severe. Intelligence and military personnel often receive leeway in emergency situations requiring quick decisions. However, this would depend on whether the officer’s actions were deemed reasonable given the circumstances, and whether the officer followed all other protocols (e.g., reporting promptly, not exceeding the 72-hour limit).

Conclusion:
While DoD Directive 5240.01 does not explicitly outline penalties for improper application of the exigent circumstances condition, violations would likely trigger oversight and investigation from multiple bodies within the DoD. If the intelligence officer is found to have acted improperly, penalties could range from administrative actions (like reprimands or suspension) to legal penalties or civil liabilities, depending on the severity of the violation. The oversight mechanisms and reporting requirements within the directive aim to ensure accountability for decisions made under this provision.

You said:
I can foresee a major issue with the policy on these grounds alone.  Given that most intelligence activity is highly classified and the identities of most intelligence operatives are protected, a US citizen may have no way to know they've been improperly surveilled or targeted, and would certainly not have access to the information required to pursue any sort of remediation.  Additionally, this leaves the agencies to police themselves without any input or even awareness from the citizens they may have harmed or whose rights they may have violated.  Has this problem been addressed within the policy or was it discussed in the debates preceding the policy?
ChatGPT said:
You raise a critical concern regarding the potential for DoD Directive 5240.01 to allow improper surveillance or targeting of U.S. citizens without their knowledge, which could severely limit any ability for remediation. This issue—where intelligence activities are classified, and individuals may have no awareness that they’ve been surveilled or targeted—touches on significant privacy, civil liberties, and oversight challenges. Here's an analysis of whether and how this issue has been addressed within the policy and its surrounding debates:

1. Self-Policing and Lack of External Awareness:
As you noted, DoD Directive 5240.01 includes extensive internal oversight mechanisms, but these are largely within the intelligence community and the Department of Defense. Oversight is primarily handled by internal bodies such as the Assistant to the Secretary of Defense for Privacy, Civil Liberties, and Transparency (ATSD(PCLT)), the General Counsel of the Department of Defense (GC DoD), and the Inspector General of the Department of Defense. These are internal agencies tasked with ensuring compliance and investigating questionable activities, but they are still part of the same broader structure.

From an individual citizen’s perspective, it’s true that many intelligence activities are classified, and thus the targets of surveillance or intelligence operations—whether intentionally or incidentally targeted—may never be aware of the collection or misuse of their data. This creates an imbalance where citizens have no means to seek redress or even know that their rights have been violated.

2. Limited External Oversight and Judicial Recourse:
While the directive emphasizes protection of civil liberties, privacy, and adherence to U.S. law, it contains no explicit provision for external oversight by an independent body representing the interests of private citizens. For example, there is no mention of judicial oversight, such as requiring court warrants or providing mechanisms for third-party review when intelligence assistance is provided to law enforcement.

In contrast, for certain types of surveillance activities (such as those conducted under the Foreign Intelligence Surveillance Act (FISA)), judicial oversight and warrant processes exist. However, for intelligence activities under DoD Directive 5240.01, the primary oversight remains internal to the DoD and the Intelligence Community, without direct external review mechanisms for the general public.

3. National Security vs. Civil Liberties:
The tension between national security interests and civil liberties is a long-standing issue in intelligence policy. After 9/11, debates intensified around how much power should be granted to intelligence and law enforcement agencies to prevent terrorism, often at the expense of privacy. Programs such as the Patriot Act and FISA surveillance were controversial for similar reasons—granting expansive powers to intelligence agencies with relatively limited external accountability.

In the case of DoD Directive 5240.01, the emphasis is clearly on enabling quick, decisive action in scenarios involving national security threats or imminent danger to life. However, this priority comes at the cost of individual transparency and public accountability.

4. Discussion of Privacy and Civil Liberties in the Policy:
Within the policy itself, DoD Directive 5240.01 acknowledges the need to protect privacy and civil liberties, stating that intelligence activities must use the "least intrusive means feasible" and must comply with existing laws (Section 1.2.n). However, this safeguard primarily governs how intelligence is collected and retained within the government; it does not provide a mechanism for individual citizens to be notified of, or to challenge, potential violations of their privacy rights.

While the policy emphasizes internal reporting of “questionable activities” and includes privacy and civil liberties oversight offices, it does not address how individuals who are impacted by improper intelligence activities can seek recourse.

5. Preceding Debates and Lack of Public Discourse:
There was limited public debate or transparency about the specific provisions of DoD Directive 5240.01 before it was issued. Intelligence-related policies are typically discussed and reviewed within classified government settings, and any debate about these powers tends to focus on national security needs rather than individual privacy protections.

The issue of how intelligence agencies police themselves, without the knowledge or involvement of the citizens they may surveil, has been a persistent concern in broader civil liberties discussions. It’s likely that such concerns were considered internally, but national security often takes precedence over individual transparency in these debates. Moreover, due to the classified nature of these discussions, there is often little public awareness or opportunity for input until policies are already in place.

6. Potential for Future Civil Liberties Challenges:
As with previous intelligence policies, it is possible that civil liberties groups, advocacy organizations, or even Congress could raise challenges to policies like DoD Directive 5240.01, particularly if evidence arises of improper use against U.S. citizens. Past policies have faced legal challenges on similar grounds, such as the warrantless wiretapping program during the Bush administration.

In the absence of built-in mechanisms within the directive itself for citizens to seek remediation, it would likely take external legal challenges or legislative action to address these potential gaps in transparency and accountability.

Conclusion:
While DoD Directive 5240.01 provides strong internal oversight of intelligence activities, it does not offer external transparency or a clear mechanism for individuals to learn whether they have been improperly surveilled or targeted. This lack of individual awareness and external input leaves the intelligence community largely policing itself, which can be problematic in cases of overreach or misuse. The directive emphasizes protection of privacy and civil liberties within its operations, but without independent review or a public-facing remediation process, citizens may have limited recourse if their rights are violated. There appears to have been no significant public discussion of this issue in the debates leading up to the policy’s issuance, likely due to the classified nature of intelligence policymaking.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 71948
    • View Profile
The Road to AI Utopia
« Reply #247 on: October 18, 2024, 09:53:21 AM »