Author Topic: Intelligence and Psychology, Artificial Intelligence  (Read 75407 times)


Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72319
    • View Profile
RANE: Western Regulators take aim at ChatGPT
« Reply #151 on: March 31, 2023, 03:36:02 PM »
Western Regulators Take Aim at ChatGPT
8 MIN READ | Mar 31, 2023 | 21:04 GMT


Calls to pause the development of artificial intelligence (AI) chatbots and moves to ban their use over data privacy concerns will likely slow down (but not stop) AI's growth, and they illustrate the regulatory challenges that governments will face as the industry progresses. AI research and deployment company OpenAI, which developed the AI chatbot ChatGPT, hugely popular since its release in November 2022, has come under intense scrutiny in recent days over how quickly it has rolled out new features without seriously considering security, regulatory or ethical concerns. On March 31, the Italian Data Protection Authority (DPA) ordered a temporary ban on ChatGPT, saying it would open an investigation into OpenAI and block the company from processing data from Italian users. On March 30, the U.S.-based Center for AI and Digital Policy filed a complaint with the U.S. Federal Trade Commission (FTC) against GPT-4, the latest version of ChatGPT's underlying large language model (a deep learning algorithm for language), and asked the FTC to ban future releases of the model. On March 29, the Future of Life Institute, a nonprofit organization focused on existential risks to humanity, published an open letter signed by more than 1,800 people calling for OpenAI and other companies to immediately pause all development of AI systems more advanced than GPT-4, warning that "human-competitive intelligence can pose profound risks to society and humanity."

The Italian DPA laid out a number of concerns with ChatGPT and OpenAI, including that OpenAI lacks a legal basis for its "mass collection and storage of personal data...to 'train' the algorithms," and that it does not verify users' ages or include any ways to restrict content for users 13 years old or younger. The order comes after the Italian DPA issued a similar order on the Replika AI chatbot that aims to be a companion AI to its users.

In its complaint, the Center for AI and Digital Policy called GPT-4 "biased, deceptive, and a risk to privacy and public safety." It also said OpenAI and ChatGPT had not fulfilled the FTC's demand for AI models to be "transparent, explainable, fair, and empirically sound while fostering accountability." The center called for the FTC to open an investigation into OpenAI and ChatGPT and to "find that the commercial release of GPT-4 violates Section 5 of the FTC Act," which prohibits unfair and deceptive acts affecting commerce.

SpaceX and Tesla founder Elon Musk, Apple co-founder Steve Wozniak and former U.S. presidential candidate Andrew Yang were among those to sign the open letter calling for a pause in the development of AI systems more advanced than GPT-4.

The Italian DPA's ban on ChatGPT and its investigation into OpenAI illustrate the data privacy challenges that generative AI tools face under the European Union's General Data Protection Regulation (GDPR). The Italian DPA's investigation kicks off a 20-day deadline for OpenAI to communicate to the regulator how it will bring ChatGPT into compliance with GDPR. The investigation could result in a fine of up to 4% of OpenAI's global annual revenue, and any order for OpenAI to change the structure of ChatGPT would set a precedent for the rest of the European Union. ChatGPT and other generative AI models face a number of data privacy challenges that will not be easy to address under data privacy laws like GDPR, which was designed when current AI training models were in their infancy.

Moreover, one of the fundamental characteristics of GDPR is its provision for a "right to be forgotten," which requires organizations to give individuals the ability to request that their personal data be deleted in a timely manner. From a technical standpoint, fulfilling this requirement will be extremely difficult for an AI model trained on open data sets, as there is no good technique currently to untrain a model if a user were to request that their personal data be removed, nor is there a way for OpenAI to identify which pieces of information in its training data set would need to be removed. The right to be forgotten is not universal, and there are constraints on it, but it remains unclear how European regulators will interpret it in the context of generative AI systems like ChatGPT. As the investigation progresses, it may become clear that the current GDPR is insufficient to address concerns in an even-handed way.
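To make the difficulty concrete, here is a minimal sketch (illustrative only, not OpenAI's pipeline; it assumes numpy and scikit-learn are available). For a small classical model, the only reliable way to honor a deletion request is to drop the record and retrain from scratch, which is trivial here but prohibitively expensive for a model like GPT-4 trained on internet-scale data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))            # stand-in training records
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in labels

    model = LogisticRegression().fit(X, y)

    # A user invokes the "right to be forgotten" for record 42. There is no
    # cheap "untrain" operation, so the record is dropped and the model is
    # retrained from scratch on everything else.
    keep = np.arange(len(X)) != 42
    model_after_deletion = LogisticRegression().fit(X[keep], y[keep])

For a generative model trained on scraped text, even the first step, identifying which training records belong to the requester, is an open problem.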

OpenAI trained its model on text scraped from the internet and other forms of media, likely including copyrighted material that OpenAI did not receive permission to use. Other companies that have built AI models on data sets scraped from the internet, such as Clearview AI with its facial recognition tool, have also received enforcement notices from data regulators over their use of that information.

In addition, ChatGPT has had several bugs in recent weeks, including one where users saw other users' input prompts in their own prompt history. That bug violated GDPR and raised cybersecurity questions, particularly because a person's prompts can include sensitive personal information. Similar issues could arise if users were to input controlled or otherwise regulated information.

Potential U.S. regulatory moves against chatbots using large language models are not as advanced as the European Union's, but they will have a more significant impact on the development of AI technology. Most of the West's leading developers of generative AI technologies are U.S.-based, like Google, Meta, Microsoft and OpenAI, which gives U.S. regulators more direct jurisdiction over the companies' actions. But the impact of the request for the FTC to force OpenAI to pause future releases of GPT-4 and beyond is uncertain, as many complaints do not result in FTC action. Nevertheless, public concern about AI could drive the FTC to open an investigation, even if it does not order any pause on AI development. And on March 27, FTC Chair Lina Khan said the commission would make ensuring competition in AI a priority, which may suggest that the commission will be more willing to take up AI-related complaints across the board.

From a data privacy perspective, the regulatory environment in the United States will be complicated for OpenAI and others to navigate since the United States lacks a federal data privacy law and appears unlikely to adopt one any time soon, as Republicans and Democrats largely disagree on its potential focus. While this partially limits the regulatory and legal challenges that OpenAI and other chatbot developers may face at the federal level, several states (including California, where companies like OpenAI and Meta are headquartered) have state data privacy laws modeled on GDPR. It is highly unlikely that OpenAI would segment its services state by state in response to state-level rulings, meaning that determinations in California and elsewhere could have an impact across the United States even in the absence of a federal data privacy law. However, OpenAI and Microsoft, a leading investor in OpenAI, will push back legally and could argue that such state regulations exceed the states' authority.

Like GDPR, the California Consumer Privacy Act includes a provision for the right to be forgotten. This means the challenges of untraining AI models that are trained on open data sets may become a material issue in the United States at the state level.

While European and U.S. regulatory decisions, along with public skepticism toward AI, will slow some of the industry's development, generative AI will likely maintain its rapid growth. Given the widespread innovation that generative AI can bring, even if there are concerns about its impact on the job market, the rapid investment in chatbots that ChatGPT's release kicked off is likely to continue. In fact, rapid advancement and new innovations (such as the recent OpenAI plugin feature that enables ChatGPT to pull information from the internet) will only increase AI tools' utility for corporations. However, uncertainty around the future regulatory environment means that early adopters could find their use of the technology quickly upended by regulatory action, or find themselves in the middle of data privacy and other regulatory challenges if they integrate the technology without proper due diligence and protections. For example, while ChatGPT does not learn from its users' inputs, Google's Bard AI appears to do so, making applications built on such technologies conduits for issues like the right to be forgotten.

Nevertheless, the pace of innovation in AI is being driven by underlying technical capabilities, particularly the continued advancement of the graphics processing units, central processing units and other hardware used to train models on large datasets. That hardware keeps improving alongside advances in semiconductor technology, making it computationally easier to train on ever-larger data sets, which means developers will continue to create more sophisticated AI models as long as investors remain interested. The West's initial regulatory push is unlikely to dampen that interest, absent an unlikely wholesale banning of ChatGPT or other generative AI tools, so generative AI's rapid growth looks set to continue.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72319
    • View Profile
Google AI engineer quits
« Reply #152 on: May 02, 2023, 01:43:53 AM »
https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead
For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.

By Cade Metz, who reported this story in Toronto.

May 1, 2023
Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe are key to their future.

On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.

Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.

But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.

A New Generation of Chatbots
A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today’s powerhouses into has-beens and creating the industry’s next giants. Here are the bots to know:

ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

Bing. Two months after ChatGPT’s debut, Microsoft, OpenAI’s primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot’s occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.

Bard. Google’s chatbot, called Bard, was released in March to a limited number of users in the United States and Britain. Originally conceived as a creative tool designed to draft emails and poems, it can generate ideas, write blog posts and answer questions with facts or opinions.

Ernie. The search giant Baidu unveiled China’s first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised “live” demonstration of the bot was revealed to have been recorded.

After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”

Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.

Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.

Google’s chief scientist, Jeff Dean, said in a statement: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”

Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
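As a toy illustration of that definition (a minimal Python sketch, not Dr. Hinton's actual models): a two-layer network starts with random weights and, by repeatedly adjusting them against data, teaches itself the XOR function.

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)     # hidden layer
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)     # output layer
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):                # learn by gradient descent
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        g_out = (out - y) * out * (1 - out)
        g_h = g_out @ W2.T * h * (1 - h)
        W2 -= 1.0 * h.T @ g_out; b2 -= 1.0 * g_out.sum(axis=0)
        W1 -= 1.0 * X.T @ g_h;   b1 -= 1.0 * g_h.sum(axis=0)

    print(out.round(2))                   # approaches [[0], [1], [1], [0]]

Nothing here was hand-programmed with the rule for XOR; the behavior emerges from the data, which is the idea Hinton bet his career on.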

In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”

In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.

Google spent $44 million to acquire a company started by Dr. Hinton and his two students. And their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.

[Image: Ilya Sutskever, OpenAI’s chief scientist, worked with Dr. Hinton on his research in Toronto. Credit: Jim Wilson/The New York Times]

Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.
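What "learning from huge amounts of digital text" means can be seen in miniature. A toy statistical sketch (my own, not GPT's neural-network architecture): merely counting which word follows which in a training text yields a crude generator, and the systems described here scale that idea up by many orders of magnitude.

    import random
    from collections import defaultdict

    text = "the cat sat on the mat and the dog sat on the rug".split()

    nxt = defaultdict(list)               # word -> observed next words
    for a, b in zip(text, text[1:]):
        nxt[a].append(b)

    word, generated = "the", ["the"]
    for _ in range(8):
        if word not in nxt:               # dead end: no observed successor
            break
        word = random.choice(nxt[word])   # sample a plausible next word
        generated.append(word)
    print(" ".join(generated))

Replace the word counts with a neural network holding billions of parameters and the training text with a large slice of the internet, and the output stops looking crude.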

Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”

As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”

Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.

His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”

Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.

But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.

Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He does not say that anymore.




Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72319
    • View Profile
WSJ: Hollywood Strike and Artificial Intelligence
« Reply #156 on: May 05, 2023, 05:03:44 PM »
In Hollywood Strike, AI Is Nemesis
If the Netflix algorithm is unable to recommend a show we’ll like, maybe it can create one instead.
By Holman W. Jenkins, Jr.
May 5, 2023 5:38 pm ET


In nine scenarios out of 10, it will likely suit the parties to the Hollywood writers’ walkout to kludge together a deal so they can get back to work churning out the shows demanded by streaming outlets.

But oh boy, the 10th scenario is ugly for writers. It forces the industry to confront the artificial-intelligence opportunity to replace much of what writers do, which is already algorithmic.

The subject has been missing from almost every news account, but not from the minds of show-biz execs attending this week’s Milken conference in Los Angeles, who were positively chortling about the opportunity. The BBC figured it out, commissioning ChatGPT to compose plausible opening monologues for the “Tonight Show Starring Jimmy Fallon” and other late-night programs put on pause by the strike.

Understandable is the indignation of journeyman writers. Employment is up, earnings are up in the streaming era, but so is gig-like uncertainty, forcing many to hold down day jobs as other aspiring artists do. But wanting a full-time job plus benefits is not the same as someone having an obligation to confer them on you.

Surprising, in the press coverage, is how many cite their skin color, gender and sexual orientation as if these are bargaining chips. Audiences don’t care. They may be excited by an interesting show about a gender-fluid person of color but they aren’t excited by a boring show written by one.

This attitude does not bode well. I recently asked ChatGPT-3 an esoteric question about which news accounts have published thousands of words: What is the connection between the 1987 murder of private eye Dan Morgan and Britain’s 2011 phone-hacking scandal? OpenAI’s Large Language Model got every particular wrong, starting with a faulty premise. But its answer was also plausible and inventive, which suggests ChatGPT will be suitable for creating TV plots long before it’s suitable for big-league news reporting and analysis.

Transformed might even be the inept recommendation engines used by Netflix and others. If AI can’t find us a show we’ll like, maybe it can create one, and instantly to our specifications once CGI is good enough to provide a fake Tom Hanks along with fake scenery.

Save for another day the question of whether artificial intelligence must be added to the list of technologies that might save us if they don’t kill us.

ChatGPT comes without Freudian derangements. When it “hallucinates,” it does so to manufacture “coherence” from the word evidence it feeds on, its designers tell us. Humans are doing something else when they hallucinate. So a broadcast statement in which Donald Trump explicitly excludes “neo-Nazis or the white nationalists” from the category of “very fine people” becomes, to our algorithmic press, Mr. Trump calling Nazis and racists “very fine people.”

Dubbed the “fine people hoax” by some, it’s closer to the opposite of a hoax—a Freudian admission by the press that it has a new mission and can’t be trusted on the old terms.


Throw in the probability that libel law will remain more generous to flesh-and-blood press misleaders than to robotic ones. AI is likely to find a home first in the fictional realm, where such pitfalls don’t apply.

Meanwhile, the most unsettling revelation of the AI era continues to percolate: We’re the real algorithms. Our tastes, preferences, thoughts and feelings are all too algorithmic most of the time. Collaterally, the most problematic algorithm may be the one in our heads, the one that can’t see ChatGPT outputs for what they are, machine-like arrangements of words simulating human expression. See the now-iconic furor kicked up by reporter Kevin Roose in the New York Times over Bing chat mode’s proposal that he leave his wife.

My own guess is that this problem will be transitional. One day, we’re told, every child will have an AI friend and confidante, but I suspect tomorrow’s kids, from an early age, will also effortlessly interpolate (as we can’t) that AI is a soulless word machine and manage the relationship accordingly.

All this lies in the future. In one consistent pattern of the digital age, a thin layer of Hollywood superstar writers, as creators and owners of the important shows, the ones that impress with their nuance, originality and intelligence, will capture most of the rewards. The eight-figure paychecks will continue to flow. Today’s frisson of solidarity with journeyman colleagues is likely to wear off after a few weeks. After all, the superstars have interesting work to get back to, cultivating their valuable, emotionally resonant and highly investable franchises. And a big job is beckoning: how to adapt artificial intelligence to improve the quality and productivity of their inspirations.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72319
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #160 on: May 16, 2023, 10:37:54 AM »
"Some would suggest that humanity would do well to reconsider the whole AI thing."

A fair thought, but then that would leave only the Chinese with AI.

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #161 on: May 16, 2023, 12:29:31 PM »
"Some would suggest that humanity would do well to reconsider the whole AI thing."

A fair thought, but then that would leave only the Chinese with AI.

And so we continue down the highway…

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72319
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #162 on: May 16, 2023, 12:57:32 PM »
Indeed.

The Adventure continues!




Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72319
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #166 on: May 19, 2023, 09:49:13 AM »
Those of Natural Law (and US Constitution) and the American Creed.

As the article clearly points out, there is considerable cognitive dissonance among those involved about the way things are going.



Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72319
    • View Profile
WT: Tech team assembles to thwart AI dominance
« Reply #169 on: July 10, 2023, 06:11:07 AM »
Tech team assembles to thwart AI mayhem

BY RYAN LOVELACE THE WASHINGTON TIMES

OpenAI is assembling a team to prevent emerging artificial intelligence technology from going rogue and fueling the extinction of humanity, which the company now fears is a real possibility.

The makers of the popular chatbot ChatGPT say AI will power new superintelligence that will help solve the world’s most important problems and be the most consequential technology ever invented by humans.

And yet, OpenAI’s Ilya Sutskever and Jan Leike warned that humans are not prepared to handle technology smarter than they are.

“The vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction,” Mr. Sutskever and Mr. Leike wrote on OpenAI’s blog. “While superintelligence seems far off now, we believe it could arrive this decade.”

If an AI-fueled apocalypse is right around the corner, OpenAI’s brightest minds say they have no plan to stop it.

“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” Mr. Sutskever and Mr. Leike wrote. “Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us.”

Mr. Sutskever, OpenAI co-founder and chief scientist, and Mr. Leike, OpenAI alignment head, said they are assembling a new team of researchers and engineers to help forestall the apocalypse by solving the technical challenges of superintelligence. They’ve given the team four years to complete the task.
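The supervision bottleneck they describe sits at the core of reinforcement learning from human feedback. A minimal sketch of the preference loss commonly used to train reward models (a Bradley-Terry style objective; illustrative only, not OpenAI's code): everything rests on a human reliably knowing which of two answers is better.

    import math

    def preference_loss(score_chosen: float, score_rejected: float) -> float:
        # -log sigmoid(chosen - rejected): small when the reward model
        # agrees with the human ranking, large when it disagrees.
        return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

    # Toy reward-model scores for two candidate answers to the same prompt.
    print(preference_loss(2.0, -1.0))   # ~0.05: model agrees with the human
    print(preference_loss(-1.0, 2.0))   # ~3.05: strong disagreement, big update

If the system being scored is smarter than the human doing the ranking, the labels feeding this loss can no longer be trusted, which is exactly the failure mode the new team is meant to address.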

The potential end of humanity sounds bad, but the OpenAI leaders said they remain hopeful they will solve the problem.

“While this is an incredibly ambitious goal and we’re not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem,” the OpenAI duo wrote. “There are many ideas that have shown promise in preliminary experiments, we have increasingly useful metrics for progress, and we can use today’s models to study many of these problems empirically.”

Mr. Sutskever and Mr. Leike said they intended to share the results of their work widely. They said OpenAI is hiring research engineers, scientists and managers who want to help stop the nerds’ new toys from enslaving or eliminating mankind.

Policymakers in Washington are also fretting about AI danger. Senate Majority Leader Charles E. Schumer, New York Democrat, has called for new rules to govern the technology, and the Senate Judiciary Committee has become a center for hearings on oversight of AI.

The committee’s growing investigation of AI has included examinations of fears that AI may enable cyberattacks, political destabilization and the deployment of weapons of mass destruction.

OpenAI CEO Sam Altman called for regulation when he testified before the Senate Judiciary’s subcommittee on privacy, technology and the law in May. Mr. Altman said he harbored concerns about the potential abuse of AI tools for manipulating people.

Senate Judiciary Chairman Richard J. Durbin has expressed an interest in creating an “accountability regime for AI” to include potential federal and state civil liability for when AI tools do harm.

Big Tech companies such as Google and Microsoft, a benefactor of OpenAI, have also called for new regulation of artificial intelligence, and the federal government is listening.

The Biden administration is busy crafting a national AI strategy that the White House Office of Science and Technology Policy has billed as taking a “whole of society” approach.

Top White House officials have met multiple times per week on AI as the White House chief of staff’s office works to choose the next steps for President Biden to take on AI, a White House official said in June.

OpenAI said Thursday it is making GPT-4, which is its “most capable” AI model, generally available to increase its accessibility to developers.

ccp

  • Power User
  • ***
  • Posts: 19768
    • View Profile
George Will
« Reply #170 on: July 10, 2023, 08:28:54 AM »




Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72319
    • View Profile
Correlation between fall of Rome and decline of intelligence?
« Reply #177 on: October 02, 2023, 06:56:00 AM »

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72319
    • View Profile
GPF: China not a signatory
« Reply #179 on: November 14, 2023, 02:18:52 PM »
Responsible use. Forty-five countries joined a U.S.-led initiative to define responsible military use of artificial intelligence. The declaration, which the U.S. State Department called “groundbreaking,” contains 10 measures that will help mitigate the risks of AI. China was notably absent from the list of signatories.



ccp

  • Power User
  • ***
  • Posts: 19768
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #182 on: December 04, 2023, 10:20:44 AM »
"But the Biden Administration’s biggest concern is that AI models might be applied to hiring by human resource departments, university admissions, or sentencing guidelines for criminals. The EO calls for the “incorporation of equity principles in AI-enabled technologies used in the health and human services sector, using disaggregated data on affected populations and representative population data sets when developing new models, monitoring algorithmic performance against discrimination and bias in existing models, and helping to identify and mitigate discrimination and bias in current systems.”

Is this truly their biggest concern? 

"There are billions of possible variants of accident scenarios, and AI models can’t game them all in advance."

psst AI alone is not as fantastic as promised.

This is where quantum computing comes in.
Mix AI with that and now we will see the exponential revolution.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72319
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #183 on: December 04, 2023, 11:22:50 PM »
psst AI alone is not as fantastic as promised.

This is where quantum computing comes in.
Mix AI with that and now we will see the exponential revolution.
=============

Flesh that out please.

ccp

  • Power User
  • ***
  • Posts: 19768
    • View Profile
Why AI needs Quantum Computing
« Reply #184 on: December 05, 2023, 06:26:15 AM »
https://www.wired.com/story/power-limits-artificial-intelligence/

If I understand this correctly:

AI can still only do the tasks it is programmed for (or work from the data set it is given by humans), and that data is all binary code, 0s and 1s; for example, an electron that spins one way or the other.

A quantum computer would help AI evaluate an essentially unlimited number of possible outcomes at the same moment and come up with the best one.

Instead of 1s and 0s, it has qubits that can be in multiple states at once. It would be able to solve certain problems in minutes that would take billions of years even if all the supercomputers now in existence worked together.

AI would be able to analyze exponentially more data all at the same time, not one trial after another like present computers.

Again, if I understand this correctly:

https://www.forbes.com/sites/forbesbusinessdevelopmentcouncil/2020/10/27/how-can-ai-and-quantum-computers-work-together/?sh=459ab90a6ad1
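A toy way to see the "multiple states at once" idea, as a minimal numpy sketch (a classical simulation of the underlying linear algebra, not real quantum hardware, and no actual quantum speedup is implied):

    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)        # an ordinary bit as a state: |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

    qubit = H @ ket0           # equal superposition of |0> and |1>
    print(qubit)               # [0.707..., 0.707...]

    # n qubits carry 2**n amplitudes at once; here 3 qubits -> 8 amplitudes.
    state = qubit
    for _ in range(2):
        state = np.kron(state, qubit)
    probs = np.abs(state) ** 2
    print(len(state), probs.sum())   # 8 amplitudes, probabilities sum to 1

The exponential growth in amplitudes (2**n for n qubits) is the source of the hoped-for parallelism; the catch, which the Wired piece gets at, is that measurement collapses all of it to a single answer, so algorithms must be cleverly designed to make the right answer likely.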





Body-by-Guinness

  • Power User
  • ***
  • Posts: 3244
    • View Profile
EEG to Text to Speech?
« Reply #187 on: December 17, 2023, 07:13:41 AM »
Cool stuff if you’ve some sort of gross debilitation. Not so much if you are tied to a chair and being sweated by an intelligence agency:

https://singularityhub.com/2023/12/12/this-mind-reading-cap-can-translate-thoughts-to-text-thanks-to-ai/

Body-by-Guinness

  • Power User
  • ***
  • Posts: 3244
    • View Profile
Human Machine Interface Reacts to Speech
« Reply #188 on: December 17, 2023, 09:09:39 AM »


Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72319
    • View Profile
WT
« Reply #190 on: January 02, 2024, 02:53:19 AM »
No opinion yet on my part.
 
==========
AI startups fear Biden regulations will kill chances to compete

Order requires disclosures to government

BY RYAN LOVELACE THE WASHINGTON TIMES

America’s artificial intelligence corporate sector is alive and thriving, but many in the tech trenches are expressing mounting concern that President Biden and his team are out to kill the golden goose in 2024.

AI makers are beginning to grumble about Mr. Biden’s sweeping AI executive order and his administration’s efforts to regulate the emerging technology. The executive order, issued in late October, aims to curtail perceived dangers from the technology by pressuring AI developers to share powerful models’ testing results with the U.S. government and comply with various rules.

Small AI startups say Mr. Biden’s heavy regulatory hand will crush their businesses before they even get off the ground, according to In-Q-Tel, the taxpayer-funded investment group financing technology companies on behalf of America’s intelligence agencies.

China and other U.S. rivals are racing ahead with subsidized AI sectors, striving to claim market dominance in a technology that some say will upend and disrupt companies across the commercial spectrum. Esube Bekele, who oversees In-Q-Tel’s AI investments, said at the GovAI Summit last month that he has heard startups’ concerns that Mr. Biden’s order may create a burden that will disrupt competition and prove so onerous that they could not survive.

“There is a fear from the smaller startups,” Mr. Bekele said on the conference stage. “For instance, in the [executive order], it says after a certain model there is a reporting requirement. Would that be too much?”

The reporting requirements outlined in the executive order say the secretaries of commerce, state, defense and energy, along with the Office of the Director of National Intelligence, will create technical conditions for models and computing clusters on an ongoing basis. The order specifies various levels of computing power that require disclosures until the government issues additional technical guidance.
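As a concrete, and loudly hypothetical, sketch of what such a compute-based reporting trigger looks like in practice: the threshold figure below is an assumption based on widely reported accounts of the order (on the order of 10**26 training operations), not the order's actual, agency-set technical conditions.

    # Hypothetical compliance check. EO_OPS_THRESHOLD is a placeholder for
    # the widely reported 10**26-operation trigger; the real technical
    # conditions are set (and updated) by the agencies named in the order.
    EO_OPS_THRESHOLD = 1e26

    def must_report(training_operations: float) -> bool:
        return training_operations >= EO_OPS_THRESHOLD

    print(must_report(2e25))   # False: below the reporting trigger
    print(must_report(3e26))   # True: disclosure to the government required

A fixed numeric trigger like this is exactly what the startups quoted here fear: it is trivial for an incumbent's compliance team and a potentially fatal overhead for a small lab approaching the line.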

Mr. Biden said “one thing is clear” as he signed the executive order at the White House: “To realize the promise of AI and avoid the risks, we need to govern this technology.” He called the order “the most significant action any government anywhere in the world has ever taken on AI safety, security and trust.”

The R Street Institute’s Adam Thierer said the executive action tees up a turf war within the administration over AI policy leadership. He said he is not sure who will prevail, but that much AI regulation will now be created away from public view.

Mr. Thierer, a senior fellow on R Street’s technology and innovation team, foresees a tsunami of regulatory activity on AI.

“So much AI regulation is going to happen off the books,” Mr. Thierer said. “It’s going to be in the so-called soft-law arena, soft-power area, through the use of jawboning, regulatory intimidation and sometimes just direct threats.”

Concerns across the board

The little guys are not the only ones afraid of Mr. Biden’s regulations and shadow pressure.

Nvidia and other major players have had a glimpse of the Biden administration’s plans and don’t like what they see.

Santa Clara, California-based Nvidia joined the trillion-dollar market capitalization club in May as businesses rushed to acquire its chips for various AI technologies involving medical imaging and robotics.

Yet the Commerce Department’s anticipated restrictions on Nvidia’s sales to China, in light of security concerns, triggered a stock plunge that jeopardized billions of dollars in expected sales.

Commerce Secretary Gina Raimondo knows her team’s restrictions will hurt technology companies but said she is moving ahead because of national security.

Speaking at the Reagan National Defense Forum in California last month, Ms. Raimondo said she told the “cranky” CEOs of chip manufacturers that they might be in for a “tough quarterly shareholder call,” but she added that it would be worth it in the long run.

“Such is life. Protecting our national security matters more than short-term revenue,” Ms. Raimondo said.

Reflecting rising divisions in the exploding marketplace, some technology companies support new regulations. They say they prefer bright legal lines on what they can and cannot do with AI.

Large companies such as Microsoft and Google called for regulations and met with Mr. Biden’s team before the executive order’s release. Microsoft has pushed for a federal agency to regulate AI.

Several leading AI companies voluntarily committed to Mr. Biden’s team to develop and deploy the emerging technology responsibly. Some analysts say the established, well-heeled corporate giants may be better positioned to handle the coming regulatory crush than their smaller, startup rivals.

“The impulse toward early regulation of AI technology may also favor large, well-capitalized companies,” Eric Sheridan, a senior equity research analyst at investment banking giant Goldman Sachs, wrote in a recent analysis.

“Regulation typically comes with higher costs and higher barriers to entry,” he said, and “the larger technology companies can absorb the costs of building these large language models, afford some of these computing costs, as well as comply with regulation.”

Concerns are growing across the AI sector that Mr. Biden’s appointees will look to privately enforce the voluntary agreements.

Mr. Thierer said implicit threats of regulation represent a “sword of Damocles.” The approach has been used as the dominant form of indirect regulation in other tech sectors, including telecommunications.

“The key thing about a sword of Damocles regulation is that the sword need not fall to do the damage. It need only hang in the room,” Mr. Thierer said. “If a sword is hanging above your neck and you fear negative blowback from no less of an authority than the president of the United States … you’re probably going to fall in line with whatever they want.”

Mr. Thierer said he expects shakedown tactics and jawboning efforts to be the governing order for AI in the near term, especially given what he called dysfunction among policymakers in Washington.

ccp

  • Power User
  • ***
  • Posts: 19768
    • View Profile
First Qubit at room temperature
« Reply #193 on: January 17, 2024, 06:13:40 PM »
First STABLE qubit at room temperature, only for a fraction of a second, but a *monumental* baby step:

https://www.iflscience.com/world-first-as-stable-qubit-for-quantum-computers-achieved-at-room-temperature-72502

Perhaps not a good analogy, but I am thinking:

One of the first major breakthroughs in electricity occurred in 1831, when British scientist Michael Faraday discovered the basic principles of electricity generation. Building on the experiments of Franklin and others, he observed that he could create or "induce" electric current by moving magnets inside coils of copper wire. The discovery of electromagnetic induction revolutionized how we use energy.

https://en.wikipedia.org/wiki/Michael_Faraday


Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72319
    • View Profile
Can Artificial Intelligence have agency
« Reply #195 on: January 21, 2024, 05:32:50 AM »
HT BBG


Perhaps not quite constitutional law, but given the question of agency this case will be interesting to track:

https://reason.com/volokh/2024/01/17/court-lets-first-ai-libel-case-go-forward/

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72319
    • View Profile
AI article caught supplanting the real thing from which it stole
« Reply #196 on: January 25, 2024, 05:42:58 AM »
The natural human preference to rely on heuristics means that AI is a deeply insidious threat.  Skynet anyone?

https://nypost.com/2024/01/22/business/google-news-searches-ranked-ai-generated-ripoffs-above-real-articles-including-a-post-exclusive/


Body-by-Guinness

  • Power User
  • ***
  • Posts: 3244
    • View Profile
AI (Artificial Ignorance) and Eureka Moments
« Reply #198 on: January 29, 2024, 10:25:18 AM »
Interesting thought piece with several elements worth mulling, like the point that the amazing thing about the temperature record is not its minor rise, but that temps have fluctuated so little over the timeframes for which we have data. This realization leads to a pretty damning indictment of AI's ability to have flashes of insight:

https://wattsupwiththat.com/2024/01/29/more-about-artificial-ignorance/

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72319
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #199 on: January 30, 2024, 06:26:17 AM »
That was very interesting-- for me about global temperature more than AI  :-D