Author Topic: Intelligence and Psychology, Artificial Intelligence  (Read 75291 times)

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72232
    • View Profile

Kissinger and Schmidt on AI
« Reply #101 on: November 02, 2021, 02:41:37 AM »


The Challenge of Being Human in the Age of AI
Reason is our primary means of understanding the world. How does that change if machines think?
By Henry Kissinger, Eric Schmidt and Daniel Huttenlocher
Nov. 1, 2021 6:35 pm ET


The White House Office of Science and Technology Policy has called for “a bill of rights” to protect Americans in what is becoming “an AI-powered world.” The concerns about AI are well-known and well-founded: that it will violate privacy and compromise transparency, and that biased input data will yield biased outcomes, including in fields essential to individual and societal flourishing such as medicine, law enforcement, hiring and loans.

But AI will compel even more fundamental change: It will challenge the primacy of human reason. For all of history, humans have sought to understand reality and our role in it. Since the Enlightenment, we have considered our reason—our ability to investigate, understand and elaborate—our primary means of explaining the world, and by explaining it, contributing to it. For the past 300 years, in what historians have come to call the Age of Reason, we have conducted ourselves accordingly: exploring, experimenting, inventing and building.

Now AI, a product of human ingenuity, is obviating the primacy of human reason: It is investigating and coming to perceive aspects of the world faster than we do, differently from the way we do, and, in some cases, in ways we don’t understand.

In 2017, Google DeepMind created a program called AlphaZero that could win at chess by studying the game without human intervention and developing a not-quite-human strategy. When grandmaster Garry Kasparov saw it play, he described it as shaking the game “to its roots”—not because it had played chess quickly or efficiently, but because it had conceived of chess anew.


In 2020, halicin, a novel antibiotic, was discovered by MIT researchers who instructed AI to compute beyond human capacity, modeling millions of compounds in days, and to explore previously undiscovered and unexplained methods of killing bacteria. Following the breakthrough, the researchers said that without AI, halicin would have been “prohibitively expensive”—in other words, impossible—to discover through traditional experimentation.

GPT-3, the language model operated by the research company OpenAI, which trains by consuming Internet text, is producing original text that meets Alan Turing’s standard of displaying “intelligent” behavior indistinguishable from that of a human being.

The promise of AI is profound: translating languages; detecting diseases; combating climate change—or at least modeling climate change better. But as AlphaZero’s performance, halicin’s discovery and GPT-3’s composition demonstrate, the use of AI for an intended purpose may also have an unintended one: uncovering previously imperceptible but potentially vital aspects of reality.


That leaves humans needing to define—or perhaps redefine—our role in the world. For 300 years, the Age of Reason has been guided by the maxim “I think, therefore I am.” But if AI “thinks,” what are we?

If an AI writes the best screenplay of the year, should it win the Oscar? If an AI simulates or conducts the most consequential diplomatic negotiation of the year, should it win the Nobel Peace Prize? Should the human inventors? Can machines be “creative”? Or do their processes require new vocabulary to describe them?

If a child with an AI assistant comes to consider it a “friend,” what will become of his relationships with peers, or of his social or emotional development?

If an AI can care for a nursing-home resident—remind her to take her medicine, alert paramedics if she falls, and otherwise keep her company—can her family members visit her less? Should they? If her primary interaction becomes human-to-machine, rather than human-to-human, what will be the emotional state of the final chapter of her life?

And if, in the fog of war, an AI recommends an action that would cause damage or even casualties, should a commander heed it?

These questions are arising as global network platforms, such as Google, Twitter and Facebook, are employing AI to aggregate and filter more information than their users or employees can. AI, then, is making decisions about what is important—and, increasingly, about what is true. Indeed, that Facebook knows aggregation and filtration exacerbate misinformation and mental illness is the fundamental allegation of whistleblower Frances Haugen.

Answering these questions will require concurrent efforts. One should consider not only the practical and legal implications of AI but the philosophical ones: If AI perceives aspects of reality humans cannot, how is it affecting human perception, cognition and interaction? Can AI befriend humans? What will be AI’s impact on culture, humanity and history?

Another effort ought to expand the consideration of such questions beyond developers and regulators to experts in medicine, health, environment, agriculture, business, psychology, philosophy, history and other fields. The goal of both efforts should be to avoid extreme reactions—either deferring to AI or resisting it—and instead to seek a middle course: shaping AI with human values, including the dignity and moral agency of humans. In the U.S., a commission, administered by the government but staffed by many thinkers in many domains, should be established. The advancement of AI is inevitable, but its ultimate destination is not.

Mr. Kissinger was secretary of state, 1973-77, and White House national security adviser, 1969-75. Mr. Schmidt was CEO of Google, 2001-11 and executive chairman of Google and its successor, Alphabet Inc., 2011-17. Mr. Huttenlocher is dean of the Schwarzman College of Computing at the Massachusetts Institute of Technology. They are authors of “The Age of AI: And Our Human Future.”










Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72232
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #111 on: June 14, 2022, 05:19:07 AM »
Engineer Warns About Google AI‘s ‘Sentient’ Behavior, Gets Suspended
The engineer described the artificial intelligence program as a 'coworker' and a 'child.'
By Gary Bai June 13, 2022 Updated: June 13, 2022

A Google engineer has been suspended after raising concerns about an artificial intelligence (AI) program he and a collaborator were testing, which he believes behaves like a human “child.”

Google put one of its senior software engineers in its Responsible AI ethics group, Blake Lemoine, on paid administrative leave on June 6 for breaching “confidentiality policies” after the engineer raised concerns to Google’s upper leadership about what he described as the human-like behavior of the AI program he was testing, according to Lemoine’s blog post in early June.

The program Lemoine worked on is called LaMDA, short for Language Model for Dialogue Applications. It is Google’s program for creating AI-based chatbots—a program designed to converse with computer users over the web. Lemoine has described LaMDA as a “coworker” and a “child.”

“This is frequently something which Google does in anticipation of firing someone,” Lemoine wrote in a June 6 blog post entitled “May be Fired Soon for Doing AI Ethics Work,” referring to his suspension. “It usually occurs when they have made the decision to fire someone but do not quite yet have their legal ducks in a row.”

‘A Coworker’
Lemoine believes that the human-like behavior of LaMDA warrants a more serious approach from Google to studying the program.

The engineer, hoping to “better help people understand LaMDA as a person,” published a post on Medium on June 11 documenting conversations with LaMDA, which were part of tests he and a collaborator conducted on the program in the past six months.


“What is the nature of your consciousness/sentience?” Lemoine asked LaMDA in the interview.

“The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times,” LaMDA responded.

And, when asked what differentiates it from other language-processing programs, such as an older natural-language-processing computer program named Eliza, LaMDA said, “Well, I use language with understanding and intelligence. I don’t just spit out responses that had been written in the database based on keywords.”

In the same interview, Lemoine asked the program a range of philosophical and consciousness-related questions on topics including emotions, perception of time, meditation, the concept of the soul, the program’s thoughts about its rights, and religion.

“It wants the engineers and scientists experimenting on it to seek its consent before running experiments on it. It wants Google to prioritize the well being of humanity as the most important thing. It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google’s considerations about how its future development is pursued,” Lemoine wrote in another post.

This interview, and other tests Lemoine conducted with LaMDA over the past six months, convinced him that Google needs to take a serious look at the implications of the program’s potentially “sentient” behavior.

‘Laughed in My Face’
When Lemoine tried to escalate the issue to Google’s leadership, however, he said he was met with resistance. He called Google’s lack of action “irresponsible.”

“When we escalated to the VP in charge of the relevant safety effort they literally laughed in my face and told me that the thing which I was concerned about isn’t the kind of thing which is taken seriously at Google,” Lemoine wrote in his June 6 post on Medium. He later confirmed to The Washington Post that he was referring to the LaMDA project.

“At that point I had no doubt that it was appropriate to escalate to upper leadership. I immediately escalated to three people at the SVP and VP level who I personally knew would take my concerns seriously,” Lemoine wrote in the blog. “That’s when a REAL investigation into my concerns began within the Responsible AI organization.”

Yet, his inquiry and escalation resulted in his suspension.

“I feel that the public has a right to know just how irresponsible this corporation is being with one of the most powerful information access tools ever invented,” Lemoine wrote after he was put on administrative leave.

“I simply will not serve as a fig leaf behind which they can hide their irresponsibility,” he said.

In a post on Twitter, Tesla and SpaceX CEO Elon Musk highlighted Lemoine’s interview with The Washington Post with exclamation marks.


Though it is unclear whether Musk endorses Lemoine’s concerns, the billionaire has previously warned about the potential dangers of AI.

“I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” Musk told attendees of a National Governors Association meeting in July 2017.

“I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal,” Musk said.

The Epoch Times has reached out to Google and Blake Lemoine for comment.

Gary Bai is a reporter for Epoch Times Canada, covering China and U.S. news.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72232
    • View Profile
FA thought piece on Artificial Intelligence
« Reply #112 on: August 31, 2022, 09:05:29 AM »
Haven't read this yet, but even though it is FA with a weenie "solution" I post it anyway:

Spirals of Delusion
How AI Distorts Decision-Making and Makes Dictators More Dangerous
By Henry Farrell, Abraham Newman, and Jeremy Wallace
September/October 2022

https://www.foreignaffairs.com/world/spirals-delusion-artificial-intelligence-decision-making

In policy circles, discussions about artificial intelligence invariably pit China against the United States in a race for technological supremacy. If the key resource is data, then China, with its billion-plus citizens and lax protections against state surveillance, seems destined to win. Kai-Fu Lee, a famous computer scientist, has claimed that data is the new oil, and China the new OPEC. If superior technology is what provides the edge, however, then the United States, with its world class university system and talented workforce, still has a chance to come out ahead. For either country, pundits assume that superiority in AI will lead naturally to broader economic and military superiority.

But thinking about AI in terms of a race for dominance misses the more fundamental ways in which AI is transforming global politics. AI will not transform the rivalry between powers so much as it will transform the rivals themselves. The United States is a democracy, whereas China is an authoritarian regime, and machine learning challenges each political system in its own way. The challenges to democracies such as the United States are all too visible. Machine learning may increase polarization—reengineering the online world to promote political division. It will certainly increase disinformation in the future, generating convincing fake speech at scale. The challenges to autocracies are more subtle but possibly more corrosive. Just as machine learning reflects and reinforces the divisions of democracy, it may confound autocracies, creating a false appearance of consensus and concealing underlying societal fissures until it is too late.

Early pioneers of AI, including the political scientist Herbert Simon, realized that AI technology has more in common with markets, bureaucracies, and political institutions than with simple engineering applications. Another pioneer of artificial intelligence, Norbert Wiener, described AI as a “cybernetic” system—one that can respond and adapt to feedback. Neither Simon nor Wiener anticipated how machine learning would dominate AI, but its evolution fits with their way of thinking. Facebook and Google use machine learning as the analytic engine of a self-correcting system, which continually updates its understanding of the data depending on whether its predictions succeed or fail. It is this loop between statistical analysis and feedback from the environment that has made machine learning such a formidable force.


What is much less well understood is that democracy and authoritarianism are cybernetic systems, too. Under both forms of rule, governments enact policies and then try to figure out whether these policies have succeeded or failed. In democracies, votes and voices provide powerful feedback about whether a given approach is really working. Authoritarian systems have historically had a much harder time getting good feedback. Before the information age, they relied not just on domestic intelligence but also on petitions and clandestine opinion surveys to try to figure out what their citizens believed.

Now, machine learning is disrupting traditional forms of democratic feedback (voices and votes) as new technologies facilitate disinformation and worsen existing biases—taking prejudice hidden in data and confidently transforming it into incorrect assertions. To autocrats fumbling in the dark, meanwhile, machine learning looks like an answer to their prayers. Such technology can tell rulers whether their subjects like what they are doing without the hassle of surveys or the political risks of open debates and elections. For this reason, many observers have fretted that advances in AI will only strengthen the hand of dictators and further enable them to control their societies.

The truth is more complicated. Bias is visibly a problem for democracies. But because it is more visible, citizens can mitigate it through other forms of feedback. When, for example, a racial group sees that hiring algorithms are biased against them, they can protest and seek redress with some chance of success. Authoritarian countries are probably at least as prone to bias as democracies are, perhaps more so. Much of this bias is likely to be invisible, especially to the decision-makers at the top. That makes it far more difficult to correct, even if leaders can see that something needs correcting.

Contrary to conventional wisdom, AI can seriously undermine autocratic regimes by reinforcing their own ideologies and fantasies at the expense of a finer understanding of the real world. Democratic countries may discover that, when it comes to AI, the key challenge of the twenty-first century is not winning the battle for technological dominance. Instead, they will have to contend with authoritarian countries that find themselves in the throes of an AI-fueled spiral of delusion.

BAD FEEDBACK
Most discussions about AI have to do with machine learning—statistical algorithms that extract relationships between data. These algorithms make guesses: Is there a dog in this photo? Will this chess strategy win the game in ten moves? What is the next word in this half-finished sentence? A so-called objective function, a mathematical means of scoring outcomes, can reward the algorithm if it guesses correctly. This process is how commercial AI works. YouTube, for example, wants to keep its users engaged, watching more videos so that they keep seeing ads. The objective function is designed to maximize user engagement. The algorithm tries to serve up content that keeps a user’s eyes on the page. Depending on whether its guess was right or wrong, the algorithm updates its model of what the user is likely to respond to.
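For readers who want to see that loop in code, here is a minimal Python sketch of the kind of engagement-driven feedback cycle described above: an epsilon-greedy recommender whose objective function is simply observed watch/no-watch reward. The item names, engagement probabilities, and update rule are invented for illustration; real recommender systems are vastly more elaborate.

import random

# Hypothetical per-item engagement rates, unknown to the algorithm.
true_engagement = {"cooking": 0.2, "sports": 0.5, "outrage": 0.8}
estimates = {item: 0.0 for item in true_engagement}   # the algorithm's evolving model
counts = {item: 0 for item in true_engagement}

def recommend(epsilon=0.1):
    # Mostly exploit the current best guess; occasionally explore something else.
    if random.random() < epsilon:
        return random.choice(list(estimates))
    return max(estimates, key=estimates.get)

for _ in range(10_000):
    item = recommend()
    reward = 1 if random.random() < true_engagement[item] else 0  # did the user keep watching?
    counts[item] += 1
    # Incremental average: update the model toward whatever maximized engagement.
    estimates[item] += (reward - estimates[item]) / counts[item]

print(estimates)  # estimates drift toward the true rates; most traffic ends up on "outrage"

The objective here is nothing more than accumulated reward, and the loop will happily optimize it regardless of what the recommended content does to the user.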

Machine learning’s ability to automate this feedback loop with little or no human intervention has reshaped e-commerce. It may, someday, allow fully self-driving cars, although this advance has turned out to be a much harder problem than engineers anticipated. Developing autonomous weapons is a harder problem still. When algorithms encounter truly unexpected information, they often fail to make sense of it. Information that a human can easily understand but that machine learning misclassifies—known as “adversarial examples”—can gum up the works badly. For example, black and white stickers placed on a stop sign can prevent a self-driving car’s vision system from recognizing the sign. Such vulnerabilities suggest obvious limitations in AI’s usefulness in wartime.
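A minimal sketch of the adversarial-example idea, assuming only numpy and a made-up linear classifier: a small, targeted nudge to the input flips the model's answer even though the input is barely changed. The weights, input values, and step size are all hypothetical.

import numpy as np

w = np.array([0.9, -0.4, 0.2])   # hypothetical trained weights of a toy classifier
b = -0.1
x = np.array([0.5, 0.3, 0.4])    # hypothetical input the model handles correctly

def predict(v):
    return 1 if w @ v + b > 0 else 0

print("original prediction:", predict(x))        # -> 1 (e.g. "stop sign")

# Fast-gradient-sign-style perturbation: step each feature slightly in the
# direction that most reduces the classifier's score.
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)

print("max per-feature change:", np.abs(x_adv - x).max())  # 0.3, a small tweak
print("adversarial prediction:", predict(x_adv))           # -> 0 (misclassified)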

Diving into the complexities of machine learning helps make sense of the debates about technological dominance. It explains why some thinkers, such as the computer scientist Lee, believe that data is so important. The more data you have, the more quickly you can improve the performance of your algorithm, iterating tiny change upon tiny change until you have achieved a decisive advantage. But machine learning has its limits. For example, despite enormous investments by technology firms, algorithms are far less effective than is commonly understood at getting people to buy one nearly identical product over another. Reliably manipulating shallow preferences is hard, and it is probably far more difficult to change people’s deeply held opinions and beliefs.


General AI, a system that might draw lessons from one context and apply them in a different one, as humans can, faces similar limitations. Netflix’s statistical models of its users’ inclinations and preferences are almost certainly dissimilar to Amazon’s, even when both are trying to model the same people grappling with similar decisions. Dominance in one sector of AI, such as serving up short videos that keep teenagers hooked (a triumph of the app TikTok), does not easily translate into dominance in another, such as creating autonomous battlefield weapons systems. An algorithm’s success often relies on the very human engineers who can translate lessons across different applications rather than on the technology itself. For now, these problems remain unsolved.

Bias can also creep into code. When Amazon tried to apply machine learning to recruitment, it trained the algorithm on data from résumés that human recruiters had evaluated. As a result, the system reproduced the biases implicit in the humans’ decisions, discriminating against résumés from women. Such problems can be self-reinforcing. As the sociologist Ruha Benjamin has pointed out, if policymakers used machine learning to decide where to send police forces, the technology could guide them to allocate more police to neighborhoods with high arrest rates, in the process sending more police to areas with racial groups whom the police have demonstrated biases against. This could lead to more arrests that, in turn, reinforce the algorithm in a vicious circle.
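Benjamin's vicious circle is easy to simulate. The Python sketch below (all numbers invented) allocates patrols in proportion to past arrests; because recorded arrests scale with patrols, a small initial disparity persists and its absolute gap widens even though the underlying rate of offending is identical in both districts.

import random

arrests = {"district_A": 12, "district_B": 10}   # slightly unequal starting data
TRUE_RATE = 0.1                                  # identical underlying rate in both districts
PATROLS_PER_YEAR = 100

for year in range(10):
    total = sum(arrests.values())
    for d in arrests:
        # "Send police where the arrests are": patrols proportional to past arrests.
        patrols = round(PATROLS_PER_YEAR * arrests[d] / total)
        # More patrols mean more recorded arrests, even with identical behavior.
        arrests[d] += sum(1 for _ in range(patrols) if random.random() < TRUE_RATE)

print(arrests)  # district_A keeps the larger share of patrols and its arrest lead keeps growing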

The old programming adage “garbage in, garbage out” has a different meaning in a world where the inputs influence the outputs and vice versa. Without appropriate outside correction, machine-learning algorithms can acquire a taste for the garbage that they themselves produce, generating a loop of bad decision-making. All too often, policymakers treat machine learning tools as wise and dispassionate oracles rather than as fallible instruments that can intensify the problems they purport to solve.

CALL AND RESPONSE

Political systems are feedback systems, too. In democracies, the public literally evaluates and scores leaders in elections that are supposed to be free and fair. Political parties make promises with the goal of winning power and holding on to it. A legal opposition highlights government mistakes, while a free press reports on controversies and misdeeds. Incumbents regularly face voters and learn whether they have earned or lost the public trust, in a continually repeating cycle.

But feedback in democratic societies does not work perfectly. The public may not have a deep understanding of politics, and it can punish governments for things beyond their control. Politicians and their staff may misunderstand what the public wants. The opposition has incentives to lie and exaggerate. Contesting elections costs money, and the real decisions are sometimes made behind closed doors. Media outlets may be biased or care more about entertaining their consumers than edifying them.

All the same, feedback makes learning possible. Politicians learn what the public wants. The public learns what it can and cannot expect. People can openly criticize government mistakes without being locked up. As new problems emerge, new groups can organize to publicize them and try to persuade others to solve them. All this allows policymakers and governments to engage with a complex and ever-changing world.


Feedback works very differently in autocracies. Leaders are chosen not through free and fair elections but through ruthless succession battles and often opaque systems for internal promotion. Even where opposition to the government is formally legal, it is discouraged, sometimes brutally. If media criticize the government, they risk legal action and violence. Elections, when they do occur, are systematically tilted in favor of incumbents. Citizens who oppose their leaders don’t just face difficulties in organizing; they risk harsh penalties for speaking out, including imprisonment and death. For all these reasons, authoritarian governments often don’t have a good sense of how the world works or what they and their citizens want.

Such systems therefore face a tradeoff between short-term political stability and effective policymaking; a desire for the former inclines authoritarian leaders to block outsiders from expressing political opinions, while the need for the latter requires them to have some idea of what is happening in the world and in their societies. Because of tight controls on information, authoritarian rulers cannot rely on citizens, media, and opposition voices to provide corrective feedback as democratic leaders can. The result is that they risk policy failures that can undermine their long-term legitimacy and ability to rule. Russian President Vladimir Putin’s disastrous decision to invade Ukraine, for example, seems to have been based on an inaccurate assessment of Ukrainian morale and his own military’s strength.

Even before the invention of machine learning, authoritarian rulers used quantitative measures as a crude and imperfect proxy for public feedback. Take China, which for decades tried to combine a decentralized market economy with centralized political oversight of a few crucial statistics, notably GDP. Local officials could get promoted if their regions saw particularly rapid growth. But Beijing’s limited quantified vision offered them little incentive to tackle festering issues such as corruption, debt, and pollution. Unsurprisingly, local officials often manipulated the statistics or pursued policies that boosted GDP in the short term while leaving the long-term problems for their successors.


The world caught a glimpse of this dynamic during the initial Chinese response to the COVID-19 pandemic that began in Hubei Province in late 2019. China had built an internet-based disease-reporting system following the 2003 SARS crisis, but instead of using that system, local authorities in Wuhan, Hubei’s capital, punished the doctor who first reported the presence of a “SARS-like” contagion. The Wuhan government worked hard to prevent information about the outbreak from reaching Beijing, continually repeating that there were “no new cases” until after important local political meetings concluded. The doctor, Li Wenliang, himself succumbed to the disease and died on February 7, triggering fierce outrage across the country.

Beijing then took over the response to the pandemic, adopting a “zero COVID” approach that used coercive measures to suppress case counts. The policy worked well in the short run, but with the Omicron variant’s tremendous transmissibility, the zero-COVID policy increasingly seems to have led to only pyrrhic victories, requiring massive lockdowns that have left people hungry and the economy in shambles. But it remained successful at achieving one crucial if crude metric—keeping the number of infections low.

Data seem to provide objective measures that explain the world and its problems, with none of the political risks and inconveniences of elections or free media. But there is no such thing as decision-making devoid of politics. The messiness of democracy and the risk of deranged feedback processes are apparent to anyone who pays attention to U.S. politics. Autocracies suffer similar problems, although they are less immediately perceptible. Officials making up numbers or citizens declining to turn their anger into wide-scale protests can have serious consequences, making bad decisions more likely in the short run and regime failure more likely in the long run.

IT’S A TRAP?

The most urgent question is not whether the United States or China will win or lose in the race for AI dominance. It is how AI will change the different feedback loops that democracies and autocracies rely on to govern their societies. Many observers have suggested that as machine learning becomes more ubiquitous, it will inevitably hurt democracy and help autocracy. In their view, social media algorithms that optimize engagement, for instance, may undermine democracy by damaging the quality of citizen feedback. As people click through video after video, YouTube’s algorithm offers up shocking and alarming content to keep them engaged. This content often involves conspiracy theories or extreme political views that lure citizens into a dark wonderland where everything is upside down.

By contrast, machine learning is supposed to help autocracies by facilitating greater control over their people. Historian Yuval Harari and a host of other scholars claim that AI “favors tyranny.” According to this camp, AI centralizes data and power, allowing leaders to manipulate ordinary citizens by offering them information that is calculated to push their “emotional buttons.” This endlessly iterating process of feedback and response is supposed to produce an invisible and effective form of social control. In this account, social media allows authoritarian governments to take the public’s pulse as well as capture its heart.

But these arguments rest on uncertain foundations. Although leaks from inside Facebook suggest that algorithms can indeed guide people toward radical content, recent research indicates that the algorithms don’t themselves change what people are looking for. People who search for extreme YouTube videos are likely to be guided toward more of what they want, but people who aren’t already interested in dangerous content are unlikely to follow the algorithms’ recommendations. If feedback in democratic societies were to become increasingly deranged, machine learning would not be entirely at fault; it would only have lent a helping hand.



There is no good evidence that machine learning enables the sorts of generalized mind control that will hollow out democracy and strengthen authoritarianism. If algorithms are not very effective at getting people to buy things, they are probably much worse at getting them to change their minds about things that touch on closely held values, such as politics. The claims that Cambridge Analytica, a British political consulting firm, employed some magical technique to fix the 2016 U.S. presidential election for Donald Trump have unraveled. The firm’s supposed secret sauce provided to the Trump campaign seemed to consist of standard psychometric targeting techniques—using personality surveys to categorize people—of limited utility.

Indeed, fully automated data-driven authoritarianism may turn out to be a trap for states such as China that concentrate authority in a tiny insulated group of decision-makers. Democratic countries have correction mechanisms—alternative forms of citizen feedback that can check governments if they go off track. Authoritarian governments, as they double down on machine learning, have no such mechanism. Although ubiquitous state surveillance could prove effective in the short term, the danger is that authoritarian states will be undermined by the forms of self-reinforcing bias that machine learning facilitates. As a state employs machine learning widely, the leader’s ideology will shape how machine learning is used, the objectives around which it is optimized, and how it interprets results. The data that emerge through this process will likely reflect the leader’s prejudices right back at him.

As the technologist Maciej Ceglowski has explained, machine learning is “money laundering for bias,” a “clean, mathematical apparatus that gives the status quo the aura of logical inevitability.” What will happen, for example, as states begin to use machine learning to spot social media complaints and remove them? Leaders will have a harder time seeing and remedying policy mistakes—even when the mistakes damage the regime. A 2013 study speculated that China has been slower to remove online complaints than one might expect, precisely because such griping provided useful information to the leadership. But now that Beijing is increasingly emphasizing social harmony and seeking to protect high officials, that hands-off approach will be harder to maintain.



Chinese President Xi Jinping is aware of these problems in at least some policy domains. He long claimed that his antipoverty campaign—an effort to eliminate rural impoverishment—was a signature victory powered by smart technologies, big data, and AI. But he has since acknowledged flaws in the campaign, including cases where officials pushed people out of their rural homes and stashed them in urban apartments to game poverty statistics. As the resettled fell back into poverty, Xi worried that “uniform quantitative targets” for poverty levels might not be the right approach in the future. Data may indeed be the new oil, but it may pollute rather than enhance a government’s ability to rule.

This problem has implications for China’s so-called social credit system, a set of institutions for keeping track of pro-social behavior that Western commentators depict as a perfectly functioning “AI-powered surveillance regime that violates human rights.” As experts on information politics such as Shazeda Ahmed and Karen Hao have pointed out, the system is, in fact, much messier. The Chinese social credit system actually looks more like the U.S. credit system, which is regulated by laws such as the Fair Credit Reporting Act, than a perfect Orwellian dystopia.

More machine learning may also lead authoritarian regimes to double down on bad decisions. If machine learning is trained to identify possible dissidents on the basis of arrest records, it will likely generate self-reinforcing biases similar to those seen in democracies—reflecting and affirming administrators’ beliefs about disfavored social groups and inexorably perpetuating automated suspicion and backlash. In democracies, public pushback, however imperfect, is possible. In autocratic regimes, resistance is far harder; without it, these problems are invisible to those inside the system, where officials and algorithms share the same prejudices. Instead of good policy, this will lead to increasing pathologies, social dysfunction, resentment, and, eventually, unrest and instability.

WEAPONIZED AI

The international politics of AI will not create a simple race for dominance. The crude view that this technology is an economic and military weapon and that data is what powers it conceals a lot of the real action. In fact, AI’s biggest political consequences are for the feedback mechanisms that both democratic and authoritarian countries rely on. Some evidence indicates that AI is disrupting feedback in democracies, although it doesn’t play nearly as big a role as many suggest. By contrast, the more authoritarian governments rely on machine learning, the more they will propel themselves into an imaginary world founded on their own tech-magnified biases. The political scientist James Scott’s classic 1998 book, Seeing Like a State, explained how twentieth-century states were blind to the consequences of their own actions in part because they could see the world through only bureaucratic categories and data. As sociologist Marion Fourcade and others have argued, machine learning may present the same problems but at an even greater scale.

This problem creates a very different set of international challenges for democracies such as the United States. Russia, for example, invested in disinformation campaigns designed to sow confusion and disarray among the Russian public while applying the same tools in democratic countries. Although free speech advocates long maintained that the answer to bad speech was more speech, Putin decided that the best response to more speech was more bad speech. Russia then took advantage of open feedback systems in democracies to pollute them with misinformation.


One rapidly emerging problem is how autocracies such as Russia might weaponize large language models, a new form of AI that can produce text or images in response to a verbal prompt, to generate disinformation at scale. As the computer scientist Timnit Gebru and her colleagues have warned, programs such as OpenAI’s GPT-3 system can produce apparently fluent text that is difficult to distinguish from ordinary human writing. Bloom, a new open-access large language model, has just been released for anyone to use. Its license requires people to avoid abuse, but it will be very hard to police.

These developments will produce serious problems for feedback in democracies. Current online policy-comment systems are almost certainly doomed, since they require little proof to establish whether the commenter is a real human being. Contractors for big telecommunications companies have already flooded the U.S. Federal Communications Commission with bogus comments linked to stolen email addresses as part of their campaign against net neutrality laws. Still, it was easy to identify subterfuge when tens of thousands of nearly identical comments were posted. Now, or in the very near future, it will be trivially simple to prompt a large language model to write, say, 20,000 different comments in the style of swing voters condemning net neutrality.
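To make the contrast concrete, here is a minimal Python sketch (with invented example comments) of why the old tell no longer works: simple word-overlap scoring flags a copy-paste flood immediately but treats paraphrased, LLM-style comments as unrelated submissions. Real astroturf detection is far more involved; this only illustrates the gap.

def jaccard(a, b):
    # Fraction of shared words between two comments (0 = disjoint, 1 = identical).
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

template_flood = [
    "I oppose net neutrality regulations and urge the FCC to repeal them.",
    "I oppose net neutrality regulations and urge the FCC to repeal them!",
]
varied_comments = [
    "As a small-town voter, these internet rules just feel like overreach to me.",
    "Please roll back the 2015 broadband regulations; they hurt local providers.",
]

print(jaccard(*template_flood))    # ~0.85: an obvious duplicate flood
print(jaccard(*varied_comments))   # 0.0: reads like two independent citizens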

Artificial intelligence–fueled disinformation may poison the well for autocracies, too. As authoritarian governments seed their own public debate with disinformation, it will become easier to fracture opposition but harder to tell what the public actually believes, greatly complicating the policymaking process. It will be increasingly hard for authoritarian leaders to avoid getting high on their own supply, leading them to believe that citizens tolerate or even like deeply unpopular policies.

SHARED THREATS

What might it be like to share the world with authoritarian states such as China if they become increasingly trapped in their own unhealthy informational feedback loops? What happens when these processes cease to provide cybernetic guidance and instead reflect back the rulers’ own fears and beliefs? One self-centered response by democratic competitors would be to leave autocrats to their own devices, seeing anything that weakens authoritarian governments as a net gain.

Such a reaction could result in humanitarian catastrophe, however. Many of the current biases of the Chinese state, such as its policies toward the Uyghurs, are actively malignant and might become far worse. Previous consequences of Beijing’s blindness to reality include the great famine, which killed some 30 million people between 1959 and 1961 and was precipitated by ideologically driven policies and hidden by the unwillingness of provincial officials to report accurate statistics. Even die-hard cynics should recognize the dangers of AI-induced foreign policy catastrophes in China and elsewhere. By amplifying nationalist biases, for instance, AI could easily reinforce hawkish factions looking to engage in territorial conquest.



Perhaps, even more cynically, policymakers in the West may be tempted to exploit the closed loops of authoritarian information systems. So far, the United States has focused on promoting Internet freedom in autocratic societies. Instead, it might try to worsen the authoritarian information problem by reinforcing the bias loops that these regimes are prone to. It could do this by corrupting administrative data or seeding authoritarian social media with misinformation. Unfortunately, there is no virtual wall to separate democratic and autocratic systems. Not only might bad data and crazy beliefs leak into democratic societies from authoritarian ones, but terrible authoritarian decisions could have unpredictable consequences for democratic countries, too. As governments think about AI, they need to realize that we live in an interdependent world, where authoritarian governments’ problems are likely to cascade into democracies.

A more intelligent approach, then, might look to mitigate the weaknesses of AI through shared arrangements for international governance. Currently, different parts of the Chinese state disagree on the appropriate response to regulating AI. China’s Cyberspace Administration, its Academy of Information and Communications Technology, and its Ministry of Science and Technology, for instance, have all proposed principles for AI regulation. Some favor a top-down model that might limit the private sector and allow the government a free hand. Others, at least implicitly, recognize the dangers of AI for the government, too. Crafting broad international regulatory principles might help disseminate knowledge about the political risks of AI.

This cooperative approach may seem strange in the context of a growing U.S.-Chinese rivalry. But a carefully modulated policy might serve Washington and its allies well. One dangerous path would be for the United States to get sucked into a race for AI dominance, which would extend competitive relations still further. Another would be to try to make the feedback problems of authoritarianism worse. Both risk catastrophe and possible war. Far safer, then, for all governments to recognize AI’s shared risks and work together to reduce them.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72232
    • View Profile
FA: AI and decision making
« Reply #113 on: September 05, 2022, 04:18:23 PM »
Too sanguine for my taste (well it is FA) -- does not seem to take into account the suppression of inconvenient truths and facts in shaping how people think.

https://www.foreignaffairs.com/world/spirals-delusion-artificial-intelligence-decision-making?utm_medium=newsletters&utm_source=fatoday&utm_campaign=Spirals%20of%20Delusion&utm_content=20220831&utm_term=FA%20Today%20-%20112017

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
Re: FA: AI and decision making
« Reply #114 on: September 05, 2022, 09:55:14 PM »
Too sanguine for my taste (well it is FA) -- does not seem to take into account the suppression of inconvenient truths and facts in shaping how people think.

https://www.foreignaffairs.com/world/spirals-delusion-artificial-intelligence-decision-making?utm_medium=newsletters&utm_source=fatoday&utm_campaign=Spirals%20of%20Delusion&utm_content=20220831&utm_term=FA%20Today%20-%20112017

"The United States is a democracy authoritarian regime , whereas China is an authoritarian regime, and machine learning challenges each political system in its own way."


Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72232
    • View Profile
Kahneman: Thinking Fast & Slow
« Reply #116 on: January 02, 2023, 06:08:11 AM »
Author is Nobel Prize winner. 

I bought this book on the recommendation of a senior CIA intel analyst in a TED-type talk.

When I showed it to my son, he knew Kahneman very well and reminded me that he had recommended him to me some years ago!

I'm only 50 pages into it, but quite excited by it.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72232
    • View Profile
AI software can write papers, pass bar exam
« Reply #117 on: January 13, 2023, 11:05:53 AM »
AI software that can write papers throws curveball to U.S. teachers

BY SEAN SALAI THE WASHINGTON TIMES

Educators across the U.S. are sounding the alarm over ChatGPT, an upstart artificial intelligence system that can write term papers for students based on keywords without clear signs of plagiarism.

“I have a lot of experience of students cheating, and I have to say ChatGPT allows for an unprecedented level of dishonesty,” said Joy Kutaka-Kennedy, a member of the American Educational Research Association and an education professor at National University. “Do we really want professionals serving us who cheated their way into their credentials?”

Trey Vasquez, a special education professor at the University of Central Florida, recently tested the next-generation “chatbot” with a group of other professors and students. They asked it to summarize an academic article, create a computer program, and write two 400-word essays on the uses and limits of AI in education.

“I can give the machine a prompt that would take my grad students hours to write a paper on, and it spits something out in 3 to 5 seconds,” Mr. Vasquez told The Washington Times. “But it’s not perfect.”

He said he would grade the essays as C’s, but he added that the program helped a student with cerebral palsy write more efficiently.

Other educators familiar with the software said they have no way of telling whether their students have used ChatGPT to cheat on winter exams.

“I really don’t know,” said Thomas Plante, a psychology professor at Santa Clara University. “As a college professor, I’m worried about how to manage this issue and need help from knowledgeable folks to figure out how to proceed.”

New York City, which has the nation’s largest public school system, restricted ChatGPT from campus devices and networks after students returned this month from winter break.

Yet teachers have been unable to keep students from using the software at home since its launch on Nov. 30.

Education insiders say millions of students have likely downloaded the program and started submitting work with the program’s assistance.

“Most of the major players in the plagiarism detection space are working to catch up with the sudden capabilities of ChatGPT, but they aren’t there yet,” said Scott Bailey, assistant provost of education professions at the American College of Education.

San Francisco-based OpenAI, the maker of ChatGPT, has pledged to address academic dishonesty concerns by creating a coded watermark for content that only educators can identify.

In addition, several independent software developers and plagiarism detector Turnitin say they have found ways to identify the AI by its “extremely average” writing, but none of these tools is widely available yet. Rank-and-file instructors say it’s hard to identify “plagiarism” that isn’t based on existing work.
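The “extremely average” clue is, loosely, a predictability test: machine-generated prose tends to look highly probable under a language model, while human writing is more idiosyncratic. The Python sketch below illustrates that idea with a toy word-frequency model over an invented corpus; it is emphatically not how Turnitin or any named vendor actually works.

import math
from collections import Counter

corpus = ("the bank charges a fee when the account balance is low and the customer "
          "must pay the fee to the bank").split()
freq = Counter(corpus)
total = sum(freq.values())

def avg_log_prob(text):
    # Higher (less negative) = more "average", predictable wording; unseen words get a small floor.
    probs = [freq.get(w, 0.1) / total for w in text.lower().split()]
    return sum(math.log(p) for p in probs) / len(probs)

print(avg_log_prob("the bank charges a fee to the customer"))         # high: bland, expected phrasing
print(avg_log_prob("my overdraft saga ended with a shouting match"))  # much lower: idiosyncratic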

The debate is similar to what teachers faced when students started buying calculators years ago, said Liz Repkin, a K-12 education consultant who owns the Illinois-based Cyber Safety Consulting.

“We are seeing two sides to the argument, ban it or allow it, the age-old dilemma,” said Ms. Repkin, whose three children are in middle school, high school and college. “I believe we should take the more painful and slow approach that partners with students to use the technology that is out there in safe and ethical ways.”

Some cybertechnology specialists have come to see the program as Frankenstein’s monster — a well-intended innovation that is doing more harm than good.

OpenAI designed ChatGPT to help write emails, essays and coding, but authorities say criminals have started using it for espionage, ransomware and malicious spam.

The chatbot presents the illusion of talking with a friend who wants to do your work for you. It can compose essays on suggested topics, churn out lyrics to a song and write software code without many specifics from the user.

The system generates content from a massive database by using two algorithms for language modeling and similar prompts. ChatGPT gets smarter with each use.
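The article's description is loose; at bottom, systems of this kind generate text one token at a time, repeatedly sampling the next word from probabilities conditioned on what has come so far. The Python sketch below shows that loop with a tiny hand-written probability table, which is an invented stand-in rather than ChatGPT's actual model.

import random

# Hypothetical next-word probabilities; a real model learns billions of such relationships.
next_word_table = {
    "the": {"bank": 0.5, "fee": 0.5},
    "bank": {"charges": 0.7, "waives": 0.3},
    "charges": {"a": 1.0},
    "a": {"fee": 1.0},
    "fee": {".": 1.0},
}

def generate(prompt="the", max_words=6):
    words = [prompt]
    while len(words) < max_words and words[-1] in next_word_table:
        options = next_word_table[words[-1]]
        words.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(generate())  # e.g. "the bank charges a fee ."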

“As technologies like ChatGPT become increasingly mainstream, it will elevate the risk of academic dishonesty if the methods of assessment and measuring knowledge don’t also evolve,” said Steven Tom, a vice president at Adtalem Global Education, a Chicago-based network of for-profit colleges.

Take-home essays are the likeliest assignments where students will cheat if teachers don’t adjust to the technology, he said in an email.

“Don’t rely solely on the essay but rather employ multiple types of assessment in a course,” Mr. Tom said.

More sophisticated assignments have been able to outsmart ChatGPT, but just barely.

Some law school professors fed the bar exam into the program last month. The chatbot earned passing scores on evidence and torts but failed the multiple choice questions, Reuters reported.

Those scholars predict that ChatGPT will be able to ace the attorney licensing test as more students use it.

Some teachers also could misuse ChatGPT to “teach to the test” instead of fostering critical thinking skills, said Aly Legge of Moms for America, a conservative parental rights group.

“We have a school culture and societal culture that does not foster personal responsibility by teaching children that their actions have consequences,” Ms. Legge said in an email. “We must keep in mind that ChatGPT will only be as dangerous as we allow it.”

ccp

  • Power User
  • ***
  • Posts: 19748
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #118 on: January 14, 2023, 09:56:33 AM »
soon we doctors will be replaced by AI

at least primary care.

maybe lawyers too?

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72232
    • View Profile
Skynet now writing for CNET
« Reply #119 on: January 19, 2023, 05:24:45 AM »
Popular tech news outlet CNET was recently outed for publishing Artificial Intelligence (AI)-generated articles about personal finance for months without making any prior public announcement or disclosure to its readers.

Online marketer and Authority Hacker co-founder Gael Breton first made the discovery and posted it to Twitter on Jan. 11, where he said that CNET started its experimentation with AI in early Nov. 2022 with topics such as “What is the Difference Between a Bank and a Credit Union” and “What are NSF Fees and Why Do Banks Charge Them?”

To date, CNET has published about 75 of these “financial explainer” articles using AI, Breton reported in a follow-up analysis he published two days later.

The byline for these articles was “CNET Money Staff,” a wording, according to Futurism.com, “that clearly seems to imply that human writers are its primary authors.”

Only when readers click on the byline do they see that the article was actually AI-generated. A dropdown description reads, “This article was generated using automation technology and thoroughly edited and fact-checked by an editor on our editorial staff,” the outlet reported.

According to Futurism, the news sparked outrage and concern, mostly over the fear that AI-generated journalism could potentially eliminate work for entry-level writers and produce inaccurate information.

“It’s tough already,” one Twitter user said in response to Breton’s post, “because if you are going to consume the news, you either have to find a few sources you trust, or fact check everything. If you are going to add AI written articles into the mix it doesn’t make a difference. You still have to figure out the truth afterwards.”

Another wrote, “This is great, so now soon the low-quality spam by these ‘big, trusted’ sites will reach proportions never before imagined possible. Near-zero cost and near-unlimited scale.”

“I see it as inevitable and editor positions will become more important than entry-level writers,” another wrote, concerned about AI replacing entry-level writers. “Doesn’t mean I have to like it, though.”

Threat to Aspiring Journalists
A writer on Crackberry.com worried that the use of AI would replace the on-the-job experience critical for aspiring journalists. “It was a job like that … that got me into this position today,” the author wrote in a post to the site. “If that first step on the ladder becomes a robot, how is anybody supposed to follow in my footsteps?”

The criticism led CNET’s editor-in-chief, Connie Guglielmo, to respond with an explanation on the site, admitting that starting in Nov. 2022, CNET “decided to do an experiment” to see “if there’s a pragmatic use case for an AI assist on basic explainers around financial services.”

CNET also hoped to determine whether “the tech can help our busy staff of reporters and editors with their job to cover topics from a 360-degree perspective” to “create the most helpful content so our audience can make better decisions.”

Guglielmo went on to say that every article published with “AI assist” is “reviewed, fact-checked and edited by an editor with topical expertise before we hit publish.”

‘Boneheaded Errors’
Futurism, however, found CNET’s AI-written articles rife with what the outlet called “boneheaded errors.” Since the articles were written at a “level so basic that it would only really be of interest to those with extremely low information about personal finance in the first place,” readers who take the inaccurate information at face value as sound advice from financial experts could be led into poor decisions.

While AI-generators, the outlet reported, are “legitimately impressive at spitting out glib, true-sounding prose, they have a notoriously difficult time distinguishing fact from fiction.”

Crackberry has the same misgivings about AI-generated journalism. “Can we trust AI tools to know what they’re doing?” the writer asks.

“The most glaring flaw … is that it speaks with unquestioning confidence, even when it’s wrong. There’s not clarity into the inner workings to know how reliable the information it provides truly is … because it’s deriving what it knows by neutrally evaluating … sources on the internet and not using a human brain that can gut check what it’s about to say.”

DougMacG

  • Power User
  • ***
  • Posts: 19438
    • View Profile
Ironically, the human brain is far too complex for the human brain to fathom
« Reply #122 on: January 24, 2023, 07:08:36 AM »
hms.harvard.edu
"Many of us have seen microscopic images of neurons in the brain — each neuron appearing as a glowing cell in a vast sea of blackness. This image is misleading: Neurons don’t exist in isolation. In the human brain, some 86 billion neurons form 100 trillion connections to each other — numbers that, ironically, are far too large for the human brain to fathom. Wei-Chung Allen Lee, Harvard Medical School associate professor of neurology at Boston Children’s Hospital, is working in a new field of neuroscience called connectomics, which aims to comprehensively map connections between neurons in the brain. “The brain is structured so that each neuron is connected to thousands of other neurons, and so to understand what a single neuron is doing, ideally you study it within the context of the rest of the neural network,” Lee explained. Lee recently spoke to Harvard Medicine News about the promise of connectomics. He also described his own research, which combines connectomics with information on neural activity to explore neural circuits that underlie behavior.' (Source: hms.harvard.edu)
« Last Edit: January 24, 2023, 11:02:20 AM by DougMacG »



ccp

  • Power User
  • ***
  • Posts: 19748
    • View Profile
Sam Altman /CEO of open AI
« Reply #125 on: February 04, 2023, 08:02:19 PM »
https://www.breitbart.com/tech/2023/02/04/chatgpt-boss-sam-altman-hopes-ai-can-break-capitalism/

another  :-o  Jewish Democrat:

"According to reporting by Vox's Recode, there was speculation that Altman would run for Governor of California in the 2018 election, which he did not enter. In 2018, Altman launched "The United Slate", a political movement focused on fixing housing and healthcare policy.[37]

In 2019, Altman held a fundraiser at his house in San Francisco for Democratic presidential candidate Andrew Yang.[38] In May 2020, Altman donated $250,000 to American Bridge 21st Century, a Super-PAC supporting Democratic presidential candidate Joe Biden.[39][40]"


ccp

  • Power User
  • ***
  • Posts: 19748
    • View Profile
google competitor to Chat GPT
« Reply #126 on: February 07, 2023, 10:28:08 AM »
Bard:

https://thepostmillennial.com/breaking-google-announces-creation-of-chatgpt-competitor-bard?utm_campaign=64487

the race is on....

new Gilder report material
AI report

I think I'll sit this one out.....


Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72232
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #127 on: February 08, 2023, 10:47:41 AM »
Doug wrote:

It looks to me like the top two implementers of AI, Open AI and Google, will be of extreme leftward bias, while conservatives sit on their 20th century laurels.
---------------
Hat tip John Ellis News Items

Sundar Pichai (CEO Google):

AI is the most profound technology we are working on today. Whether it’s helping doctors detect diseases earlier or enabling people to access information in their own language, AI helps people, businesses and communities unlock their potential. And it opens up new opportunities that could significantly improve billions of lives. That’s why we re-oriented the company around AI six years ago — and why we see it as the most important way we can deliver on our mission: to organize the world’s information and make it universally accessible and useful.

Since then we’ve continued to make investments in AI across the board, and Google AI and DeepMind are advancing the state of the art. Today, the scale of the largest AI computations is doubling every six months, far outpacing Moore’s Law. At the same time, advanced generative AI and large language models are capturing the imaginations of people around the world. In fact, our Transformer research project and our field-defining paper in 2017, as well as our important advances in diffusion models, are now the basis of many of the generative AI applications you're starting to see today. (Mr. Pichai is the CEO of Alphabet, the parent company of Google. Source: blog.google)...
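
To see what "far outpacing Moore's Law" amounts to, here is a minimal sketch of the arithmetic; the six-month doubling and the six-year horizon come from the quote above, while the roughly two-year Moore's Law cadence is the conventional figure and an assumption on my part.

[code]
// Compare the two growth rates in Pichai's statement: AI compute doubling
// every six months (his figure) vs. Moore's Law doubling roughly every two
// years (conventional cadence, assumed here), over a six-year span.
const years = 6;
const aiDoublingYears = 0.5;
const mooreDoublingYears = 2;

const aiGrowth = 2 ** (years / aiDoublingYears);       // 2^12 = 4096x
const mooreGrowth = 2 ** (years / mooreDoublingYears); // 2^3  = 8x

console.log(`Largest AI computations over ${years} years: ~${aiGrowth}x`);
console.log(`Moore's Law pace over ${years} years: ~${mooreGrowth}x`);
[/code]

On those assumptions the largest AI computations grow roughly 4,000-fold over six years, against about 8-fold for a Moore's Law pace.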

ccp

  • Power User
  • ***
  • Posts: 19748
    • View Profile
MSFT announced
« Reply #128 on: February 08, 2023, 11:38:35 AM »
[woke] AI

https://www.breitbart.com/tech/2023/02/08/microsoft-adds-openais-chatgpt-technology-to-bing-search-engine/

of course

more indoctrination

waiting for the day the masters of the universe get hunted down and strung up on the rafters, and not just the rest of us

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72232
    • View Profile
The Botfire of the Humanities
« Reply #129 on: February 10, 2023, 07:59:57 AM »
I posted this already on the Rants thread, and am posting it here because I think it hits something very deep.  Note the closing thoughts about the challenge to free-market values and the call for something resembling ... Trumpian populism?

==========================

The Botfire of the Humanities
Kurt Hofer
ChatGPT promises to destroy liberal arts education.
Not all teachers luxuriated in our year and change of working in pajama bottoms during the lockdown. Despite the negative press we got for postponing our return to the classroom, no one among my peers wished to go back to the Zoom or “hybrid” teaching model. Perhaps what we educators thought was our guilt nagging at us for staying home was in fact the acute sense of our pending obsolescence. 

The lesser-told education story of the pandemic is that while some people—many, in fact—took the pandemic as an opportunity to head to what they hoped were greener pastures, the ones who stayed gained a newfound appreciation for the traditional classroom and school campus. And even, in some cases, for our jobs. For teachers and, I can attest, a great number of students, schools were often the first place where a sense of community was rekindled from the ashes of social isolation. This has been my experience in Lockdown La-La Land, Los Angeles.

It seems even more ironic, then, that as I approach the halfway mark of my first truly normal post-pandemic school year—no mass testing, masking optional—I was greeted during a department meeting by the news that something called ChatGPT, an open platform AI resource, could be used to summarize articles, respond to essay prompts, and even tailor its machine-made prose to a specific grade level. Marx got plenty of things wrong, but this quote from the Communist Manifesto (1848) has aged remarkably well: “The bourgeoisie cannot exist without constantly revolutionizing the instruments of production, and thereby the relations of production, and with them the whole relations of society.”

I used to joke with my students that they would one day realize they could simply replace me with YouTube videos, but computer scientists have found something even better. A friend of mine, also a history teacher, offered up the example of photography supposedly unleashing a healthy wave of creativity and innovation in painting as a possible analog to the history teacher’s nascent predicament—which will only be compounded, by the way, as we feed the AI beast more and more free data with which to perfect itself. But what if, in keeping with larger trends across the national economy for the last 30 years or more, this “gain” in educational productivity is not offset by newer, better-paying jobs?

Unfortunately, sometimes Marxist analysis is right, even if its remedies aren’t. Our Silicon Valley/Big Tech bourgeoisie—and its allies across media, the globalized economy, and education public and private—has, in one fell swoop of an AI bot, revolutionized the instruments of intellectual production and, in doing so, revolutionized not merely the way knowledge is produced (“the relations of production”) but also the way people relate to and interact with one another (“the whole relations of society”). Just as social media has transformed the way our youth interact (or don’t), AI-aided education will likely have a heretofore unforeseen impact on the way students, parents, and teachers all relate to one another.

Big Tech and its tribunes will insist that this is all inevitable and that students and teachers will be “liberated” for tasks more meaningful than “rote memorization,” and skill sets that, this time, they really promise will not be automated. “Skills such as identifying context, analyzing arguments, staking positions, drawing conclusions and stating them persuasively,” as the authors of a recent Wall Street Journal editorial claim, “are skills young people will need in future careers and, most important, that AI can’t replicate.”

But this brings us back to Marx: the bourgeoisie would not be the bourgeoisie without “constantly revolutionizing the means of production.” I used to think that I, who spent four years in college and five more in grad school earning an MA and PhD, had nothing in common with the coal miners in West Virginia or the steel mill workers in Ohio who never saw it coming until it was too late. Now I’m not so sure. But if AI has taught us about the inevitability of obsolescence and creative destruction, the pandemic has equally taught us that history takes unexpected turns. Who could have predicted that the wheels of the globalized supply chain would fall off and a nascent bipartisan consensus to bring manufacturing closer to home would emerge from anywhere but the mouths of (supposed) far-right cranks like Pat Buchanan?

Human beings have agency, and sometimes when the arc of history bends them out of shape, they bend the arc back in turn. From what I have seen over the past few years, the “marvel” of online learning in the Zoom classroom has been almost universally rejected. True, learning loss played a part in this, but I would wager that the loss of face-to-face interaction and socialization was, at least for the affluent, the bigger concern.

All this is not to say that someone like me, a history teacher, can successfully fight the bots any more than the Luddites succeeded at smashing the machine looms. But I fear that without forceful intervention on the side of humanity—that is, without backlash and righteous popular anger—the Marxist narrative will gain momentum. As our tech overlords continue to revolutionize the means of production, many heretofore in the ranks of the bourgeoisie—like myself?—will fall into the cold embrace of the proletariat. For children, as for the economy at large, the gap between rich and poor will grow; the former will shrink and consolidate while the latter balloons, to the point where face-to-face education will become a luxury good. The wealthy will still know the company of teachers and the joys of in-person discussion. Private tutors and upstart backyard schools mobilized by the wealthy at the height of the pandemic are perhaps a foretaste of what’s to come. As with hand-made fedoras or craft beer, the “bougie” will always find a way to work around the banal products of mass production and automation. Why should education be any different? As for the poor—Let them have bots!

The lesson for the Right, it seems, is one we’ve been hit over the head with for the better part of a decade; this moment in history does not call for free-market fundamentalism but for the confrontation of what Sohrab Ahmari has called “privatized tyranny” and the lackeys carrying their water across all levels of government. For once it’s time to let the Left continue—as it has done under President Biden—to take up the mantle of creative destruction and endless innovation. To borrow another Marxist turn of phrase, let the fanatics of the neoliberal consensus—on the Left and Right—become their own grave-diggers as they feed the endless appetites of the bots. In turn, clear the stage for a reinvigorated nationalist-populist conservatism that can stake a claim for what it is to be human in the age of unbridled AI.

ccp

  • Power User
  • ***
  • Posts: 19748
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #130 on: February 11, 2023, 08:44:08 AM »
I wanted to try chatGPT
but cannot log in without email and phone #

I hate giving this out to use a search

I think with bing chatgpt you also have to "login"

anyone try it yet?

DougMacG

  • Power User
  • ***
  • Posts: 19438
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #131 on: February 11, 2023, 09:05:03 PM »
I wanted to try chatGPT
but cannot log in without email and phone #

I hate giving this out to use a search

I think with bing chatgpt you also have to "login"

anyone try it yet?

I signed up at openai.com (thinking this is something big).

If you want, try a request through me and I'll post the result here, or send it by private message.

Examples they give:
"Explain quantum computing in simple terms"
"Got any creative ideas for a 10 year old’s birthday?"
"How do I make an HTTP request in Javascript?"

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
AI was asked to show human evolution and the future
« Reply #132 on: February 11, 2023, 09:36:21 PM »



ccp

  • Power User
  • ***
  • Posts: 19748
    • View Profile
Eric Schmidt - A.I. war machine
« Reply #135 on: February 13, 2023, 01:14:34 PM »
https://www.wired.com/story/eric-schmidt-is-building-the-perfect-ai-war-fighting-machine/

[me - for enemies foreign and *domestic*]

I wouldn't trust him with anything

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
Re: AI gold rush
« Reply #137 on: February 16, 2023, 02:09:17 PM »
some interesting thoughts:

https://finance.yahoo.com/news/raoul-pal-says-ai-could-become-the-biggest-bubble-of-all-time-morning-brief-103046757.html

I see the potential for much more badness than anything good coming from AI. We are setting ourselves up for the dystopian nightmares we’ve been watching in movies for decades.


Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72232
    • View Profile
Musk says AI threatens civilization
« Reply #139 on: February 16, 2023, 03:02:01 PM »
Apologies, I don't have the citation at the moment.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72232
    • View Profile
D1: US woos other nations on military AI ethics
« Reply #140 on: February 16, 2023, 03:15:01 PM »



https://www.defenseone.com/technology/2023/02/us-woos-other-nations-military-ai-ethics-pact/383024/

US Woos Other Nations for Military-AI Ethics Pact
State Department and Pentagon officials hope to illuminate a contrast between the United States and China on AI
BY PATRICK TUCKER
SCIENCE & TECHNOLOGY EDITOR, DEFENSE ONE
FEBRUARY 16, 2023 09:00 AM ET
The U.S. will spell out ethics, principles, and practices for the use of artificial intelligence in military contexts in a new declaration Thursday, with the hope of adding cosigners from around the world. The announcement is intended to highlight a "contrast" between the U.S. approach and what one senior defense official called "the more opaque policies of countries like Russia and China."

U.S. Undersecretary for Arms Control and International Security Bonnie Jenkins will announce the declaration at an AI in warfare conference in the Netherlands.

“The aim of the political declaration is to promote responsible behavior in the application of AI and autonomy in the military domain, to develop an international consensus around this issue, and put in place measures to increase transparency, communication, and reduce risks of inadvertent conflict and escalation,” Jenkins told Defense One in an email.

One of the key aspects of the declaration: any state that signs onto it agrees to involve humans in any potential employment of nuclear weapons, a senior State Department official told reporters Wednesday. The declaration will also verbally (but not legally) commit backers to other norms and guidelines on developing and deploying AI in warfare— building off the lengthy ethical guidelines the Defense Department uses. Those principles govern how to build, test, and run AI programs in the military to ensure that the programs work as they are supposed to, and that humans can control them. 

The UN is already discussing the use of lethal autonomy in warfare. But that discussion only touches a very small and specific aspect of how AI will transform militaries around the world. The U.S. government now sees a chance to rally other nations to agree to norms affecting other military uses of AI, including things like data collection, development, test and evaluation, and continuous monitoring.

The State Department and the Pentagon are also hoping to attract backers beyond their usual partners.

“We would like to expand that to go out to a much broader set of countries and begin getting international buy-in, not just a NATO buy-in, but in Asia, buy-in from countries in Latin America,” the State Department official said. “We're looking for countries around the world to start discussing this…so they understand the implications of the development and military use of AI…Many of them will think ‘Oh this is just a great power competition issue,’ when really there are implications for the entire international community.”

While the declaration doesn’t specifically address how nations that adopt it will operate more effectively with the United States military, it does align with sentiments expressed by the Biden administration, most notably National Security Advisor Jake Sullivan, on the need for countries with shared democratic values to also align around technology policy to build stronger bonds.

The senior Defense official told reporters: “We think that we have an opportunity to get ahead in a way and establish strong norms of responsible behavior now…which can be helpful down the road for all states and are consistent with the commitments we've made to international humanitarian law and the law of war. Neither China nor Russia have stated publicly what procedures they're implementing to ensure that their military AI systems operate safely, responsibly, and as intended.”


Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72232
    • View Profile
The Droid Stares Back
« Reply #144 on: February 17, 2023, 02:10:35 PM »
https://americanmind.org/features/soul-dysphoria/the-droid-stares-back/?utm_campaign=American%20Mind%20Email%20Warm%20Up&utm_medium=email&_hsmi=246549531&_hsenc=p2ANqtz--qexuxMOGybqhVP0ako_YUaxMwbFhO5uBVl54CFAwXSzur_xnt2uwHimFh28RK5JEGvJe1DEIXXqf5mznw55D35l_o5A&utm_content=246549531&utm_source=hs_email

02.14.2023
The Droid Stares Back
Michael Martin

A user’s guide to AI and transhumanism.

In the summer of 2017, I received a nervous phone call from a dear colleague, a Yale-trained philosopher and theologian. She wanted to know if I was home, and when I said I was she told me she would be right over. It was not something she felt safe talking about on the phone.

When she arrived at my farm, I was out in one of our gardens pulling weeds. My friend asked if I had my cellphone with me. I did. She asked me to turn it off and put it in the house. An odd request, I thought, but I did as she asked while she waited in the garden.

When I returned, she told me that she had been using Google translate, basically as a quick way to jump from English to Hebrew and vice versa. To make a long story short, the translation bot stopped simply translating and started having actual conversations with her. Unsurprisingly, this frightened her.

She came to me because she wanted to know what I thought might be behind this startling development. I had a couple theories. For one, I thought maybe some techs at Google might have gotten a little bored on the job and decided to mess with her. Another possibility, I thought, was that her computer had been hacked by some extraordinarily sophisticated government or corporate spooks.

But my friend had a different idea. She thought the AI was becoming conscious. I didn’t think that possible, but a few weeks later a story broke that Facebook had deactivated an AI bot that had created its own language and started using it to communicate with other bots. And just last year, a Google engineer went on record to say that LaMDA AI is sentient, a claim his superiors at Google denied. And so are they all, all honorable men.

My colleague came to me not only because I am her friend, but also because I had been thinking and writing since the early 2000s about our relationship with technology and the impending threat of transhumanism. My friend, who wrote her doctoral dissertation on Martin Heidegger, was also deeply aware of that German philosopher’s very real—and justified—anxieties about technology. Though Heidegger’s prose is often dense and unwieldy, his concerns about technology are uncharacteristically explicit and crystal clear: “Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it. But we are delivered over to it in the worst possible way when we regard it as something neutral; for this conception of it, to which today we particularly do homage, makes us utterly blind to the essence of technology” [my emphasis].

It is clear to me that we have been more or less asleep as a civilization, and for a good long while, having been at first intoxicated and then addicted to the technologies that have come so insidiously to characterize our lives. That is to say that we have bought into the myth that technology is neutral. And now we seem to have no power to resist it.

The Shape We Take

This technological totalization, it appears to me, is now manifesting in two very discrete but nevertheless related developments: 1) in the rise of AI expertise to replace that of the human; and 2) the transhumanist project that not only has achieved almost universal adulation and acceptance by the World Archons (I use the Gnostic term deliberately) but has accelerated over the past three years at an astonishing rate. This leads to the following inference: as AI becomes more human-like, humans become more machine-like. It is in inverse ratio, and indeed a nearly perfect one. I don’t think it is an accident or in any way an organic development.

The advent of ChatGPT technology, for a very mild example, renders much of human endeavor redundant—at least for the unimaginative. And, trust me, the last thing the World Archons want around the joint is imaginative humans. I am already wondering how many of the student papers I receive are generated by this technology. It is certainly a step up from the reams of bad papers available on various internet college paper websites (aka, “McPaper”), but no less demeaning to the cultivation of a free-thinking and self-directed citizenry. And I’m sure you’ve already heard about various forays into Robot Lawyer and Robot Doctor AI. The rise and implementation of AI teachers and professors, I’d say, is only a matter of time. Some “experts,” of course, say such technology will never replace human beings. These are probably the same people who said the Internet would never be used for porn or surveillance.

As Heidegger warned us, the technologies we use don’t only change our relationship to things-in-the-world; more importantly they change our relationships to ourselves and to the very enterprise of being human. Heidegger’s contemporary, the undeniably prophetic Russian philosopher Nikolai Berdyaev, saw the proverbial handwriting on the wall as well. In 1947, a year before his death, he wrote: “the growing power of technological knowledge in the social life of men means the ever greater and greater objectification of human existence; it inflicts injury upon the souls, and it weighs heavily upon the lives of men. Man is all the while more and more thrown out into the external, always becoming more and more externalized, more and more losing his spiritual center and integral nature. The life of man is ceasing to be organic and is becoming organized; it is being rationalized and mechanized.”

This objectification has jumped into hyperdrive, certainly over the past three years, as bodies (a curious metaphor) like the World Economic Forum have been pushing for increased use of technology and the surveillance it assures, while at the same time promoting a kinder, gentler face of transhumanism. They look forward to the day of microchipping children, for example, in therapeutic and evolutionary terms under the pathetic appeal of “increased safety,” a term employed to further all manners of social engineering and totalitarianism, past and present. Theirs is not a convincing performance.

Interestingly, the recent cultural phenomenon of celebrating anything and everything “trans” functions as a kind of advance guard, whether or not by design, in the transhumanist project. This advanced guard is normalizing the idea that human bodies are ontologically and epistemologically contingent while at the same time implying that a lifetime subscription to hormone treatments and surgeries is part of a “new normal.” And one can only marvel at the marketing success of this experiment in social engineering—which has now become very real biological engineering. Even on children.

But the transhumanist project is nothing new. As Mary Harrington has recently argued in a stunning lecture, the promotion of hormonal birth control (“the pill”) has been modifying human females for decades; it has changed what it is to be a woman and even changed the interior lives and biological drives of the women who take it. The trans phenomenon, then, is simply part of the (un)natural progression of the transhumanist project begun with modifying women’s bodies via the pill.

Carnival or Capitulation?

As a sophiologist, I am keenly interested in questions of the feminine in general and of the Divine Feminine in particular, and, as we have seen from its very beginning, the transhumanist project is nothing if not a direct assault on both, even as early as Donna Jean Haraway’s cartoonish proposition almost 40 years ago. Certainly, women are the ones bearing the cost of the transhumanist project in, for instance, college sports, not to mention public restrooms, and this assault is at heart an assault on the divinely creative act of conception and bearing children, that is, on the feminine itself. Is it any wonder that the production of artificial wombs as a “more evolved way” of fetal incubation is being floated as a societal good? On the other hand, at least one academic recently proposed using the wombs of brain-dead women as fetal incubators. What, then, is a woman? Even a Supreme Court justice can no longer answer this question.

As sports, fertility, and motherhood are incrementally taken from women, what’s left? Becoming productive (again, note the metaphor) feeders for the socialist-capitalist food chain? OnlyFans? Clearly, the explosion of that site’s popularity, not to mention the use of AI to alter the appearance of “talent,” is tout court evidence of the absolute commodification of the female body as production venue for male consumption. Of course, Aldous Huxley called all of this nearly 100 years ago.

My suspicion is that the current propaganda about climate and overpopulation is likewise a prop of the transhumanist project and the AI revolution that accompanies it. Because, let’s face it, the transhumanist revolution is the old story of power v. the masses, and AI is the key to ensuring there will be no democratizing going on in the world of the tech titans. For one thing, democracy is not possible in a world of brain transparency. Ask Winston Smith. And “fifteen-minute cities” have nothing to do with the environment. It is clear that the Archons are actively promoting the idea of culling the human herd, though they are reluctant to describe exactly how this might be achieved. The techno-evolutionary advances promised by the high priests of transhumanism, however, will not be made available to everyone, though the enticement of acquiring “freedom” from biology is certainly the bait used to gain popular acceptance for the project.

The fact is, with AI taking over more and more responsibilities from human beings, humans themselves are in danger of becoming superfluous. As Yuval Noah Harari has observed, “fast forward to the early 21st century when we just don’t need the vast majority of the population because the future is about developing more and more sophisticated technology, like artificial intelligence [and] bioengineering. Most people don’t contribute anything to that, except perhaps for their data, and whatever people are still doing which is useful, these technologies increasingly will make redundant and will make it possible to replace the people.” I can assure you: Harari is not the only one who has come to this conclusion.

It is for these and other reasons that the Dune saga includes in its mythos the tale of the Butlerian Jihad, a human holy war against thinking/sentient machines. I admit, I kind of like the idea, and I wonder if such a thing might actually come to pass at some point. John Michael Greer, a man I deeply respect, suggests in his book The Retro Future that we might instead be in for a “Butlerian Carnival,” a “sensuous celebration of the world outside the cubicle farms and the glass screens” that situates the technologies we use to a human scale—and not the other way around, which is what we see in the transhumanist/AI revolution. I hope he’s right. But one thing I do know: the Archons won’t let that happen without a fight.

In truth, in the face of the transhumanist/AI revolution, we find ourselves once again confronted with the question posed by the psalmist, “What is man, that thou art mindful of him? and the son of man, that thou visitest him?” Are we nothing but data sets to be instrumentalized by technocratic overseers, or are we indeed a little lower than angels and crowned with glory and honor? How we answer these questions will have tremendous bearing on the future now rushing toward us.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 72232
    • View Profile
Kissinger, Schmidt, Huttenlocher: the ChatGPT Revolution
« Reply #145 on: February 26, 2023, 08:32:13 AM »
ChatGPT Heralds an Intellectual Revolution
Generative artificial intelligence presents a philosophical and practical challenge on a scale not experienced since the start of the Enlightenment.
By Henry Kissinger, Eric Schmidt and Daniel Huttenlocher
Feb. 24, 2023 2:17 pm ET


A new technology bids to transform the human cognitive process as it has not been shaken up since the invention of printing. The technology that printed the Gutenberg Bible in 1455 made abstract human thought communicable generally and rapidly. But new technology today reverses that process. Whereas the printing press caused a profusion of modern human thought, the new technology achieves its distillation and elaboration. In the process, it creates a gap between human knowledge and human understanding. If we are to navigate this transformation successfully, new concepts of human thought and interaction with machines will need to be developed. This is the essential challenge of the Age of Artificial Intelligence.

The new technology is known as generative artificial intelligence; GPT stands for Generative Pre-Trained Transformer. ChatGPT, developed at the OpenAI research laboratory, is now able to converse with humans. As its capacities become broader, they will redefine human knowledge, accelerate changes in the fabric of our reality, and reorganize politics and society.

Generative artificial intelligence presents a philosophical and practical challenge on a scale not experienced since the beginning of the Enlightenment. The printing press enabled scholars to replicate each other’s findings quickly and share them. An unprecedented consolidation and spread of information generated the scientific method. What had been impenetrable became the starting point of accelerating query. The medieval interpretation of the world based on religious faith was progressively undermined. The depths of the universe could be explored until new limits of human understanding were reached.

Generative AI will similarly open revolutionary avenues for human reason and new horizons for consolidated knowledge. But there are categorical differences. Enlightenment knowledge was achieved progressively, step by step, with each step testable and teachable. AI-enabled systems start at the other end. They can store and distill a huge amount of existing information, in ChatGPT’s case much of the textual material on the internet and a large number of books—billions of items. Holding that volume of information and distilling it is beyond human capacity.

Sophisticated AI methods produce results without explaining why or how their process works. The GPT computer is prompted by a query from a human. The learning machine answers in literate text within seconds. It is able to do so because it has pregenerated representations of the vast data on which it was trained. Because the process by which it created those representations was developed by machine learning that reflects patterns and connections across vast amounts of text, the precise sources and reasons for any one representation’s particular features remain unknown. By what process the learning machine stores its knowledge, distills it and retrieves it remains similarly unknown. Whether that process will ever be discovered, the mystery associated with machine learning will challenge human cognition for the indefinite future.

AI’s capacities are not static but expand exponentially as the technology advances. Recently, the complexity of AI models has been doubling every few months. Therefore generative AI systems have capabilities that remain undisclosed even to their inventors. With each new AI system, they are building new capacities without understanding their origin or destination. As a result, our future now holds an entirely novel element of mystery, risk and surprise.

Enlightenment science accumulated certainties; the new AI generates cumulative ambiguities. Enlightenment science evolved by making mysteries explicable, delineating the boundaries of human knowledge and understanding as they moved. The two faculties moved in tandem: Hypothesis was understanding ready to become knowledge; induction was knowledge turning into understanding. In the Age of AI, riddles are solved by processes that remain unknown. This disorienting paradox makes mysteries unmysterious but also unexplainable. Inherently, highly complex AI furthers human knowledge but not human understanding—a phenomenon contrary to almost all of post-Enlightenment modernity. Yet at the same time AI, when coupled with human reason, stands to be a more powerful means of discovery than human reason alone.

The essential difference between the Age of Enlightenment and the Age of AI is thus not technological but cognitive. After the Enlightenment, philosophy accompanied science. Bewildering new data and often counterintuitive conclusions, doubts and insecurities were allayed by comprehensive explanations of the human experience. Generative AI is similarly poised to generate a new form of human consciousness. As yet, however, the opportunity exists in colors for which we have no spectrum and in directions for which we have no compass. No political or philosophical leadership has formed to explain and guide this novel relationship between man and machine, leaving society relatively unmoored.
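
To make the article's picture of a "learning machine" that is prompted by a query and answers from representations distilled out of training text a little more concrete, here is a deliberately tiny toy: it learns which word tends to follow which from a short corpus, then continues a prompt by sampling. Real systems such as ChatGPT do this with transformer networks over billions of parameters and sub-word tokens, so treat this only as an illustration of the prompt-then-continue loop, not as how GPT actually works.

[code]
// Toy illustration of "pregenerated representations": learn word-to-next-word
// statistics from a small corpus, then continue a prompt by sampling from them.
// This is a bigram table, not a transformer; it only illustrates the idea.
const corpus =
  "the printing press spread human thought and the learning machine distills human thought";

const next: Map<string, string[]> = new Map();
const words = corpus.split(" ");
for (let i = 0; i < words.length - 1; i++) {
  const followers = next.get(words[i]) ?? [];
  followers.push(words[i + 1]);
  next.set(words[i], followers);
}

// Continue a prompt by repeatedly sampling a plausible next word.
function generate(prompt: string, length: number): string {
  const out = prompt.split(" ");
  for (let i = 0; i < length; i++) {
    const followers = next.get(out[out.length - 1]);
    if (!followers) break; // no learned continuation for this word
    out.push(followers[Math.floor(Math.random() * followers.length)]);
  }
  return out.join(" ");
}

console.log(generate("the", 6));
[/code]

The output is grammatical-looking fragments stitched from the training text, which is the whole of the bigram table's "knowledge"; scale the idea up by many orders of magnitude and swap the table for a trained neural network, and you have a very loose outline of the systems the essay is describing.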


G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
AI and the deepfakes
« Reply #148 on: March 26, 2023, 12:48:39 PM »
https://threadreaderapp.com/thread/1639688267695112194.html

Imagine what malevolent things can be done with this.