Author Topic: Intelligence and Psychology, Artificial Intelligence  (Read 61053 times)

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69401
    • View Profile
Stratfor: AI and great power competition
« Reply #58 on: October 18, 2018, 09:52:05 AM »
Highlights

    Aging demographics and an emerging great power competition pitting China against the United States form the backdrop to a high-stakes race in artificial intelligence development.
    The United States, for now, holds the overall lead in AI development, but China is moving aggressively to overtake its American rival by 2030.
    While deep integration across tech supply chains and markets has occurred in the past couple of decades, rising economic nationalism and a growing battle over international standards will balkanize the global tech sector.
    AI advancements will boost productivity and economic growth, but creative destruction in the workforce will drive political angst in much of the world, putting China's digital authoritarianism model as well as liberal democracies to the test.

For better or worse, the advancement and diffusion of artificial intelligence technology will come to define this century. Whether that statement should fill your soul with terror or delight remains a matter of intense debate. Techno-idealists and doomsayers will paint their respective utopian and dystopian visions of machine-kind, in which today's "narrow AI" makes the leap to a "general AI" that surpasses human cognition within our lifetime. On the opposite end of the spectrum, yawning skeptics will point to Siri's slow intellect and the human instinct of Capt. Chesley "Sully" Sullenberger – the pilot of the US Airways flight that landed safely on the Hudson River in 2009 – to wave off AI chatter as a heap of hype not worth losing sleep over.

The fact is that the development of AI – a catch-all term that encompasses neural networks and machine learning and deep learning technologies – has the potential to fundamentally transform civilian and military life in the coming decades. Regardless of whether you're a businessperson pondering your next investment, an entrepreneur eyeing an emerging opportunity, a policymaker grappling with regulation or simply a citizen operating in an increasingly tech-driven society, AI is a global force that demands your attention.

The Big Picture

As Stratfor wrote in its 2018 Third-Quarter Forecast, the world is muddling through a blurry transition from the post-Cold War world to an emerging era of great power competition. The race to dominate AI development will be a defining feature of U.S.-China rivalry.

An Unstoppable Force

Willingly or not, even the deepest skeptics are feeding the AI force nearly every minute of every day. Every Google (or Baidu) search, Twitter (or Weibo) post, Facebook (or Tencent) ad and Amazon (or Alibaba) purchase is another click creating mountains of data – some 2.2 billion gigabytes globally every day – that companies are using to train their algorithms to anticipate and mimic human behavior. This creates a virtuous (or vicious, depending on your perspective) cycle: the more users engage with everyday technology platforms, the more data is collected; the more data that's collected, the more the product improves; the more competitive the product, the more users and the more billions of dollars in investment it attracts; a growing number of users means more data can be collected, and the loop continues.
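This loop (engagement produces data, data improves the product, a better product attracts more users and investment) can be sketched as a toy simulation. Every constant below is an invented assumption for illustration, not a figure from the article:

```python
# Toy simulation of the user-data-product feedback loop. All growth
# rates and constants here are illustrative assumptions.

def simulate_flywheel(users=1_000_000, quality=0.5, years=5):
    """Return (year, users, quality) snapshots of the flywheel."""
    history = []
    for year in range(1, years + 1):
        data = users * 365                               # interactions logged per year
        quality = min(1.0, quality + 0.05 * data / 1e9)  # more data -> better product
        users = int(users * (1 + 0.2 * quality))         # better product -> more users
        history.append((year, users, round(quality, 3)))
    return history

for year, users, quality in simulate_flywheel():
    print(f"year {year}: {users:,} users, product quality {quality}")
```

Even with these made-up numbers, the compounding structure is visible: user growth accelerates as product quality rises, which is the dynamic that concentrates both data and investment in a few dominant platforms.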

And unlike in previous AI boom-and-bust cycles, the development of this technology is occurring amid rapidly advancing computing power: the use of graphics processing units (GPUs) and the development of custom AI chips are giving developers increasingly potent hardware to drive up efficiency and drive down cost in training their algorithms. To help fuel advancements in AI hardware and software, AI investment is also growing at a rapid pace.

The Geopolitical Backdrop to the Global AI Race

AI is both a driver and a consequence of the structural forces reshaping the global order. Aging demographics – an unprecedented and largely irreversible global phenomenon – are a catalyst for AI development. As populations age and shrink, financial burdens on the state mount and labor productivity slows, sapping economic growth over time. Advanced industrial economies already struggling with the ill effects of aging, and whose governments are politically squeamish about immigration, will relentlessly look to machine learning technologies to increase productivity and economic growth in the face of growing labor constraints.

The global race for AI supremacy will feature prominently in a budding great power competition between the United States and China. China was shocked in 2016 when Google DeepMind's AlphaGo beat the world champion of Go, an ancient Chinese strategy game (Chinese AI state planners dubbed the event their "Sputnik moment"), and has been deeply shaken by U.S. President Donald Trump's trade wars and the West's growing imperative to keep sensitive technology out of Chinese competitors' hands. In the past couple of years alone, China's state focus on AI development has skyrocketed to ensure its technological drive won't suffer a short circuit due to its competition with the United States.

How the U.S. and China Stack Up in AI Development

Do or Die for Beijing

The United States, for now, has the lead in AI development when it comes to hardware, research and development, and a dynamic commercial AI sector. China, by the sheer size of its population, has a much larger data pool, but is critically lagging behind the United States in semiconductor development. Beijing, however, is not lacking in motivation in its bid to overtake the United States as the premier global AI leader by 2030. And while that timeline may appear aggressive, China's ambitious development in AI in the coming years will be unfettered by the growing ethical, privacy and antitrust concerns occupying the West. China is also throwing hundreds of billions of dollars into fulfilling its AI mission, both in collaboration with its standing tech champions and by encouraging the rise of unicorns, or privately held startups valued at $1 billion or more.

By incubating and rewarding more and more startups, Beijing is striking a balance: focusing its national champions on the technologies most critical to the state (sometimes by taking an equity stake in the company) without stifling innovation. In the United States, by contrast, it would be disingenuous to label U.S.-based multinational firms, which park most of their corporate profits overseas, as true "national" champions. Instead of the state taking the lead in funding high-risk, high-impact research in emerging technologies as it has in the past, the roles in the West have flipped; private tech companies are in the driver's seat while the state lunges at the steering wheel, trying desperately to keep China in its rearview mirror.

The Ideological Battleground

The United States may have thought its days of fighting globe-spanning ideological battles ended with the Cold War. Not so. AI development is spawning a new ideological battlefield between the United States and China, pitting the West's notion of liberal democracy against China's emerging brand of digital authoritarianism. As neuroscientist Nicholas Wright highlights in his article, "How Artificial Intelligence Will Reshape the Global Order," China's 2017 AI development plan "describes how the ability to predict and grasp group cognition means 'AI brings new opportunities for social construction.'" Central to this strategic initiative is China's diffusion of a "social credit system" (which is set to be fully operational by 2020) that would assign a score based on a citizen's daily activities to determine everything from airfare class and loan eligibility to what schools your kids are allowed to attend. It's a tech-powered, state-driven approach to parse model citizens from the deplorables, so to speak.

The ability to harness AI-powered facial recognition and surveillance data to shape social behavior is an appealing tool, not just for Beijing, but for other politically paranoid states that are hungry for an alternative path to stability and are underwhelmed by the West's messy track record in promoting democracy. Wright describes how Beijing has exported its Great Firewall model to Thailand and Vietnam to barricade the internet while also supplying surveillance technology to the likes of Iran, Russia, Ethiopia, Zimbabwe, Zambia and Malaysia. Not only does this aid China's goal of providing an alternative to a U.S.-led global order; it also gives China access to ever-wider data pools around the globe with which to hone its own technological prowess.

The European Hustle

Not wanting to be left behind in this AI great power race, Europe and Russia are hustling to catch up, but they will struggle in the end to keep pace with the United States and China. Russian President Vladimir Putin made headlines last year when he told an audience of Russian youths that whoever rules AI will rule the world. But the reality of Russia's capital constraints means Russia will have to choose carefully where it puts its rubles. Moscow will apply a heavy focus on AI military applications and will rely on cyber espionage and theft to try to find shortcuts to AI development, all while trying to maintain its strategic alignment with China to challenge the United States.

The EU Struggle to Create Unicorn Companies

While France harbors ambitious plans to develop an AI ecosystem for Europe and Germany frets over losing its industrial edge to U.S. and Chinese tech competitors, unavoidable and growing fractures within the European Union will hamper Europe's ability to play a leading AI role on the world stage. The European Union's cumbersome regulatory environment and fragmented digital market have been prohibitive for tech startups, a fact reflected in the bloc's low global share of unicorn companies and their value. Meanwhile, the United Kingdom, home to Europe's largest pool of tech talent, will be keen to unshackle itself from the European Union's investment-inhibiting regulations as it stumbles out of the bloc.

A Battle over Talent and Standards

But wherever pockets of tech innovation already exist on the Continent, those relatively few companies and individuals are prime targets for U.S. and Chinese tech juggernauts prowling the globe for AI talent. AI experts are a precious global commodity. According to a 2018 study by Element AI, there are roughly 22,000 doctorate-level AI researchers in the world, but only around 3,000 are actually looking for work and only around 5,400 are presenting their research at AI conferences. U.S. and Chinese tech giants are using a variety of means – mergers and acquisitions, aggressive poaching, launching labs in cities like Paris and Montreal and in Taiwan – to gobble up this tiny talent pool.

Largest Tech Companies by Market Capitalization

Even as Europe struggles to build up its own tech champions, the European Union can use its market size and conscientious approach to ethics, privacy and competition to push back on encroaching tech giants through hefty fines, data localization and privacy rules, taxation and investment restrictions. The bloc's rollout of its General Data Protection Regulation (GDPR) is designed to give Europeans more control over their personal data by limiting data storage times, deleting data on request and monitoring for data breaches. While big-tech firms have the means to adapt and pay fines, the move threatens to cripple smaller firms struggling with the high cost of compliance. It also fundamentally restricts the continental data flows needed to fuel Europe's AI startup culture.

The United States in many ways shares Europe's concerns over issues like data privacy and competition, but it has a fundamentally different approach in how to manage those concerns. The European Union is effectively prioritizing individual privacy rights over free speech, while the United States does the reverse. Brussels will fixate on fairness, even at the cost of the bloc's own economic competitiveness, while Washington will generally avoid getting in the way of its tech champions. For example, while the European Union will argue that Google's dominance in multiple technological applications is by itself an abuse of its power that stifles competition, the United States will refrain from raising the antitrust flag unless tech giants are using their dominant position to raise prices for consumers.

U.S. and European government policy overlap instead in their growing scrutiny over foreign investment in sensitive technology sectors. Of particular concern is China's aggressive, tech-focused overseas investment drive and the already deep integration of Chinese hardware and software in key technologies used globally. A highly diversified company like Huawei, a pioneer in cutting-edge technologies like 5G and a mass producer of smartphones and telecommunications equipment, can leverage its global market share to play an influential role in setting international standards.

Washington, meanwhile, is lagging behind Brussels and Beijing in the race to establish international norms for cyber policy. While China and Russia have been persistent in their attempts to use international venues like the United Nations to codify their version of state-heavy cyber policy, the European Union has worked to block those efforts while pushing its own standards, such as GDPR.

This emerging dynamic of tightening restrictions in the West overall against Chinese tech encroachment, Europe's aggressive regulatory push against U.S. tech giants and China's defense and export of digital authoritarianism may altogether lead to a much more balkanized market for global tech companies in the future.

The AI Political Test of the Century

There is no shortage of AI reports by big-name consulting firms telegraphing to corporate audiences the massive productivity gains to come from AI in a range of industries, from financial, auto, insurance and retail to construction, cleaning and security. A 2017 PwC report estimated that AI could add $15.7 trillion to the global economy in 2030, of which $6.6 trillion would come from increased productivity and $9.1 trillion would come from increased consumption. The potential for double-digit impacts on GDP after years of stalled growth in much of the world is appealing, no doubt.

But lurking behind those massive figures is the question of just how well, how quickly and how much of a country's workforce will be able to adapt to these fast-moving changes. As the Austrian economist Joseph Schumpeter described in his 1942 book, Capitalism, Socialism and Democracy, the "creative destruction" that results from so-called industrial mutations "incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one." In the age of AI, the market will incessantly seek out scientists and creative thinkers. Machines will endlessly render millions of workers irrelevant. And new jobs, from AI empathy trainers to life coaches, will be created. Even as technology translates into productivity and economic gains overall, this will be a wrenching transition if workers are slow to learn new skills and if wage growth remains stagnant for much of the population.

Time will tell which model will be better able to cope with an expected rise in political angst as the world undergoes this AI revolution: China's untested model of digital authoritarianism or the West's time-tested, yet embattled, tradition in liberal democracy.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69401
    • View Profile
Stratfor: How the US-China Power Comp. shapes the future of AI ethics
« Reply #59 on: October 18, 2018, 09:57:49 AM »
second post

How the U.S.-China Power Competition Is Shaping the Future of AI Ethics
By Rebecca Keller
Senior Science and Technology Analyst, Stratfor
A U.S. Air Force MQ-1B Predator unmanned aerial vehicle returns from a mission to an air base in the Persian Gulf region.


    As artificial intelligence applications develop and expand, countries and corporations will have different opinions on how and when technologies should be employed. First movers like the United States and China will have an advantage in setting international standards.
    China will push back against existing Western-led ethical norms as its level of global influence rises and the major powers race to become technologically dominant.
    In the future, ethical decisions that prevent adoption of artificial intelligence applications in certain fields could limit political, security and economic advantages for specific countries.

Controversial new technologies such as automation and artificial intelligence are quickly becoming ubiquitous, prompting ethical questions about their uses in both the private and state spheres. A broader shift on the global stage will drive the regulations and societal standards that will, in turn, influence technological adoption. As countries and corporations race to achieve technological dominance, they will engage in a tug of war between different sets of values while striving to establish ethical standards. Western values have long been dominant in setting these standards, as the United States has traditionally been the most influential innovative global force. But China, which has successfully prioritized economic growth and technological development over the past several decades, is likely to play a bigger role in the future when it comes to tech ethics.

The Big Picture

The great power competition between China and the United States continues to evolve, leading to pushback against international norms, organizations and oversight. As the world sits at a key intersection of geopolitical and technological development, the battles to set new global standards will play out on emerging technological stages.

The field of artificial intelligence will be one of the biggest areas where different players will be working to establish regulatory guardrails and answer ethical questions in the future. Science fiction writer Isaac Asimov wrote his influential laws of robotics in the first half of the 20th century, and reality is now catching up to fiction. Questions over the ethics of AI and its potential applications are numerous: What constitutes bias within the algorithms? Who owns data? What privacy measures should be employed? And just how much control should humans retain in applying AI-driven automation? For many of these questions, there is no easy answer. And in fact, as the great power competition between China and the United States ramps up, they prompt another question: Who is going to answer them?

Questions of right and wrong are based on the inherent cultural values ingrained within a place. From an economic perspective, the Western ideal has always been the laissez-faire economy. And ethically, Western norms have prioritized privacy and the importance of human rights. But China is challenging those norms and ideals, as it uses a powerful state hand to run its economy and often chooses to sacrifice privacy in the name of development. On yet another front, societal trust in technology can also differ, influencing the commercial and military use of artificial intelligence.

Different Approaches to Privacy

One area where countries that intend to set global ethical standards for the future of technology have focused their attention is in the use and monetization of personal data. From a scientific perspective, more data equals better, smarter AI, meaning those with access to and a willingness to use that data could have a future advantage. However, ethical concerns over data ownership and the privacy of individuals and even corporations can and do limit data dispersion and use.

How various entities are handling the question of data privacy is an early gauge for how far AI application can go, in private and commercial use. It is also a question that reveals a major divergence in values. With its General Data Protection Regulation, which went into effect this year, the European Union has taken an early global lead on protecting the rights of individuals. Several U.S. states have passed or are working to pass similar legislation, and the U.S. government is currently considering an overarching federal policy that covers individual data privacy rights.

China, on the other hand, has demonstrated a willingness to prioritize the betterment of the state over the value of personal privacy. The Chinese public is generally supportive of initiatives that use personal data and apply algorithms. For example, there has been little domestic objection to a new state-driven initiative to monitor behavior – from purchases to social media activity to travel – using AI to assign a corresponding "social score." The score would translate to a level of "trustworthiness" that would grant, or deny, access to certain privileges. The program, meant to be fully operational by 2020, will track citizens, government officials and businesses. Similarly, facial recognition technology is already in use, though not ubiquitously, throughout the country and is projected to play an increasingly important role in Chinese law enforcement and governance. China's reliance on such algorithm-based systems would make it among the first entities to place such a hefty reliance on the decision-making capabilities of computers.
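In the abstract, the system described is a weighted aggregation of behavioral signals mapped to access tiers. A purely hypothetical sketch of that mechanism follows; the signals, weights and thresholds are invented for illustration and do not reflect the actual Chinese program:

```python
# Hypothetical behavior-scoring pipeline. The signal names, weights,
# and tier thresholds below are invented for this sketch.
WEIGHTS = {"purchases": 0.2, "social_media": 0.3, "travel": 0.5}

def social_score(signals):
    """Weighted sum of signals (each normalized to [0, 1]), scaled to 0-1000."""
    return round(1000 * sum(WEIGHTS[k] * v for k, v in signals.items()))

def access_tier(score):
    """Map a score to an access level; cutoffs are arbitrary."""
    if score >= 700:
        return "full privileges"
    if score >= 400:
        return "restricted"
    return "denied"

citizen = {"purchases": 0.9, "social_media": 0.4, "travel": 0.8}
score = social_score(citizen)
print(score, access_tier(score))
```

The sketch makes the governance point concrete: once daily activity is reduced to a single number with hard cutoffs, access to services becomes a direct function of surveilled behavior.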

When Ethics Cross Borders and Machine Autonomy Increases

Within a country's borders, the use of AI technology for domestic security and governance purposes may certainly raise questions from human rights groups, but those questions are amplified when use of the technology crosses borders and affects international relationships. One example is Google's potential project to develop a censored search app for the Chinese market. In seeking access to China's market by adhering to the country's rules and regulations, Google could be seen as perpetuating the Chinese government's values and views on censorship. The company left China in 2010 over objections to that very matter.

And these current issues are relatively small in comparison to questions looming on the horizon. Ever-improving algorithms and applications will soon prompt queries about how much autonomy machines "should" have, going far beyond today's credit scores, loans or even social scores. Take automated driving, a seemingly more innocuous application of artificial intelligence and automation. How much control should a human have while in a vehicle? If there is no human involved, who is responsible if and when there is an accident? The answer varies depending on where the question is asked. In societies that place more trust in technology, like Japan, South Korea or China, it will likely be easier to remove key components, such as steering wheels, from future cars. In the United States, despite the country's technological prowess, and even as General Motors applies for permission to put cars without steering wheels on the road, the current administration appears wary.

Defense, the Human Element and the First Rule of Robotics

Closely paraphrased, Asimov's first rule of robotics is that a robot should never harm a human through action or inaction. The writer was known as a futurist and thinker, and his rule still resonates. In terms of global governance and international policy, decisions over the limits of AI's decision-making power will be vital to determining the future of the military. How much human involvement, after all, should be required when it comes to decisions that could result in the loss of human life? Advancements in AI will drive the development of remote and asymmetric warfare, requiring the U.S. Department of Defense to make ethical decisions prompted by both Silicon Valley and the Chinese government.

At the dawn of the nuclear age, the scientific community questioned the ethics of using nuclear understanding for military purposes. More recently, companies in Silicon Valley have been asking similar questions about whether their technological developments should be used in warfare. Google has been vocal about its objections to working with the U.S. military. After controversy and internal dissent over the company's role in Project Maven, a Pentagon-led project to incorporate AI into the U.S. defense strategy, Google CEO Sundar Pichai penned the company's own rules of AI ethics, which require, much as Asimov intended, that it not develop AI for weaponry or for uses that would cause harm. Pichai also stated that Google would not contribute to the use of AI in surveillance that pushes the boundaries of "internationally accepted norms." Recently, Google pulled out of bidding for the Defense Department's Joint Enterprise Defense Infrastructure (JEDI) cloud computing contract. Microsoft employees also issued a public letter objecting to their own company's intent to bid for the same contract. Meanwhile, Amazon CEO Jeff Bezos, whose company is still in the running for the JEDI contract, has bucked this trend, voicing his belief that technology companies must partner with the U.S. military to ensure national security.

There are already certain ethical guidelines in place when it comes to integrating AI into military operations. Western militaries, including that of the United States, have pledged to always maintain a "human-in-the-loop" structure for operations involving armed unmanned vehicles, so as to avoid the ethical and legal consequences of AI-driven attacks. But these rules may evolve as technology improves. The desire for quick decisions, the high cost of human labor and basic efficiency needs are all bound to challenge countries' commitment to keeping a human in the loop. After all, AI could function like a non-human commander, making command and control decisions conceivably better than any human general could.

Even if the United States still abides by the guidelines, other countries — like China — may have far less motivation to do so. China has already challenged international norms in a number of arenas, including the World Trade Organization, and may well see it as a strategic imperative to employ AI in controversial ways to advance its military might. It's unclear where China will draw the line and how it will match up with Western military norms. But it's relatively certain that if one great power begins implementing cutting-edge technology in controversial ways, others will be forced to consider whether they are willing to let competing countries set ethical norms.
Rebecca Keller focuses on areas where science and technology intersect with geopolitics. This diverse area of responsibility includes changes in agricultural technology and water supplies that affect global food supplies, nanotechnology and other developments.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69401
    • View Profile
Stratfor: AI makes personal privacy a matter of national strategy
« Reply #60 on: October 18, 2018, 09:59:51 AM »
Third post

AI Makes Personal Privacy a Matter of National Strategy
By Rebecca Keller
Senior Science and Technology Analyst, Stratfor


    Growing concern in the United States and Europe over the collection and distribution of personal data could decrease the quality and accessibility of a subset of data used to develop artificial intelligence.
    Though the United States is still among the countries best poised to take advantage of AI technologies to drive economic growth, changes in privacy regulations and social behaviors will impair its tech sector over the course of the next decade.
    China, meanwhile, will take the opportunity to close the gap with the United States in the race to develop AI. 

It seems that hardly a 24-hour news cycle passes without a story about the latest social media controversy. We worry about who has our information, who knows our buying or searching habits, and who may be turning that information into targeted ads for products or politicians. Calls for stricter control and protection of privacy and for greater transparency follow. Europe will implement a new set of privacy regulations later this month – the culmination of a yearslong negotiating process and a move that could eventually ease the way for similar policies in the United States. Individuals, meanwhile, may take their own steps to guard their data. The implications of that reaction could reverberate far beyond our laptops or smartphones: it will handicap the United States in the next leg of the technology race with China.

Big Picture

Artificial intelligence is more than simply a disruptive technology. It is poised to become an anchor for the Fourth Industrial Revolution and to change the factors that contribute to economic growth. As AI develops at varying rates throughout the world, it will influence the global competition underway between the world's great powers.

See The 4th Industrial Revolution

More than a quarter-century after the fall of the Soviet Union, the world is slowly shifting away from a unipolar system. As the great powers compete for global influence, technology will become an increasingly important part of their struggle. The advent of disruptive technologies such as artificial intelligence stands to revolutionize the ways in which economies function by changing the weight of the factors that fuel economic growth. In several key sectors, China is quickly catching up to its closest competitor in technology, the United States. And in AI, it could soon gain an advantage.

Of the major contenders in the AI arena today, China places the least value on individual privacy, while the European Union places the most. The United States is somewhere in between, though recent events seem to be pushing the country toward more rigorous privacy policies. Since the scandal erupted over Cambridge Analytica's use of Facebook data to target political ads in the 2016 presidential election, outcry has been building in the United States among internet users who want greater control over their personal data. But AI runs on data. AI algorithms use robust sets of data to learn, honing their pattern recognition and predictive abilities. Much of that data comes from individuals.

Learning to Read Personal Data

Online platforms such as social media networks, retail sites, search engines and ride-hailing apps all collect vast amounts of data from their users. Facebook collects nearly 200 billion data points across 98 categories. Amazon's virtual assistant, Alexa, tracks numerous aspects of its users' behavior. Medical databases and genealogy websites gather troves of health and genetic information, and the GPS on our smartphones can track our every move. Drawing on this wealth of data, AI applications could evolve to revolutionize aspects of everyday life far beyond online shopping. The data could enable applications to track diseases and prevent or mitigate future outbreaks, to help solve cold criminal cases, to relieve traffic congestion, to better assess risk for insurers, or to increase the efficiency of electrical grids and decrease emissions. The potential productivity gains that these innovations offer would, in turn, boost global economic growth.


To reap the greatest benefit, however, developers can't use just any data. Quality is as important as quantity, and that means ensuring that data collection methods are free of inherent bias. Preselecting participants for a particular data set, for example, would introduce bias to it. Likewise, placing a higher value on privacy, as many countries in the West are doing today, could skew data toward certain economic classes. Not all internet users, after all, will have the resources to pay to use online platforms that better protect personal data or to make informed choices about their privacy.
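To make the selection-bias point concrete, here is a toy simulation (the opt-in rates and group labels are made-up numbers for illustration, not real figures) showing how a dataset built only from users willing to share their data misrepresents the underlying population:

```python
import random

random.seed(0)

# Hypothetical population of 10,000 users, split evenly between two income
# groups, where willingness to share data differs by group (assumed rates).
population = []
for i in range(10_000):
    income = "high" if i % 2 == 0 else "low"
    opt_in_rate = 0.3 if income == "high" else 0.7
    population.append({"income": income, "shares_data": random.random() < opt_in_rate})

# A dataset built only from users who opted in to data collection...
collected = [p for p in population if p["shares_data"]]

# ...over-represents the group more willing to share: the population is
# 50 percent low-income, but the collected sample skews well above that.
share_low = sum(p["income"] == "low" for p in collected) / len(collected)
print(f"low-income share of collected data: {share_low:.2f}")
```

Any model trained on `collected` would learn patterns weighted toward the over-represented group, which is exactly the skew the paragraph above describes.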

Calls for greater transparency in data collection also will pose a challenge for AI developers in the West. The European Union's General Data Protection Regulation, effective May 25, 2018, will tighten restrictions on all companies that handle the data of EU citizens, many of which are headquartered in the United States. The new regulation may prove difficult to enforce in practice, but it will nevertheless force companies around the world to improve their data transparency. And though the United States is still in the best position to take economic advantage of the AI revolution, thanks to its regulatory environment, the growing cultural emphasis on privacy could hinder technological development over the next decade.

The Privacy Handicap

As a general rule, precautionary regulations pose a serious threat to technological progress. The European Union historically has been more proactive than reactive in regulating innovation, a tendency that has done its part in hampering the EU tech sector. The United States, on the other hand, traditionally has fallen into the category of permissionless innovator — that is, a country that allows technological innovations to develop freely before devising the regulations to govern them. This approach has facilitated its rise to the fore in the global tech scene. While the United States still leads the pack in AI, recent concerns about civil liberties could slow it down relative to other tech heavyweights, namely China. The public demands for transparency and privacy aren't going away anytime soon. Furthermore, as AI becomes more powerful, differential privacy — the ability to extract useful aggregate information from a dataset without exposing any individual in it — will become more difficult to preserve.
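For readers unfamiliar with the term, differential privacy is typically achieved by adding calibrated noise to aggregate queries. Below is a minimal sketch of the standard Laplace mechanism for a counting query; the `dp_count` helper and the age data are illustrative assumptions, not a production implementation:

```python
import math
import random

def dp_count(values, predicate, epsilon=1.0, rng=None):
    """Differentially private count: the true count plus Laplace noise with
    scale 1/epsilon (a counting query changes by at most 1 per person)."""
    rng = rng or random.Random()
    true_count = sum(1 for v in values if predicate(v))
    # Sample Laplace(0, 1/epsilon) noise via inverse transform sampling.
    u = rng.random() - 0.5
    noise = -(1 / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Hypothetical record set: ages of 1,000 users, drawn uniformly for illustration.
seeded = random.Random(7)
ages = [seeded.randrange(18, 91) for _ in range(1000)]

# An analyst learns roughly how many users are 65 or older, but the noise
# masks whether any single individual's record is in the data.
print(dp_count(ages, lambda a: a >= 65, epsilon=0.5))
```

Smaller `epsilon` means more noise and stronger privacy; as the paragraph notes, more powerful AI makes sustaining such guarantees harder, because many noisy queries can be combined to erode them.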

These are issues that China doesn't have to worry about yet. For the most part, Chinese citizens don't have the same sensitivity about matters of individual privacy as their counterparts in the West. And China is emerging as a permissionless innovator, like the United States. Chinese privacy protections are vague and give the state wide latitude to collect information for security purposes. As a result, the government and the companies working with it have more of the information they need to make their own AI push, which President Xi Jinping has highlighted as a key national priority. Chinese tech giants Baidu, Alibaba and Tencent are all heavily invested in AI and are working to gather as much data as possible to build their AI empires. Together, these factors could help China gain ground on its competition.

In the long run, however, privacy is likely to become a greater priority in China. Despite their history of intellectual property violations against the West, Chinese corporations value the secrecy of their own innovations and will take pains to protect them. In addition, the country's younger generations and growing middle class probably will have more of an interest in securing their personal information. A recent art exhibit in China displayed the online data of more than 300,000 individuals, indicating a growing awareness of internet privacy among the country's citizenry.

Even so, over the course of the next decade, the growing concern in the West over privacy could hobble the United States in the AI race. The push for stronger privacy protections may decrease the quality of the data U.S. tech companies use to train and test their AI applications. But the playing field may well even out again. As AI applications continue to improve, more people in the United States will come to recognize their wide-ranging benefits in daily life and in the economy. The value of privacy is constantly in flux; the modern-day notion of a "right to privacy" didn't take shape in the United States until the mid-20th century. In time, U.S. citizens may once again be willing to sacrifice their privacy in exchange for a better life.

Rebecca Keller focuses on areas where science and technology intersect with geopolitics. This diverse area of responsibility includes changes in agricultural technology and water supplies that affect global food supplies, nanotechnology and other developments.


Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69401
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #67 on: January 14, 2019, 01:35:09 PM »
Very interesting and scary piece on AI on this week's "60 Minutes"-- worth tracking down.










Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69401
    • View Profile
Jordan Peterson on Depression
« Reply #78 on: December 30, 2019, 06:39:10 AM »




Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69401
    • View Profile
Why do smart people do foolish things?
« Reply #82 on: April 03, 2020, 07:50:27 AM »
Why Do Smart People Do Foolish Things?
Intelligence is not the same as critical thinking—and the difference matters.
Scientific American | Heather A. Butler

We all probably know someone who is intelligent but does surprisingly stupid things. My family delights in pointing out times when I (a professor) make really dumb mistakes. What does it mean to be smart or intelligent? Our everyday use of the term is meant to describe someone who is knowledgeable and makes wise decisions, but this definition is at odds with how intelligence is traditionally measured. The most widely known measure of intelligence is the intelligence quotient, more commonly known as the IQ test, which includes visuospatial puzzles, math problems, pattern recognition, vocabulary questions and visual searches.

The advantages of being intelligent are undeniable. Intelligent people are more likely to get better grades and go farther in school. They are more likely to be successful at work. And they are less likely to get into trouble (for example, commit crimes) as adolescents. Given all the advantages of intelligence, though, you may be surprised to learn that it does not predict other life outcomes, such as well-being. You might imagine that doing well in school or at work might lead to greater life satisfaction, but several large-scale studies have failed to find evidence that IQ impacts life satisfaction or longevity. University of Waterloo psychologist Igor Grossmann and his colleagues argue that most intelligence tests fail to capture real-world decision-making and our ability to interact well with others. This, in other words, is perhaps why “smart” people do “dumb” things.

The ability to think critically, on the other hand, has been associated with wellness and longevity. Though often confused with intelligence, critical thinking is not intelligence. Critical thinking is a collection of cognitive skills that allow us to think rationally in a goal-oriented fashion, and a disposition to use those skills when appropriate. Critical thinkers are amiable skeptics. They are flexible thinkers who require evidence to support their beliefs and recognize fallacious attempts to persuade them. Critical thinking means overcoming all kinds of cognitive biases (for instance, hindsight bias or confirmation bias).

Critical thinking predicts a wide range of life events. In a series of studies, conducted in the U.S. and abroad, my colleagues and I have found that critical thinkers experience fewer bad things in life. We asked people to complete an inventory of life events and take a critical thinking assessment (the Halpern Critical Thinking Assessment). The assessment measures five components of critical thinking: verbal reasoning, argument analysis, hypothesis testing, probability and uncertainty, and decision-making and problem-solving.

The inventory of negative life events captures different domains of life such as academic (for example, “I forgot about an exam”), health (“I contracted a sexually transmitted infection because I did not wear a condom”), legal (“I was arrested for driving under the influence”), interpersonal (“I cheated on my romantic partner who I had been with for more than a year”), financial (“I have over $5,000 of credit-card debt”), and so on. Repeatedly, we found that critical thinkers experience fewer negative life events. This is an important finding because there is plenty of evidence that critical thinking can be taught and improved.

Is it better to be a critical thinker or to be intelligent? My latest research pitted critical thinking and intelligence against each other to see which was associated with fewer negative life events. People who were strong on either intelligence or critical thinking experienced fewer negative events, but critical thinkers did better.

Intelligence and improving intelligence are hot topics that receive a lot of attention. It is time for critical thinking to receive a little more of that attention. Keith E. Stanovich wrote an entire book in 2009 about What Intelligence Tests Miss. Reasoning and rationality more closely resemble what we mean when we say a person is smart than spatial skills and math ability do. Furthermore, improving intelligence is difficult, because intelligence is largely determined by genetics. Critical thinking, though, can improve with training, and the benefits have been shown to persist over time. Anyone can improve their critical thinking skills. Doing so, we can say with certainty, is a smart thing to do.

Heather A. Butler is an assistant professor in the psychology department at California State University, Dominguez Hills. Her numerous research interests include critical thinking, advanced learning technologies, and the use of psychological science to prevent wrongful convictions.

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69401
    • View Profile
China exporting its AI panopticon
« Reply #89 on: August 21, 2020, 11:48:43 AM »
Sent to me by someone with professional military interest in this subject:

https://www.theatlantic.com/magazine/archive/2020/09/china-ai-surveillance/614197/


Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69401
    • View Profile
Re: Intelligence and Psychology, Artificial Intelligence
« Reply #94 on: December 08, 2020, 09:30:48 AM »


DougMacG

  • Power User
  • ***
  • Posts: 18252
    • View Profile
Artificial Intelligence will work with people, not replace them
« Reply #96 on: January 22, 2021, 08:01:41 AM »
These are the top MIT AI people.

https://news.mit.edu/2021/3-questions-thomas-malone-daniela-rus-how-work-will-change-ai-0121
Massachusetts Institute of Technology

3 Questions: Thomas Malone and Daniela Rus on how AI will change work
MIT Task Force on the Work of the Future releases research brief "Artificial Intelligence and the Future of Work."
MIT Task Force on the Work of the Future
Publication Date:January 21, 2021
As part of the MIT Task Force on the Work of the Future’s series of research briefs, Professor Thomas Malone, Professor Daniela Rus, and Robert Laubacher collaborated on "Artificial Intelligence and the Future of Work," a brief that provides a comprehensive overview of AI today and what lies at the AI frontier.

The authors delve into the question of how work will change with AI and provide policy prescriptions that speak to different parts of society. Thomas Malone is director of the MIT Center for Collective Intelligence and the Patrick J. McGovern Professor of Management in the MIT Sloan School of Management. Daniela Rus is director of the Computer Science and Artificial Intelligence Laboratory, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, and a member of the MIT Task Force on the Work of the Future. Robert Laubacher is associate director of the MIT Center for Collective Intelligence.

Here, Malone and Rus provide an overview of their research.

Q: You argue in your brief that despite major recent advances, artificial intelligence is nowhere close to matching the breadth and depth of perception, reasoning, communication, and creativity of people. Could you explain some of the limitations of AI?

Rus: Despite recent and significant strides in the AI field, and great promise for the future, today’s AI systems are still quite limited in their ability to reason, make decisions, and interact reliably with people and the physical world. Some of today’s greatest successes are due to a machine learning method called deep learning. These systems are trained on vast amounts of data that must be manually labeled. Their performance depends on the quantity and quality of the data used to train them: the larger the training set, the better the network's performance, and, in turn, the better the product that relies on the machine learning engine. But training large models has a high computation cost. And bad training data leads to bad performance: when the data is biased, the system's responses propagate the same bias.

Another limitation of current AI systems is robustness. Current state-of-the-art classifiers achieve impressive performance on benchmarks, but their predictions tend to be brittle. Specifically, inputs that were initially classified correctly can become misclassified once a carefully constructed but indiscernible perturbation is added to them. An important consequence of the lack of robustness is the lack of trust. One of the worrisome factors about the use of AI is the lack of guarantee that an input will be processed and classified correctly. The complex nature of training and using neural networks leads to systems that are difficult for people to understand. The systems are not able to provide explanations for how they reached decisions.
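The brittleness Rus describes can be illustrated with a toy linear model: a small, deliberately chosen perturbation flips a correct prediction even though the input barely changes. The weights and inputs below are made-up numbers chosen to show the effect, a sketch of the idea behind fast-gradient-style attacks rather than an attack on a real network:

```python
# Toy linear "classifier": score = w . x + b, predict class 1 if score > 0.
w = [0.5, -1.2, 0.8]
b = 0.1

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

x = [1.0, 0.2, 0.4]   # correctly classified as class 1

# Adversarial nudge: move every feature a small step in the direction that
# most decreases the score -- for a linear model, that is just sign(w).
eps = 0.3
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(predict(x))      # 1
print(predict(x_adv))  # 0 -- a small, structured nudge flips the label
```

A random nudge of the same size would almost never flip the label; it is the carefully chosen direction that makes the perturbation effective, which is why such inputs are so hard to defend against.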

Q: What are the ways AI is complementing, or could complement, human work?

Malone: Today’s AI programs have only specialized intelligence; they’re only capable of doing certain specialized tasks. But humans have a kind of general intelligence that lets them do a much broader range of things.

That means some of the best ways for AI systems to complement human work is to do specialized tasks that computers can do better, faster, or more cheaply than people can. For example, AI systems can be helpful by doing tasks such as interpreting medical X-rays, evaluating the risk of fraud in a credit card charge, or generating unusual new product designs.

And humans can use their social skills, common sense, and other kinds of general intelligence to do things computers can’t do well. For instance, people can provide emotional support to patients diagnosed with cancer. They can decide when to believe customer explanations for unusual credit card transactions, and they can reject new product designs that customers would probably never want.

In other words, many of the most important uses of computers in the future won’t be replacing people; they’ll be working with people in human-computer groups — “superminds” — that can do things better than either people or computers alone could do.

The possibilities here go far beyond what people usually think of when they hear a phrase like “humans in the loop.” Instead of AI technologies just being tools to augment individual humans, we believe that many of their most important uses will occur in the context of groups of humans — often connected by the internet. So we should move from thinking about humans in the loop to computers in the group.

Q: What are some of your recommendations for education, business, and government regarding policies to help smooth the transition of AI technology adoption?

Rus: In our report, we highlight four types of actions that can reduce the pain associated with job transitions: education and training, matching jobs to job seekers, creating new jobs, and providing counseling and financial support to people as they transition from old to new jobs. Importantly, we will need partnership among a broad range of institutions to get this work done.

Malone: We expect that — as with all previous labor-saving technologies — AI will eventually lead to the creation of more new jobs than it eliminates. But we see many opportunities for different parts of society to help smooth this transition, especially for the individuals whose old jobs are disrupted and who cannot easily find new ones.

For example, we believe that businesses should focus on applying AI in ways that don’t just replace people but that create new jobs by providing novel kinds of products and services. We recommend that all schools include computer literacy and computational thinking in their curricula, and we believe that community colleges should offer more reskilling and online micro-degree programs, often including apprenticeships at local employers.

We think that current worker organizations (such as labor unions and professional associations) or new ones (perhaps called “guilds”) should expand their roles to provide benefits previously tied to formal employment (such as insurance and pensions, career development, social connections, a sense of identity, and income security).

And we believe that governments should increase their investments in education and reskilling programs to make the American workforce once again the best-educated in the world. And they should reshape the legal and regulatory framework that governs work to encourage creating more new jobs.

