Author Topic: The Goolag, Facebook, Youtube, Amazon, Twitter, Gov censorship via Tech Octopus  (Read 103239 times)

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 62817
    • View Profile
Tail wags for the props  8-)

To be precise, I use "Pravda" thus

a) "the Pravdas"-- the whole lot of them-- print and media
b) Pravda on the Hudson/Potomac/Beach for the NYT, WaPo, and the LA Times.


Crafty_Dog

Smoking gun?!?
« Reply #952 on: December 15, 2022, 03:48:47 PM »
third

https://www.theepochtimes.com/documents-uncover-secret-twitter-portal-us-government-used-to-censor-covid-19-content_4924270.html?utm_source=Goodevening&src_src=Goodevening&utm_campaign=gv-2022-12-15&src_cmp=gv-2022-12-15&utm_medium=email&est=UT3y%2B%2BOPmxRRshXxf6NQKO5%2B2TXU1qb1BudBoDwTa%2FxJdPT05mLuxpvroqrxSZNRdgO8

Documents Uncover Secret Twitter Portal US Government Used to Censor COVID-19 Content
Elon Musk calls it 'extremely concerning'
By Patricia Tolson | December 15, 2022 | Updated: December 15, 2022


New documents reveal how the United States government used a secret Twitter portal to censor COVID-19 content that contradicted the government’s narrative.

In its ongoing probe into Twitter’s censorship practices, America First Legal has obtained a fourth set of documents (pdf) exposing a secret Twitter portal, which U.S. government officials used to censor dissenting COVID-19 views in violation of the First Amendment. It’s a revelation Elon Musk described as “extremely concerning.”

The documents reveal that the Centers for Disease Control and Prevention (CDC) was collaborating with UNICEF, the World Health Organization, and Mafindo to mitigate “disinformation.” Mafindo is a Facebook third-party fact-checking partner based in Indonesia. It is funded by Google and is known to have censored searches for keywords like “coronavirus” and “COVID-19,” as well as blocking information regarding adverse reactions and deaths caused by COVID-19 vaccines. Facebook started its third-party fact-checking program in 2016, working with fact-checkers from around the world who are certified by the International Fact-Checking Network (IFCN) at Poynter to rate and review the accuracy of content on their platform. According to the IFCN website, they believe “nonpartisan and transparent fact-checking can be a powerful instrument of accountability journalism.” However, the U.S.-based representatives on its advisory board appear to come from liberal-leaning outlets such as the Washington Post and PolitiFact, which is owned by Poynter.

The Twitter Portal
A March 10, 2021 email from a U.S. Public Policy employee at Facebook to several CDC employees spoke of the social media giant’s “weekly sync with CDC” and how the CDC was “to invite other agencies as needed.”

A March 24, 2021 email from the same Facebook employee to CDC employees said “this is my regular FB meeting and they would like to discuss 2 misinformation topics” and “misinformation that was removed.”

On May 10, 2021, a Twitter employee recommended that a CDC official enroll in Twitter’s Partner Support Portal, which he described as “the best way to get a spreadsheet like this reviewed.”

On May 11, 2021, the CDC official enrolled her personal Twitter account into Twitter’s Partner Support Portal, which allowed “a special, expedited reporting flow in the Twitter Help Center.”

A May 19, 2021 Facebook Community Standards manual reveals how the company works with lawmakers and legal counsel, as well as human rights activists, in developing policies toward its goals of “bringing 50 million people a step closer to vaccinations” while “combatting COVID-19 and vaccine misinformation” and “overcoming global challenges in vaccination.” Methods used to accomplish this included removing “false information that has been debunked by public health experts” and rejecting ads that violate its policies, “including those that discourage vaccination.” The company also reduced the distribution of “misleading claims rated by independent fact-checkers.”

Removed Content
Posts that Facebook would delete—those the CDC or any other public health authority deemed “false and likely to contribute to imminent violence or physical harm”—included:

Claims that COVID-19 is no more dangerous than the common flu or cold.
Claims that COVID-19 cannot be transmitted in certain climates, weather conditions, or locations.
Claims that for the average person, something can guarantee prevention from getting COVID-19 or can guarantee recovery from COVID-19 before such a cure or prevention has been approved.
Claims that COVID-19 tests cause cancer.
Claims about the availability or existence of COVID-19 vaccines.
Claims about the safety or serious side effects of COVID-19 vaccines.
Claims about the efficacy of COVID-19 vaccines.
Claims about how the COVID-19 vaccine was developed or its ingredients.
Claims involving conspiracy theories about COVID-19 vaccines or vaccine programs.
Content deemed to have been “debunked” included, among other things, claims that “vaccines cause the disease against which they meant to protect, or cause the person to be more likely to get the disease,” that “natural immunity is safer than vaccine acquired immunity,” and that “vaccines are not effective to prevent the disease against what they purport to protect.”

Repeat offenders would face restrictions “including (but not limited to) reduced distribution, removal from recommendations” or removal from the site.

These punishments were conducted despite evidence that the vaccines do not prevent transmission, that vaccines cause adverse effects and even death, and that more vaccinated people are now dying than unvaccinated. Even the CDC admitted in June that vaccinated people could contract the disease again.

‘The Coolest Misinformation Fighting Speakeasy’
In August 2021, the head of Google’s News Lab for the Asia-Pacific region (APAC) emailed a CDC vaccine confidence strategist to invite her to the APAC “Trusted Media Summit.” The CDC strategist then emailed the event planner for Google’s APAC Trusted Media Summit, noting her excitement over being invited to what she referred to as “the coolest misinformation fighting speakeasy.”

The same CDC employee was then asked to give a keynote at the summit, addressing how the CDC was working with the WHO and other international organizations to address a so-called “infodemic” and using “social inoculation” to mitigate it.


An Oct. 28 email from a CDC employee to the Twitter team concerned preliminary discussions of the CDC’s plans for communicating guidance regarding “pediatric vaccines.”

A Nov. 2, 2021 email from a Facebook employee to multiple CDC employees was related to “vaccine misinformation” relative to the Emergency Use Authorization (EUA). The Facebook employee informed the CDC employees that they had “launched a new feature in Instagram, where accounts that repeatedly post content that violates our policies on COVID-19 or vaccine misinformation may now lose the ability to be tagged or mentioned or may see pop-ups asking if they’d like to delete certain posts that may violate our policies.”

Finally, Facebook boasts of removing over 16 million posts on Facebook and Instagram, including over 2 million between February and May 2021.

AFL’s first release of documents revealed explicit collusion between the CDC and Big Tech to censor what the Biden administration deemed “misinformation” and to push covert COVID-19 propaganda. AFL’s second release built the evidentiary record showing that the CDC sent Facebook and Twitter specific posts to take down, throttle, censor, or flag. AFL’s third release revealed that the CDC’s mask guidance policies for school children were driven by political polling from the liberal dark-money group the Kaiser Family Foundation.

The Epoch Times reached out to Facebook for comment.

Crafty_Dog

NRO: More FBI-Twitter smoking gun
« Reply #953 on: December 16, 2022, 04:58:39 PM »
FBI Was in Constant Contact with Twitter ‘Trust and Safety’ Team, Documents Show

By CAROLINE DOWNEY
December 16, 2022 5:31 PM

The FBI frequently communicated with Twitter’s Trust and Safety team before Elon Musk acquired the company, the sixth installment of the “Twitter Files” exposé series reveals.

Between January 2020 and November 2022, over 150 emails were exchanged between the FBI and former Twitter Trust and Safety head Yoel Roth, journalist Matt Taibbi reported. Roth, who resigned shortly after Musk’s takeover, led the team responsible for suppressing the New York Post’s Hunter Biden laptop bombshell story on the platform.

Some of those virtual conversations involved the FBI asking for information about Twitter users that related to active investigations. But in a significant number of instances, the agency allegedly demanded Twitter crack down on election ‘misinformation,’ Taibbi noted.

The FBI’s social-media task force, created after the 2016 election, expanded to 80 agents and collaborated with Twitter to find foreign election meddling.

The latest batch of documents indicates a pattern, Taibbi said, of a government body aggressively exerting pressure on Twitter to moderate certain content. As late as November 2022, the FBI’s San Francisco branch reached out in an email, addressed “Hello Twitter Contacts,” to flag accounts that could violate internal terms of service.

However, Taibbi notes that the FBI’s scrutiny sometimes applied to left-leaning accounts too. Media personality Dr. Claire Foster, who runs a quasi-parody account, also caught the attention of the San Francisco branch’s private-sector engagement squad in November for joking about manipulating ballots on behalf of Democrats.


“Anyone who cannot discern obvious satire from reality has no place making decisions for others or working for the feds,” Foster told Taibbi when she learned her account was flagged.

Four of the six accounts the FBI flagged to Twitter were ultimately suspended.

The greatest FBI interference came on November 5 of this year, however, when the agency’s National Election Command Post sent the San Francisco field office a list of 25 accounts that it believed “may warrant additional action.” NECP expressed concerns that the accounts were engaging in misinformation about the upcoming midterm election on November 8.

Twitter largely obliged the warnings, taking disciplinary steps against more than a dozen of the flagged accounts. Seven accounts were permanently deplatformed, one account was temporarily locked out for spam behaviors, and nine accounts had tweets bounced for “civic misinformation policy violations,” according to Taibbi’s files.
« Last Edit: December 16, 2022, 05:07:46 PM by Crafty_Dog »

Crafty_Dog

NRO: The best hope for a better Twitter
« Reply #954 on: December 16, 2022, 05:08:16 PM »
The Best Hope for a Better Twitter

Elon Musk speaks before the start of the SpaceX Hyperloop Pod Competition in Hawthorne, Los Angeles, Calif., January 29, 2017. (Monica Almeida/Reuters)
By CHARLES C. W. COOKE
December 16, 2022 2:12 PM
Now as ever, market forces beat government intervention as an answer to the social-media platform’s problems.
I have maintained the same view of Twitter, and of social media in general, for the last decade. That view is:

That Section 230 and the First Amendment are good laws, and ought to be respected;
That our social-media companies are standard-issue private corporations, and are thereby able to moderate themselves as they see fit;
That, as a consumer, I would like to see them prioritize free expression, but;
They don’t have to.
This was my view five years ago, and it was my view five minutes ago. It was my view before Elon Musk bought Twitter, and it remained my view after Elon Musk bought Twitter. It was my view when conservative accounts were being suspended for stupid reasons, and it is my view now that progressive accounts are being suspended for stupid reasons. Twitter’s decisions are not mine to make.

But, of course, not everyone agrees with this view. Some people believe that the First Amendment does not apply to Twitter, or hope to gut or outright repeal Section 230. Some believe that Twitter isn’t, in any meaningful sense, a “private company.” Some do not want Twitter to prioritize free expression; they want it to moderate its users heavily in the name of “safety” or fighting “misinformation” or stamping out “hate.”

Given the events of last night, I have a question for all these people: Now what?

Yesterday, Elon Musk suspended a bunch of journalists from Twitter because they were pointing users toward the location of his private plane. Almost to a man, those journalists were outraged by this move. So were their friends. But why? Almost none of them want Twitter to be a place that prioritizes free expression, and almost none of them made a fuss when their ideological opponents were being kicked off the platform under the most frivolous of pretexts. On what neutral principle could their case for a personal exemption possibly rest?

A few years ago, Twitter nixed a bunch of accounts for tweeting the words “learn to code” at members of the press, and when this happened, the people who are now complaining about Twitter’s capricious moderation policies said . . . absolutely nothing. I agree with Phil Klein that Musk is a hypocrite on free speech. But I also think that this hypocrisy can only matter to the people who care about free speech themselves. Most of the journalists in question do not care about free speech — indeed, if anything, their complaints about Musk have been that he is not going to moderate his platform enough. Well, here he is, heavily moderating Twitter in the ostensible name of “safety.” What’s the problem, guys? What happened to “build your own Twitter” and “the gates of hell” and democracy needing referees? You didn’t think you’d always be in charge of defining the limits of acceptable discourse, did you?

I might ask a similar question of those who ranted wildly about the supposed perniciousness of Section 230 right up until the exact moment that Elon Musk took over Twitter. For years, a bunch of absolute nonsense was spewed about Section 230 — nonsense that, in most cases, served as a cowardly stand-in for the griper’s real objection, which is to the First Amendment. Of these people, I must now ask: Do you still believe all that stuff about “publishers” and “platforms” and the “public square”? Do you still want federal oversight of Twitter? Do you still think that there exists some nebulous right to post? Because, if you do, you realize what would be happening now had you gotten your way over the last few years, right? In response to last night’s suspensions, Joe Biden would have been pushed by figures such as Taylor Lorenz and Alexandria Ocasio-Cortez and Aaron Rupar to use the powers he had been given by Congress to investigate Elon Musk’s decision to suspend their accounts, and, if necessary, to change the moderation criteria in their favor.

The result of this would not have been a Twitter that was neutral, or a Twitter that was friendly to conservatives, but a Twitter that exhibited precisely the same penchant for Calvinball as it did before Musk took over. This week, President Biden made it clear that he considers even thoughtful opposition to the policies he prefers to represent “hate.” In the House of Representatives on Wednesday, a host of America’s worst would-be censors gave a masterclass in explaining why speech they personally dislike is dangerous but speech they favor is not. A federalized Twitter would be a Twitter in which those people decided who gets an account and who does not. Why has all the talk of federal superintendence suddenly gone suspiciously quiet on the right? That’s why.

Now, as ever, the best hope for social media lies in the market. If Twitter is to work properly, it will need to adopt a set of precise rules and stick closely to them. It will also need two prerequisites to obtain: First, people of all ideological persuasions will have to grasp that a capricious system that can be used to destroy their enemies is a capricious system that can be used to destroy them, too. Second, the rulemaking process will have to be kept as far away as possible from the perverse incentives that are created by our fractious, pendulous politics. Though at first glance, last night’s fracas might look like an unfortunate mess, a step in the wrong direction, on closer inspection it represented yet another step toward such a voluntary equilibrium, one that will accommodate — and benefit — partisans on both sides.

It’ll take a while, but we’ll get there eventually.




ccp

  • Power User
  • ***
  • Posts: 15546
    • View Profile
sounds like DNC operatives have bribed FBI and others in government intelligence
with promises of nice "bonuses/salaries" [me -> bribes] after they retire or otherwise

Trump's idea of making sure these people cannot simply go out and work in such big tech jobs
for 7 yrs
seems sound here.



Crafty_Dog

From the trenches of the day-to-day skirmishes
« Reply #959 on: December 19, 2022, 06:31:04 AM »
https://www.zerohedge.com/political/fbi-whistleblower-slams-ted-lieu-says-he-was-moved-child-porn-cases-focus-j6?utm_source=&utm_medium=email&utm_campaign=1138

When I was in CA, Lieu was my Congressman.  What a smarmy little cunt he was and is!  Good to see Taibbi double down in the interaction!






Crafty_Dog

Taibbi
« Reply #965 on: December 30, 2022, 05:52:47 AM »
Note From San Francisco
On the way home after the holidays, notes on "cherry-picking" and a few other odds and ends
MATT TAIBBI
DEC 30
 

Having seen the redwoods with the boys by day, sampled dim sum last evening, and overdosed nights on San Francisco movies (Bullitt, Vertigo, the underrated Zodiac), I’m headed home tonight. A terrific trip, which I won’t forget.

In the coming days you’ll find a new thread on Twitter, along with a two-part article here at TK explaining the latest #TwitterFiles findings. Even as someone in the middle of it, naturally jazzed by everything I’m reading, I feel the necessity of explaining why it’s important to keep hammering at this.

Any lawyer who’s ever sifted through a large discovery file will report that the task is like archaeology. You dig a little, find a bit of a claw, dust some more and find a tooth, then hours later it’s the outline of a pelvis bone, and so on. After a while you think you’re looking at something that was alive once, but what?

Who knows? At the moment, all we can do is show a few pieces of what we think might be a larger story. I believe the broader picture will eventually describe a company that was directly or indirectly blamed for allowing Donald Trump to get elected, and whose subjugation and takeover by a furious combination of politicians, enforcement officials, and media then became a priority as soon as Trump took office.

These next few pieces are the result of looking at two discrete data sets, one ranging from mid-2017 to early 2018, and the other spanning from roughly March 2020 through the present. In the first piece focused on that late 2017 period, you see how Washington politicians learned that Twitter could be trained quickly to cooperate and cede control over its moderation process through a combination of threatened legislation and bad press.

In the second, you see how the cycle of threats and bad media that first emerged in 2017 became institutionalized, to the point where a long list of government enforcement agencies essentially got to operate Twitter as an involuntary contractor, heading into the 2020 election. Requests for moderation were funneled mainly through the FBI, the self-described “belly button” of the federal government (not a joke, an agent really calls it that).

The company leadership knew as far back as 2017 that giving in to even one request to suspend this or that set of accused “hostile foreign accounts” would lead to an endless cycle of such demands. “Will work to contain that,” offered one comms official, without much enthusiasm, after the company caved for the first time that year. By 2020, Twitter was living the hell its leaders created for themselves.

What does it all mean? I haven’t really had time to think it over. Surely, though, it means something. I’ve been amused by the accusation that these stories are “cherry-picked.” As opposed to what, the perfectly representative sample of the human experience you normally read in news? Former baseball analytics whiz Nate Silver chimed in on this front:

Nate Silver (@NateSilver538), 3:29 PM, Dec 26, 2022:

“People don't distinguish enough between:

1) A ~random cross-section of communications from person/organization X
2) A cherry-picked set of favorable examples from a ~comprehensive set of communications from X

The threshold to move your priors should be **MUCH** higher under 2.”

What does he mean by “favorable examples”? Take a quote like this, from FBI agent Elvis Chan, telling Yoel Roth at Twitter that “possible violative activity” reports from the intelligence community will come via the Bureau, while a DHS agency will handle the home front.

“We can give you everything we're seeing from the FBI and USIC agencies,” he wrote. “CISA will know what is going on in each state.”

Who wouldn’t pick that cherry? Also, is the implication that another email exists somewhere telling Roth he won’t be getting requests from the “USIC” by way of the FBI? Come on now. This is just silly. It may be early to say exactly what these passages mean, but the emails say what they say, not something else. Let’s at least try to stop lying for a while, see what happens. How bad can it be?

Alright, signing out from the Bay. Thanks, everyone, and see you again from back home.



ccp

working ok for me in NJ

maybe some squad members banned it in Minn.?


Crafty_Dog

Goolag says SCOTUS ruling could upend the internet
« Reply #969 on: January 12, 2023, 02:14:08 PM »
Google Says Supreme Court Ruling Could Potentially Upend the Internet
Tech giant files brief in YouTube case brought by family of woman killed in Paris terrorist attacks

Google says YouTube ‘abhors terrorism and over the years has taken increasingly effective actions to remove terrorist and other potentially harmful content.’
PHOTO: DAVID PAUL MORRIS/BLOOMBERG NEWS
By John D. McKinnon
Updated Jan. 12, 2023 3:48 pm ET



WASHINGTON—A case before the Supreme Court challenging the liability shield protecting websites such as YouTube and Facebook could “upend the internet,” resulting in both widespread censorship and a proliferation of offensive content, Google said in a court filing Thursday.

In a new brief filed with the high court, Google said that scaling back liability protections could lead internet giants to block more potentially offensive content—including controversial political speech—while also leading smaller websites to drop their filters to avoid liability that can arise from efforts to screen content.

“This Court should decline to adopt novel and untested theories that risk transforming today’s internet into a forced choice between overly curated mainstream sites or fringe sites flooded with objectionable content,” Google said in its brief.

Google, a unit of Alphabet Inc., owns YouTube, which is at the center of the case set for oral arguments before the Supreme Court on Feb. 21.


The case was brought by the family of Nohemi Gonzalez, who was killed in the 2015 Islamic State terrorist attack in Paris. The plaintiffs claim that YouTube, a unit of Google, aided ISIS by recommending the terrorist group’s videos to users.


The Gonzalez family contends that the liability shield—enacted by Congress as Section 230 of the Communications Decency Act of 1996—has been stretched to cover actions and circumstances never envisioned by lawmakers. The plaintiffs say certain actions by platforms, such as recommending harmful content, shouldn’t be protected.

The immunity law “is not available for material that the website itself created,” the petitioners wrote in their brief filed in November. “If YouTube were to write on its home page, or on the home page of a user, ‘YouTube strongly recommends that you watch this video,’ that obviously would not be ‘information provided by another information content provider.’ ”

Section 230 generally protects internet platforms such as YouTube, Meta Platforms Inc.’s Facebook and Yelp Inc. from being sued for harmful content posted by third parties on their sites. It also gives them broad ability to police their sites without incurring liability.

The Supreme Court agreed last year to hear the lawsuit, in which the plaintiffs have contended Section 230 shouldn’t protect platforms when they recommend harmful content, such as terrorist videos, even if the shield law protects the platforms in publishing the harmful content.

Google contends that Section 230 protects it from any liability for content posted by users on its site. It also argues that there is no way to draw a meaningful distinction between recommendation algorithms and the related algorithms that allow search engines and numerous other crucial ranking systems to work online, and says Section 230 should protect them all.

“Section 230 is fundamentally the economic backbone of the internet,” said Halimah DeLaine Prado, Google’s general counsel. “A ruling that undermines Section 230 would have significant unintended and harmful consequences.”


In the plaintiffs’ lawsuit, they asserted that YouTube had knowingly permitted ISIS to post hundreds of radicalizing videos. They also alleged that YouTube affirmatively recommended ISIS videos to users.


In its latest filing, Google said YouTube “abhors terrorism and over the years has taken increasingly effective actions to remove terrorist and other potentially harmful content.” Google has questioned the plaintiffs’ factual evidence of YouTube recommendations of terrorist videos.

Google also contested the case on legal grounds, saying that Section 230 barred the Gonzalez family’s claims.

A trial judge and the Ninth Circuit U.S. Court of Appeals agreed with Google. The Supreme Court agreed to review the question of whether Section 230 covers a platform’s recommendations.

The court also has agreed to hear a similar case involving Twitter Inc. as well as Google and Facebook, although that case isn’t expected to focus on Section 230.

Lawmakers and President Biden have long called for modifying Section 230 to address what they say are flaws in the law, but legislation to do so has repeatedly fizzled amid partisan differences.

Meanwhile, Texas and Florida laws targeting alleged online censorship by Big Tech platforms are under separate legal challenges pending before the high court. The industry contends those laws, which seek to tightly regulate the platforms as common carriers, violate the platforms’ First Amendment free-speech rights by curbing their ability to take down or otherwise restrict content.

Write to John D. McKinnon at John.McKinnon@wsj.com


DougMacG

  • Power User
  • ***
  • Posts: 16773
    • View Profile
Seven year cooling off period for (wrongfully) censoring free speech
« Reply #971 on: January 14, 2023, 04:52:11 PM »
Seven year cooling off period for (wrongfully) censoring free speech

Credit where credit is due (Trump), this is actually a pretty good idea.

Like the penalty box in hockey or a time-out for children, you wronged someone, you can sit out for a bit.  Go do something else for a while.  Maybe shut a university down if they stifle free speech on campus.  But especially Google, Twitter, etc.

https://www.rsbnetwork.com/news/trump-hits-back-at-big-tech-calls-for-seven-year-cooling-off-period-for-employees-caught-censoring-free-speech/


Crafty_Dog

WSJ: Barr: Congress must halt Big Tech's Power Grab
« Reply #972 on: January 23, 2023, 07:30:47 AM »
Congress Must Halt Big Tech’s Power Grab
Lawmakers don’t have to rewrite the antitrust laws; instead, these three steps could make a difference.
By William P. Barr
Jan. 22, 2023 3:27 pm ET

Big Tech has far too much power. Lawmakers from both parties agree, but for years Congress has been all talk and no action. Meanwhile, tech giants are threatening to use their control over digital platforms to gain unfair advantage in other markets where competing products depend on access to those platforms.

Over the past 20 years, the scope of commercial and personal activities relying on access to digital platforms has mushroomed. A few giant companies—Google, Apple, Amazon, Facebook—have achieved monopoly or near-monopoly control over key platforms, among them online search and advertising, mobile operating systems, online marketplaces, maps and social media.

All these dominant platforms, though distinctive, pose the same threefold danger. First, they have a chokehold over essential channels of communication and commerce, allowing them to be gatekeepers to the digital world. Second, they vacuum up a trove of personal information about users—what they see, hear, read, think and buy. This raises profound privacy concerns and permits these companies to manipulate users’ beliefs and behavior. Third, they distort the “marketplace of ideas.” The gatekeepers can shape the flow of information to advance their own economic and political agendas.

Case-by-case antitrust litigation alone won’t rein in Big Tech. Unlike regulatory power, which entails actively supervising an overall market and setting uniform rules for it, antitrust litigation is slow. It can target only discrete transactions or instances of wrongdoing by individual companies. That approach can’t produce a coherent response to the multifaceted problems caused by Big Tech’s dominance.

A lot of energy has been misspent arguing whether Big Tech dominance arises from misconduct. That is beside the point. When serious anticompetitive conditions arise in markets providing critical services to the public, regulatory intervention may be needed whether or not those conditions involve misconduct. The U.S. often has regulated markets involving competing networks, such as transportation, media, broadcast, cable and telephone. As in Big Tech, the largest player’s power tends, absent intervention, to snowball quickly into a monopoly.

I argue in my recent book that Congress should adopt a limited regulatory framework opening the markets dominated by Big Tech to more competition. But as Congress has dawdled, another pressing danger has emerged. The tech giants are moving into new markets in which products and services depend on access to their digital platforms. Since their competitors’ products must have access to those platforms—for example, smart home devices need access to Apple and Android mobile platforms—the tech giants can seize an unfair advantage by giving their own products access of better quality or on better terms.

As more products and services depend on digital platforms, the number of markets vulnerable to these tactics rises. Big Tech has made inroads into “smart” homes, automobiles, payment systems and healthcare. The mind boggles at the prospect of the digital aspects of these markets, including mountains of private customer information, being absorbed into the Apple or Google ecosystem.

Congress must act quickly to prohibit the tech giants from unfairly leveraging their dominance into more markets. This doesn’t mean rewriting the antitrust laws but rather taking these three steps:

First, prohibit dominant platforms from giving their own products an unfair advantage by reserving for themselves higher-quality access than they grant competitors. Amazon was charged by the European Union with excluding rival merchants’ products from the “Buy Box”—a valuable space on its website that helps generate sales. Amazon settled the case, agreeing to give competitors’ products the same placement as its own. EU regulators were right, and Congress should prohibit a dominant platform from giving preferential platform access to its own products.

This same rule should apply when the platform provides a function through hardware rather than software. Apple has installed a “near field communication” chip in its phones to complete secure short-distance mobile payments with its Apple Pay service, but it excludes other mobile wallet payment services from access to the chip. This discriminatory access relegates competing payment services to lower-quality connectivity. The EU has said it has an eye on the matter.


Second, prohibit Big Tech from using dominant platforms to extract competitors’ business data and exploiting that data in developing competing products. Last month, Amazon settled EU charges that it gleaned nonpublic information from independent merchants using its platform to inform Amazon’s own competing product offerings.

Apple and Google are entering the car-manufacturing business, and that market illustrates why Big Tech’s ability to extract data from competitors should be limited. Drivers should be able to use their cellphones easily in and outside their cars. Car makers can facilitate this by supporting interoperability between the phone and the car. But Apple and Google shouldn’t be allowed to abuse this capacity to gain access to user and vehicle data.

Third, protect consumer privacy. Congress has yet to address the massive amount of personal information these companies already collect. But the scope and scale of these data are about to explode as Big Tech hoovers up information from our homes, cars, financial transactions, healthcare and other markets. This will be augmented, before long, by a 24/7 flow of video, audio and electronic signals collected by their fleets of autonomous vehicles zipping about our streets. Add to this the capacity to sift data with artificial-intelligence tools, and things start resembling the dystopian surveillance societies portrayed in sci-fi movies.

It is past time for Congress to add real bite to its bark and address the harmful effects of Big Tech’s power. This should be at the top of lawmakers’ priorities for 2023.

Mr. Barr is author of the memoir “One Damn Thing After Another.” He served as U.S. attorney general, 1991-93 and 2019-20.