Author Topic: The Goolag, Facebook, Youtube, Amazon, Twitter, Gov censorship via Tech Octopus

Body-by-Guinness

Faucing Zuckers v. America
« Reply #1053 on: April 02, 2024, 04:59:19 PM »
Check out this exchange between Marky and Tony. It doesn’t get more clear-cut than this:

https://x.com/DrJBhattacharya/status/1775221797098852545?s=20

Crafty_Dog

Clear as day, and it will go unseen and unnoticed by most people.


Crafty_Dog

FO: FBI and CISA resume disinfo campaign ahead of election
« Reply #1056 on: May 09, 2024, 02:27:21 PM »

(3) FBI, CISA RESUME DISINFORMATION WORK WITH SOCIAL MEDIA AHEAD OF ELECTION: At the RSA Conference yesterday, Senator Mark Warner (D-VA) said federal agencies including the Cybersecurity and Infrastructure Security Agency (CISA) and the FBI resumed communications with social media companies after the Supreme Court appeared to favor the Biden administration's argument in Murthy v. Missouri earlier this year.

“The secretary delivered a very clear message that we view interference in our domestic democratic process as dangerous and unacceptable,” Cyberspace and Digital Policy Ambassador Nathaniel Fick said during the conference.

Why It Matters: Expect social media censorship to increase as the election nears. Many tech company Trust & Safety divisions, which handle requests from government agencies to deemphasize or take down posts, are staffed by former government officials, and they have coordinated in the past with CISA and the FBI. – R.C.

Body-by-Guinness

New “Censorship Industrial Complex” Revelations Looming?
« Reply #1057 on: May 21, 2024, 04:35:23 PM »
Given Taibbi's work breaking the Twitter Files, I suspect this portends some very interesting revelations:

Note to Readers: That Eerie Silence
Getcha popcorn ready.
MATT TAIBBI
MAY 21, 2024

“THE AI ELECTION”: Forget Russians, domestic terrorists, or “Coordinated Inauthentic Behavior.” This year’s censorship hobby horse is AI

Racket readers may have noticed it’s been a bit quiet in here of late. That’s because I’ve been spending the last few weeks on an investigative series in cooperation with another site. What seemed like a cut-and-dried report turned into a bit of a rabbit hole on us; hence the delay.

When I first started publishing the Twitter Files in 2022-2023 along with Michael Shellenberger, Bari Weiss, Lee Fang, David Zweig, Paul Thacker, and others, there was an emphasis on speed. Once we saw phrases like “flagged by DHS,” I knew the project was temporary, and guessed we’d probably need to stay ahead of the news cycle in order to avoid seeing material drown in blowback. So, we set aside some explosive bigger-picture storylines to focus on things that could be confirmed and published quickly. There were also topics we didn’t fully understand at the time.

Some of those broader stories will begin coming out now, hopefully starting this week. There’s a reason for working back through this material now. Sources tell me at least two different active groups are working on political content moderation programs for the November election that tactically would go a step or two beyond what we observed with groups like Stanford’s Election Integrity Partnership, proposing not just deamplification or removals, but fakery, use of bots, and other “offensive” forms of manipulation.

If the recent rush of news stories about the horror of foreign-inspired AI deepfakes (“No one can stop them,” gasps the Washington Post) creating intolerable risk to the coming “AI election” sounds a bit off to you, you’re not alone. This is one of many potential threats pro-censorship groups are playing up in hopes of deploying more aggressive “counter-messaging” tools. Some early proposals along those lines are in the unpublished Twitter Files documents we’ve been working on. Again, more on this topic soon.

Also: beginning around the time we published the “Report on the Censorship-Industrial Complex,” Racket in partnership with UndeadFOIA began issuing Freedom of Information requests in bulk. The goal was to identify inexcusably secret contractors of content-policing agencies like the State Department’s Global Engagement Center. The FOIA system is designed to exhaust citizens, but our idea was to match the irritating resolve of FOIA officers by pre-committing resources for inevitable court disputes, fights over production costs, etc. Thanks to UndeadFOIA’s great work, we now have a sizable library of documents about publicly-funded censorship programs (and a few private ones scooped up in official correspondence).

We’ll be releasing those, too, focusing on a few emails per batch, and publishing the rest in bulk. There’s so much material that a quick global summary here would be difficult, but suffice it to say that the anti-disinformation/content control world is much bigger than I thought, enjoying cancer-like growth on campuses in particular, in the same way military research became a primary source of grants and took over universities in the fifties and sixties. Some of these FOIA documents are damning, some entertaining, some just interesting, but all of them belong to the public. We’re going to start the process of turning them over, hopefully today.

In any case, thanks to Racket readers for their patience. I’m very appreciative of the commitment every subscriber makes, especially in this narrowing media environment, which is why I want to make sure readers understand what’s usually going on when things go dark around here. My idea of a vacation is one or two days. If you don’t hear from me for six, I’m working on something. Back soon, and thanks again.

https://www.racket.news/p/note-to-readers-that-eerie-silence

Crafty_Dog

Feel free to double post this in the Deep State thread too.

Body-by-Guinness

Social Media Censorship Blueprint
« Reply #1059 on: Today at 05:29:27 PM »
Just Security is a reliable Deep State mouthpiece. As such, this post of theirs likely serves as a blueprint for what we will see as the 2024 election looms:

Tech Platforms Must Do More to Avoid Contributing to Potential Political Violence
Just Security / by Yaël Eisenstat / May 22, 2024 at 10:05 AM
This essay is co-published with Tech Policy Press.

At the end of March, we convened a working group of experts on social media, election integrity, extremism, and political violence to discuss the relationship between online platforms and election-related political violence. The goal was to provide realistic and effective recommendations to platforms on steps they can take to ensure their products do not contribute to the potential for political violence, particularly in the lead-up to and aftermath of the U.S. general election in November, but with implications for states around the world.

Today, we released a paper titled “Preventing Tech-Fueled Political Violence: What online platforms can do to ensure they do not contribute to election-related violence,” which represents the consensus of the working group. Given the current threat landscape in the United States, we believe this issue is urgent. While relying on online platforms to “do the right thing” without the proper regulatory and business incentives in place may seem increasingly futile, we believe there remains a critical role for independent experts to play in both shaping the public conversation and shining a light on where these companies can act more responsibly.

Indications of potential political violence mount

The January 6th, 2021, attack on the U.S. Capitol looms large over the 2024 election cycle. Former President Donald Trump and many Republican political elites continue to advance false claims about the outcome of the 2020 election, a potential predicate to efforts to delegitimize the outcome of the vote this November.

Yet such rhetoric is but one potential catalyst for political violence in the United States this political season. In a feature on the subject this month, The New York Times noted that across the country, “a steady undercurrent of violence and physical risk has become a new normal,” particularly targeting public officials and democratic institutions. A survey from the Brennan Center conducted this spring found that 38% of election officials have experienced violent threats. To this already menacing environment, add conflict over Israel-Gaza protests on college campuses and in major cities, potentially controversial developments in the various trials of the former president, and warnings from the FBI and the Department of Homeland Security about potential threats to LGBTQ+ Pride events this summer. It would appear that the likelihood of political violence in the United States is, unfortunately, elevated.

The neglect of tech platforms may exacerbate the situation

What role do online platforms play in this threat environment? It is unclear if the major platforms are prepared to meet the moment. A number of platforms have rolled back moderation policies on false claims of electoral fraud, gutted trust and safety teams, and appear to be sleepwalking into a rising tide of threats to judges and election officials. These developments suggest the platforms have ignored the lessons of the last few years, both in the United States and abroad. For instance, two years after January 6th, supporters of Brazil’s outgoing president Jair Bolsonaro used social media to organize and mobilize attacks on governmental buildings. And a Center for American Progress study of the 2022 U.S. midterm elections concluded that “social media companies have again refused to grapple with their complicity in fueling hate and informational disorder…with key exceptions, companies have again offered cosmetic changes and empty promises not backed up by appropriate staffing or resources.”

Platforms’ failure to prepare for election violence suggests that in many ways, 2024 mirrors 2020. In advance of that election, two of the authors (Eisenstat and Kreiss) convened a working group of experts to lay out what platforms needed to do to protect elections. Sadly, platforms largely ignored these and many other recommendations from independent researchers and civil society groups, including enforcing voting misinformation restrictions against all users (including political leaders), clearly refuting election disinformation, and amplifying reliable electoral information. The failure of platforms to adequately follow such recommendations helped create the context for January 6th, as documented by the draft report on the role of social media in the assault on the Capitol prepared by an investigative team of the House Select Committee to Investigate the January 6th Attack on the United States Capitol.

Recommendations

To avoid a similar outcome, we propose a number of steps the platforms can, and should, take if they want to ensure they do not fuel political violence. None of the recommendations are entirely novel. In fact, a number of them are congruent with any number of papers that academics and civil society leaders have published over the years. And yet, they bear repeating, even though time is short to implement them.

The full set of seven recommendations and details can be found in our report, but in general they center on a number of themes where online platforms are currently falling short, including:

• Platforms must develop robust standards for threat assessment and engage in scenario planning, crisis training, and engagement with external stakeholders, with as much transparency as possible.
• Platforms should enforce clear and actionable content moderation policies that address election integrity year-round, proactively addressing election denialism and potential threats against election workers.
• Politicians and other political influencers should not receive exemptions from content policies or special treatment from the platforms. Platforms should enforce their rules uniformly.
• Platforms must clearly explain important content moderation decisions during election periods, ensuring transparency especially when it comes to the moderation of high-profile accounts.

This election cycle, so much of the conversation about tech accountability has moved on to what to do about deceptive uses of AI. But the distribution channels for AI-generated content still run largely through the online platforms where users spread the “Stop the Steal” narrative in 2020 and galvanized the people who ultimately engaged in political violence at the U.S. Capitol. We will continue to draw attention to these unresolved issues, in the hope that rising demands for accountability will prompt platforms to act more responsibly and prioritize the risk of political violence both in the United States and abroad.
