Author Topic: Privacy, Big Brother (State and Corporate) and the 4th & 9th Amendments  (Read 544049 times)

G M

Re: Privacy, Big Brother (State and Corporate) and the 4th & 9th Amendments
« Reply #1050 on: April 02, 2017, 03:46:37 PM »
Michael Savage mentioned this last week and questioned it. He wondered why they would do this. This is the opposite of personal freedom. I think this is a very bad move. It sounds like a giveaway to business interests while sacrificing everyone else's personal freedom.

So we can scream about the government collecting and spying on us, but it is OK for private firms to collect all our data and sell it for their benefit without us getting any say in the matter?


 :x

Well, you only have the privacy you fight for. Most thoughtlessly give it away every day.

G M

Here’s How Facebook Knows Who You Meet In Real Life
« Reply #1051 on: May 16, 2017, 01:34:37 PM »
http://www.vocativ.com/425482/facebook-tracking-friend-requests/

Here’s How Facebook Knows Who You Meet In Real Life
It may seem like Mark Zuckerberg is personally tracking your every move — but there's another explanation for those creepy friend requests you're getting
By Alejandro Alba
May 16, 2017 at 12:15 PM ET

A couple months ago a friend and I went to Colombia for vacation. While we were at the beach one day, we met a group of people and spent several hours hanging out with them. We never exchanged phone numbers or email addresses, we didn’t share much information about ourselves other than our names and where we lived, and we didn’t connect on social media. I didn’t even have my phone on me at the time. However, when I got back to New York and checked Facebook, I saw that two of the people we met popped up in my “People You May Know” recommendations. Weird, I thought. Actually, it’s creepy. Is Facebook tracking my every step?

Facebook’s brand is based on the community it creates, and its mission is to connect everybody in the world. So it only makes sense that the platform frequently suggests new friends for users to add to their networks. But in the past, the company’s suggestions for connecting users have raised some eyebrows.

For example, take the story about a psychiatrist who claimed her patients were popping up on her list of suggested friends (and on each other's lists) after visiting her office, which is obviously problematic for medical privacy reasons. The psychiatrist is far from the only Facebook user to discover mysterious friend suggestions — for years there have been stories of people who go on dates, attend parties or browse through a bookstore, only to see the people they interacted with in person pop up later among their Facebook suggestions. None of these connections are coincidences, of course. So how does it happen?

Just how the company goes about identifying potential new connections — especially when the users have no obvious digital connection to one another — isn't always clear. According to Facebook, the first and most likely reason for someone to appear in your "People You May Know" list is that one of you searched for the other. So if Angel and Angie (two random strangers) go on a blind date and Angel searches Facebook for Angie's profile but doesn't add her as a friend, Facebook will suggest that each add the other. It only takes one person to trigger the algorithm.
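
As a toy illustration of that trigger, and nothing more (Facebook's real pipeline is not public; every name and function here is invented), the described logic amounts to one search event producing a suggestion in both directions:

# Toy model of the search-triggered suggestion described above: a single
# search is enough to make the suggestion mutual. Entirely hypothetical.

from collections import defaultdict

suggestions = defaultdict(set)  # user -> users to show in "People You May Know"

def on_profile_search(searcher, target):
    # One event, two suggestions: the searcher sees the target,
    # and the target sees the searcher.
    suggestions[searcher].add(target)
    suggestions[target].add(searcher)

on_profile_search("Angel", "Angie")  # Angel looks Angie up after the date
print(suggestions["Angie"])          # {'Angel'} -- Angie gets Angel suggested too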



Another possible explanation is that the two parties shared some piece of digital information, such as an email address or phone number, since most people use one or the other to open a profile. So if you share your contacts with either Facebook or Messenger, someone you recently added to your contacts will be suggested ahead of someone who has been in your phone for years.

A Facebook spokesperson said the company does not see who users text, call or email, so the algorithm can't make friend suggestions based on that. Yet if you use an email app on your phone such as Gmail, you are saving email addresses to your phone, and Facebook will be able to see them — again, only if you're sharing your contacts with Facebook apps.
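
A minimal sketch of contact matching under that description (the normalisation rules and data below are invented; the real matching logic is not public):

# Hypothetical sketch: match an uploaded address book against existing
# accounts by normalised email/phone, as the article describes.

accounts = {
    "angie@example.com": "Angie R.",
    "+15550001111":      "Angel M.",
}

def normalize(entry):
    e = entry.strip().lower()
    if "@" in e:
        return e
    return "+" + "".join(ch for ch in e if ch.isdigit())  # keep digits only

def match_contacts(address_book):
    """Return account holders found in someone's uploaded contacts."""
    return [accounts[normalize(c)] for c in address_book if normalize(c) in accounts]

print(match_contacts(["Angie@Example.com ", "+1 (555) 000-1111"]))
# ['Angie R.', 'Angel M.']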

There is also a theory that Facebook tracks users' web activity, but Facebook denied it in a statement. Facebook apps use web cookies for targeting ads, but not for recommending new friends, a spokesperson said. Facebook also claims it no longer uses location data to rank friend suggestions based on where people live or work. The company said location tracking was just a brief test it ran last year at a small, city-wide scale.



Facebook said that its friend recommendations are based on a variety of factors, including mutual friends, work and education information, groups you're part of, and any digital information stored on your phone that is shared with Facebook. Other than that, Facebook deems all other connections coincidences.

Other social media platforms such as Twitter and Tumblr aren't usually scrutinized over follow suggestions, because those seem to be based on interests and mutual friends rather than on random people you met literally ten minutes ago. Facebook, however, is not the only platform that has received criticism for the questionable ways it suggests new contacts. LinkedIn recently had to issue an apology after pushing an update that told iPhone users it would turn on their Bluetooth to share data with people nearby and "connect" them, even when the app wasn't in use, unless they opted out.

The update has since been "fixed," and LinkedIn apologized for confusing users, since the language in the update did not specify which data was being shared and under what conditions. If all of this still seems freaky to you, shutting off Facebook's access to your contacts is easy: go into your smartphone's settings, find the apps section, tap Facebook and disable the "Contacts" permission.



Experts also recommend that people disable Facebook's use of location data, which can be toggled under the same Settings menu. Another piece of advice for those concerned with privacy is to log out of Facebook when going to medical offices, big events, or even theme parks like Disneyland. But perhaps the most failsafe approach is simply to uninstall the Facebook app from your phone and check your notifications from your desktop computer when you get home.

ccp

Re: Privacy, Big Brother (State and Corporate) and the 4th & 9th Amendments
« Reply #1052 on: May 16, 2017, 03:20:01 PM »
Can anyone imagine the power Zuckerberg will have when, not if, he runs?
He will have a hundred million times more data on voters than the DNC or RNC.



Crafty_Dog

WSJ: SCOTUS to hear case on cellphone location data
« Reply #1054 on: June 05, 2017, 10:35:54 AM »
Supreme Court to Hear Case on Cellphone Location Data
Court to hear appeal by a defendant who was convicted based on evidence obtained from wireless service providers about his cellphone’s whereabouts
The U.S. Supreme Court building in Washington, D.C.
By Brent Kendall
June 5, 2017 10:06 a.m. ET

WASHINGTON—The Supreme Court on Monday agreed to consider whether law-enforcement officials need search warrants to obtain data about the location of cellphone users, a case that raises questions about privacy rights in the digital age.

The court said it would hear an appeal by a defendant who was convicted in part based on evidence prosecutors obtained from wireless service providers about the whereabouts of his cellphone at particular times.

Timothy Carpenter was convicted of armed robberies in Michigan and Ohio, in part based on cell-site location information obtained from MetroPCS and Sprint that placed his phone in the vicinity of several robberies around the time the crimes took place.

The government didn’t obtain a search warrant for the records, which would have required a showing of probable cause to obtain the cell data. Instead, it sought and obtained the data under the Stored Communications Act, which allows law enforcers to seek records when there are reasonable grounds for believing the information is relevant to a criminal investigation.

Mr. Carpenter sought to suppress the evidence, arguing it was obtained in violation of his Fourth Amendment right to be free from unreasonable government searches.

An appeals court ruled for the government, citing a 1979 Supreme Court ruling involving home telephone records that said people don’t enjoy Fourth Amendment protection for information they voluntarily reveal to a third party, such as a phone company.

Several lower courts have grappled with how that ruling ought to apply in today’s world, where people travel around with phones in their pockets and reveal their various locations to wireless providers as their cell signals bounce from one tower to the next.

The court will hear oral arguments during its next term, which begins in October.

G M

Apple notes and privacy?
« Reply #1055 on: June 05, 2017, 02:53:46 PM »
http://www.rollingstone.com/music/live-reviews/ariana-grandes-one-love-manchester-benefit-our-report-w485769

And it felt incredibly safe. As I made my own way to the tram, I wrote in my Apple Notes app, "Helicopter hovering overhead," which to me signified that the fans were being watched over. Then two policemen stopped me and asked me who I was with and whether I'd written anything about a helicopter into my phone, without explaining the technology of how they'd read my Notes app. After a friendly back-and-forth, they looked through my bag, checked my ID and business card and determined I wasn't a threat. "You have to understand, tensions are running high," one of the men said with a smile and a handshake, allowing me through the gate. Manchester was secure tonight.

Crafty_Dog

Re: Privacy, Big Brother (State and Corporate) and the 4th & 9th Amendments
« Reply #1056 on: June 05, 2017, 09:16:20 PM »
 :-o :-o :-o

G M

Re: Privacy, Big Brother (State and Corporate) and the 4th & 9th Amendments
« Reply #1057 on: June 05, 2017, 09:20:53 PM »
:-o :-o :-o

And yet, despite this ability, they did not stop the latest jihad attacks.

G M

Welcome to Our Global Censorship and Surveillance Platform
« Reply #1058 on: July 29, 2017, 09:48:47 AM »
http://globalguerrillas.typepad.com/globalguerrillas/2017/07/welcome-to-our-global-censorship-and-surveillance-platform.html

Monday, 24 July 2017

Welcome to Our Global Censorship and Surveillance Platform
I recently ran into a European counter-terrorism expert who complained that it was getting very difficult to build a fake profile on Facebook.  Every time his team tried to set up a fake profile, it was shut down in less than 24 hours.  Here's why he ran into problems.

Facebook has an initiative to prevent the creation of fake accounts (something Facebook strangely calls recidivism).
This initiative is a small part of a larger effort being undertaken by Facebook, Google and others to become what can best be described as fully functional global censorship and surveillance systems. I know that people have been concerned about this for a while, but it's not speculation anymore, folks. It's here.
The surprising thing to me?  The US and nearly all of the governments of the world (outside of China and Russia) are pushing them to do it. 
A global censorship and surveillance platform

Here are some of the aspects of these efforts:

AIs that can identify violent imagery and extremist symbols in videos and pictures and rapidly delete them -- or better yet, block their upload or shut down a livestream as soon as they show up.  For example:  a live broadcast during a terrorist attack or murder (both happened). 
Routine censorship and surveillance. For example: Facebook has ~7,500 people (largely low-paid subcontractors) reading posts and (private) messages to find and delete content they deem objectionable and to ban the people who post it. However, these folks are just temporary employees. The real goal is to build AIs that can read posts and messages and flag objectionable content, doing what the human team above is already doing, but on a global scale.
A complete social graph. A real-time census of every living person in the world (outside of China and Russia), one that knows all about you, whether or not you are on Facebook/Google/etc. These companies are already close to this goal in Europe and the US, and with two billion daily users (Facebook and Android) it won't be long before they expand it to the rest of the world.
Where is this Headed?

The decline of the US security framework at home and abroad, growing political and economic instability and widespread distrust/illegitimacy will make an expansion of this platform inevitable.  Let's look at this expansion from a couple of angles: 

Already, the social networks are replacing the media as the gatekeepers and the shapers of national and global public opinion. It's clear that the media can't play this role anymore; they are outgunned. To wit: millions watch TV news while billions get their news from Facebook. How will they replace the media? They will use AIs to subtly block, blur or bury fake or objectionable information and conversations while promoting those they approve of. This process will become extremely apparent during the next presidential campaign. It also suggests that we will see candidates from within these companies running for office in many countries, and, given their edge in using these platforms, winning.
AIs built using real-time, detailed social-graph information could detect violent behavior far sooner and more reliably than human analysts. Simply put, it may not matter if the attackers were using Facebook or Google; the ripples from their actions on adjacent social networks might be more than enough to detect them. Pushing this forward even further: as the data flows and the depth of the information increase, how far down the stack of violence could these AIs prove effective? Down to domestic murders, abuse, and rape?
A global ID. Simply put, Facebook is getting close to being able to create a global ID for everyone on the planet (sans China/Russia). It's not a bit of paper or something you put in your wallet. It'll be passive. It'll replace your passport and driver's license. If you can be seen by a camera, you will be known.
Sincerely,

John Robb

Crafty_Dog

Re: Privacy, Big Brother (State and Corporate) and the 4th & 9th Amendments
« Reply #1059 on: August 01, 2017, 12:13:26 PM »
 :-o :-o :-o :-o :-o :-o :-o :-o :-o



ccp

kooks in control; about THE law
« Reply #1062 on: August 02, 2017, 06:28:06 PM »
http://dailycaller.com/2017/06/16/canada-passes-law-criminalizing-use-of-wrong-gender-pronouns/

So if I am in Canada and I refer to a man who does not want to be called a man as "he,"
it is a hate crime? What kind of crap is this?



G M

Escape the Goolag
« Reply #1064 on: August 19, 2017, 08:04:13 AM »


ccp

Re: Privacy, Big Brother (State and Corporate) and the 4th & 9th Amendments
« Reply #1066 on: August 23, 2017, 05:58:37 AM »
"    Google secretly recording you? "

This is perfectly OK, because Google supports the politics of the LEFT, as does FB.    :wink:


G M

Facebook Figured Out My Family Secrets, And It Won't Tell Me How
« Reply #1068 on: August 30, 2017, 10:50:18 AM »
http://gizmodo.com/facebook-figured-out-my-family-secrets-and-it-wont-tel-1797696163/amp

Facebook Figured Out My Family Secrets, And It Won't Tell Me How

Kashmir Hill
Rebecca Porter and I were strangers, as far as I knew. Facebook, however, thought we might be connected. Her name popped up this summer on my list of “People You May Know,” the social network’s roster of potential new online friends for me.

The People You May Know feature is notorious for its uncanny ability to recognize who you associate with in real life. It has mystified and disconcerted Facebook users by showing them an old boss, a one-night-stand, or someone they just ran into on the street.


These friend suggestions go far beyond mundane linking of schoolmates or colleagues. Over the years, I’d been told many weird stories about them, such as when a psychiatrist told me that her patients were being recommended to one another, indirectly outing their medical issues.

What makes the results so unsettling is the range of data sources—location information, activity on other apps, facial recognition on photographs—that Facebook has at its disposal to cross-check its users against one another, in the hopes of keeping them more deeply attached to the site. People generally are aware that Facebook is keeping tabs on who they are and how they use the network, but the depth and persistence of that monitoring is hard to grasp. And People You May Know, or “PYMK” in the company’s internal shorthand, is a black box.

To try to get a look into that black box—and the unknown and apparently aggressive data collection that feeds it—I began downloading and saving the list of people Facebook recommended to me, to see who came up, and what patterns might emerge.

On any given day, it tended to recommend about 160 people, some of them over and over again; over the course of the summer, it suggested more than 1,400 different people to me. About 200, or 15 percent of them, were, in fact, people I knew, but the rest appeared to be strangers.
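
Her method is easy to reproduce: save the suggestion list each day and compare. A minimal sketch of that bookkeeping (data invented; Facebook offers no official API for PYMK, so the lists would be captured by hand):

# Sketch of the tallying described above: save each day's PYMK list,
# then count unique names and the fraction you actually know.

daily_lists = [
    ["Rebecca Porter", "A. Stranger", "Old Boss"],     # day 1
    ["A. Stranger", "College Friend", "B. Stranger"],  # day 2
    # ... one saved list per day over the summer
]
known = {"Old Boss", "College Friend"}

everyone = set().union(*daily_lists)   # unique people suggested overall
recognized = everyone & known

print(len(everyone))                   # e.g. 1,400+ over a summer
print(len(recognized) / len(everyone)) # e.g. ~15 percent recognized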


And then there was Rebecca Porter. She showed up on the list after about a month: an older woman, living in Ohio, with whom I had no Facebook friends in common. I did not recognize her, but her last name was familiar. My biological grandfather is a man I’ve never met, with the last name Porter, who abandoned my father when he was a baby. My father was adopted by a man whose last name was Hill, and he didn’t find out about his biological father until adulthood.

The Porter family lived in Ohio. Growing up half a country away, in Florida, I’d known these blood relatives were out there, but there was no reason to think I would ever meet them.

A few years ago, my father eventually did meet his biological father, along with two uncles and an aunt, when they sought him out during a trip back to Ohio for his mother’s funeral. None of them use Facebook. I asked my dad if he recognized Rebecca Porter. He looked at her profile and said he didn’t think so.

I sent the woman a Facebook message explaining the situation and asking if she was related to my biological grandfather.


“Yes,” she wrote back.

Rebecca Porter, we discovered, is my great aunt, by marriage. She is married to my biological grandfather's brother; she met him 35 years ago, the year after I was born. Facebook knew my family tree better than I did.

“I didn’t know about you,” she told me, when we talked by phone. “I don’t understand how Facebook made the connection.”

It was an enjoyable conversation. After we finished the phone call, I sat still for 15 minutes. I was grateful that Facebook had given me the chance to talk to an unknown relation, but awed and disconcerted by its apparent omniscience.


How Facebook had linked us remained hard to fathom. My father had met her husband in person that one time, after my grandmother’s funeral. They exchanged emails, and my father had his number in his phone. But neither of them uses Facebook. Nor do the other people between me and Rebecca Porter on the family tree.

Facebook is known to buy information from data brokers, and a person who previously worked for the company and who is familiar with how the tool works suggested the familial connection may have been discerned that way. But when asked about that scenario, a Facebook spokesperson said, “Facebook does not use information from data brokers for People You May Know.”

What information had Facebook used, then? The company would not tell me what triggered this recommendation, citing privacy reasons. A Facebook spokesperson said that if the company helped me figure out how it made the connection between me and my great aunt, then every other user who got an unexpected friend suggestion would come around asking for an explanation, too.

It was not a very convincing excuse. Facebook gets people to hand over information about themselves all the time; by what principle would it be unreasonable to sometimes hand some of that information back?


The bigger reason the social network may be shy about revealing how the recommendations work is that many of Facebook’s competitors, such as LinkedIn and Twitter, offer similar features to their users. In a 2010 presentation about PYMK, Facebook’s vice-president of engineering explained its value: “People with more friends use the site more.” There’s a competitive advantage to be gained by being the best at this, meaning Facebook is reluctant to reveal what goes into its algorithm.

The caginess is longstanding. Back in 2009, users getting creepily accurate friend suggestions suspected that Facebook was basing the recommendations on their contact information—which they had volunteered when they first signed up, not realizing Facebook would keep it and use it.

Though Facebook is upfront about its use of contact information now, when asked about it in 2009, the company’s then-chief privacy officer, Chris Kelly, wouldn’t confirm what was going on.

“We are constantly iterating on the algorithm that we use to determine the Suggestions section of the home page,” Kelly told Adweek in 2009. “We do not share details about the algorithm itself.”


Not being told exactly how this tool works is frustrating for users, who want to understand the extent of Facebook’s knowledge about them and how deeply the social network peers into their lives. The spokesperson did say that more than 100 signals go into making the friend recommendations and that no one signal alone would trigger a friend suggestion.

One hundred signals! I told the spokesperson that it might be in the company's interest to be more transparent about how this feature works, so that users are less creeped out by it. She said Facebook had "in the name of transparency" recently added more information to its help page explaining how People You May Know works, an update noted by USA Today.

That help page offers a brief bulleted list:

People You May Know suggestions come from things like:

• Having friends in common, or mutual friends. This is the most common reason for suggestions

• Being in the same Facebook group or being tagged in the same photo

• Your networks (example: your school, university or work)

• Contacts you’ve uploaded
Depending on how you count them, the listed possibilities are roughly 95 signals shy of adding up to 100 signals. What are all the others?
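
As a purely hypothetical illustration of how "more than 100 signals" could combine so that no single one is decisive, consider a weighted score with a threshold. Every weight, name, and number below is invented:

# Invented weights and threshold -- only to illustrate how many weak
# signals might combine, with no single signal sufficient on its own.

SIGNAL_WEIGHTS = {
    "mutual_friends":       0.40,
    "tagged_in_same_photo": 0.20,
    "same_group":           0.15,
    "same_network":         0.10,
    "in_uploaded_contacts": 0.35,
    # ... plus the ~95 undisclosed others
}

THRESHOLD = 0.50  # hypothetical cutoff for showing a suggestion

def should_suggest(active_signals):
    return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in active_signals) >= THRESHOLD

print(should_suggest({"mutual_friends"}))                          # False: one signal is not enough
print(should_suggest({"mutual_friends", "in_uploaded_contacts"}))  # True: 0.75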


“We’ve chosen to list the most common reasons someone might be suggested as part of People You May Know,” a Facebook spokesperson wrote in an email when asked about the brevity of the list.

Rather than explaining how Facebook connected me to my great aunt, a spokesperson told me via email to delete the suggestion if I don’t like it.

“People don’t always like some of their PYMK suggestions, so one action people can take to control People You May Know is to ‘X’ out suggestions that they are uninterested in,” the spokesperson wrote via email. “This is the best way to tell us that they’re not interested in connecting with someone online and that feedback helps improve our suggestions over time.”

Now, when I look at my friend recommendations, I’m unnerved not just by seeing the names of the people I know offline, but by all the seeming strangers on the list. How many of them are truly strangers, I wonder—and how many are connected to me in ways I’m unaware of. They are not people I know, but are they people I should know?


If you’ve had a similar experience with a recommendation, or if you’ve worked on PYMK technology, I could use your help.

This story was produced by Gizmodo Media Group’s Special Projects Desk.

Kashmir Hill | kashmir.hill@gizmodomedia.com | @kashhill

Kashmir Hill is a senior reporter for the Special Projects Desk, which produces investigative work across all of Gizmodo Media Group's web sites. She writes about privacy and technology.
PGP Fingerprint: AE77 9CA9 59C8 0469 76D5 CC2D 0B3C BD37 D934 E5E9

G M

You ARE the Product
« Reply #1069 on: September 02, 2017, 06:23:08 PM »
https://www.lrb.co.uk/v39/n16/john-lanchester/you-are-the-product

You Are the Product
John Lanchester

The Attention Merchants: From the Daily Newspaper to Social Media, How Our Time and Attention Is Harvested and Sold by Tim Wu
Atlantic, 416 pp, £20.00, January, ISBN 978 1 78239 482 2
Chaos Monkeys: Inside the Silicon Valley Money Machine by Antonio García Martínez
Ebury, 528 pp, £8.99, June, ISBN 978 1 78503 455 8
Move Fast and Break Things: How Facebook, Google and Amazon have Cornered Culture and What It Means for All of Us by Jonathan Taplin
Macmillan, 320 pp, £18.99, May, ISBN 978 1 5098 4769 3

At the end of June, Mark Zuckerberg announced that Facebook had hit a new level: two billion monthly active users. That number, the company’s preferred ‘metric’ when measuring its own size, means two billion different people used Facebook in the preceding month. It is hard to grasp just how extraordinary that is. Bear in mind that thefacebook – its original name – was launched exclusively for Harvard students in 2004. No human enterprise, no new technology or utility or service, has ever been adopted so widely so quickly. The speed of uptake far exceeds that of the internet itself, let alone ancient technologies such as television or cinema or radio.

Also amazing: as Facebook has grown, its users’ reliance on it has also grown. The increase in numbers is not, as one might expect, accompanied by a lower level of engagement. More does not mean worse – or worse, at least, from Facebook’s point of view. On the contrary. In the far distant days of October 2012, when Facebook hit one billion users, 55 per cent of them were using it every day. At two billion, 66 per cent are. Its user base is growing at 18 per cent a year – which you’d have thought impossible for a business already so enormous. Facebook’s biggest rival for logged-in users is YouTube, owned by its deadly rival Alphabet (the company formerly known as Google), in second place with 1.5 billion monthly users. Three of the next four biggest apps, or services, or whatever one wants to call them, are WhatsApp, Messenger and Instagram, with 1.2 billion, 1.2 billion, and 700 million users respectively (the Chinese app WeChat is the other one, with 889 million). Those three entities have something in common: they are all owned by Facebook. No wonder the company is the fifth most valuable in the world, with a market capitalisation of $445 billion.

Zuckerberg’s news about Facebook’s size came with an announcement which may or may not prove to be significant. He said that the company was changing its ‘mission statement’, its version of the canting pieties beloved of corporate America. Facebook’s mission used to be ‘making the world more open and connected’. A non-Facebooker reading that is likely to ask: why? Connection is presented as an end in itself, an inherently and automatically good thing. Is it, though? Flaubert was sceptical about trains because he thought (in Julian Barnes’s paraphrase) that ‘the railway would merely permit more people to move about, meet and be stupid.’ You don’t have to be as misanthropic as Flaubert to wonder if something similar isn’t true about connecting people on Facebook. For instance, Facebook is generally agreed to have played a big, perhaps even a crucial, role in the election of Donald Trump. The benefit to humanity is not clear. This thought, or something like it, seems to have occurred to Zuckerberg, because the new mission statement spells out a reason for all this connectedness. It says that the new mission is to ‘give people the power to build community and bring the world closer together’.

Hmm. Alphabet’s mission statement, ‘to organise the world’s information and make it universally accessible and useful’, came accompanied by the maxim ‘Don’t be evil,’ which has been the source of a lot of ridicule: Steve Jobs called it ‘bullshit’.​1 Which it is, but it isn’t only bullshit. Plenty of companies, indeed entire industries, base their business model on being evil. The insurance business, for instance, depends on the fact that insurers charge customers more than their insurance is worth; that’s fair enough, since if they didn’t do that they wouldn’t be viable as businesses. What isn’t fair is the panoply of cynical techniques that many insurers use to avoid, as far as possible, paying out when the insured-against event happens. Just ask anyone who has had a property suffer a major mishap. It’s worth saying ‘Don’t be evil,’ because lots of businesses are. This is especially an issue in the world of the internet. Internet companies are working in a field that is poorly understood (if understood at all) by customers and regulators. The stuff they’re doing, if they’re any good at all, is by definition new. In that overlapping area of novelty and ignorance and unregulation, it’s well worth reminding employees not to be evil, because if the company succeeds and grows, plenty of chances to be evil are going to come along.

Google and Facebook have both been walking this line from the beginning. Their styles of doing so are different. An internet entrepreneur I know has had dealings with both companies. ‘YouTube knows they have lots of dirty things going on and are keen to try and do some good to alleviate it,’ he told me. I asked what he meant by ‘dirty’. ‘Terrorist and extremist content, stolen content, copyright violations. That kind of thing. But Google in my experience knows that there are ambiguities, moral doubts, around some of what they do, and at least they try to think about it. Facebook just doesn’t care. When you’re in a room with them you can tell. They’re’ – he took a moment to find the right word – ‘scuzzy’.

That might sound harsh. There have, however, been ethical problems and ambiguities about Facebook since the moment of its creation, a fact we know because its creator was live-blogging at the time. The scene is as it was recounted in Aaron Sorkin’s movie about the birth of Facebook, The Social Network. While in his first year at Harvard, Zuckerberg suffered a romantic rebuff. Who wouldn’t respond to this by creating a website where undergraduates’ pictures are placed side by side so that users of the site can vote for the one they find more attractive? (The film makes it look as if it was only female undergraduates: in real life it was both.) The site was called Facemash. In the great man’s own words, at the time:

I’m a little intoxicated, I’m not gonna lie. So what if it’s not even 10 p.m. and it’s a Tuesday night? What? The Kirkland dormitory facebook is open on my desktop and some of these people have pretty horrendous facebook pics. I almost want to put some of these faces next to pictures of some farm animals and have people vote on which is the more attractive … Let the hacking begin.

As Tim Wu explains in his energetic and original new book The Attention Merchants, a ‘facebook’ in the sense Zuckerberg uses it here ‘traditionally referred to a physical booklet produced at American universities to promote socialisation in the way that “Hi, My Name Is” stickers do at events; the pages consisted of rows upon rows of head shots with the corresponding name’. Harvard was already working on an electronic version of its various dormitory facebooks. The leading social network, Friendster, already had three million users. The idea of putting these two things together was not entirely novel, but as Zuckerberg said at the time, ‘I think it’s kind of silly that it would take the University a couple of years to get around to it. I can do it better than they can, and I can do it in a week.’

Wu argues that capturing and reselling attention has been the basic model for a large number of modern businesses, from posters in late 19th-century Paris, through the invention of mass-market newspapers that made their money not through circulation but through ad sales, to the modern industries of advertising and ad-funded TV. Facebook is in a long line of such enterprises, though it might be the purest ever example of a company whose business is the capture and sale of attention. Very little new thinking was involved in its creation. As Wu observes, Facebook is ‘a business with an exceedingly low ratio of invention to success’. What Zuckerberg had instead of originality was the ability to get things done and to see the big issues clearly. The crucial thing with internet start-ups is the ability to execute plans and to adapt to changing circumstances. It’s Zuck’s skill at doing that – at hiring talented engineers, and at navigating the big-picture trends in his industry – that has taken his company to where it is today. Those two huge sister companies under Facebook’s giant wing, Instagram and WhatsApp, were bought for $1 billion and $19 billion respectively, at a point when they had no revenue. No banker or analyst or sage could have told Zuckerberg what those acquisitions were worth; nobody knew better than he did. He could see where things were going and help make them go there. That talent turned out to be worth several hundred billion dollars.

Jesse Eisenberg’s brilliant portrait of Zuckerberg in The Social Network is misleading, as Antonio García Martínez, a former Facebook manager, argues in Chaos Monkeys, his entertainingly caustic book about his time at the company. The movie Zuckerberg is a highly credible character, a computer genius located somewhere on the autistic spectrum with minimal to non-existent social skills. But that’s not what the man is really like. In real life, Zuckerberg was studying for a degree with a double concentration in computer science and – this is the part people tend to forget – psychology. People on the spectrum have a limited sense of how other people’s minds work; autists, it has been said, lack a ‘theory of mind’. Zuckerberg, not so much. He is very well aware of how people’s minds work and in particular of the social dynamics of popularity and status. The initial launch of Facebook was limited to people with a Harvard email address; the intention was to make access to the site seem exclusive and aspirational. (And also to control site traffic so that the servers never went down. Psychology and computer science, hand in hand.) Then it was extended to other elite campuses in the US. When it launched in the UK, it was limited to Oxbridge and the LSE. The idea was that people wanted to look at what other people like them were doing, to see their social networks, to compare, to boast and show off, to give full rein to every moment of longing and envy, to keep their noses pressed against the sweet-shop window of others’ lives.

This focus attracted the attention of Facebook’s first external investor, the now notorious Silicon Valley billionaire Peter Thiel. Again, The Social Network gets it right: Thiel’s $500,000 investment in 2004 was crucial to the success of the company. But there was a particular reason Facebook caught Thiel’s eye, rooted in a byway of intellectual history. In the course of his studies at Stanford – he majored in philosophy – Thiel became interested in the ideas of the US-based French philosopher René Girard, as advocated in his most influential book, Things Hidden since the Foundation of the World. Girard’s big idea was something he called ‘mimetic desire’. Human beings are born with a need for food and shelter. Once these fundamental necessities of life have been acquired, we look around us at what other people are doing, and wanting, and we copy them. In Thiel’s summary, the idea is ‘that imitation is at the root of all behaviour’.

Girard was a Christian, and his view of human nature is that it is fallen. We don’t know what we want or who we are; we don’t really have values and beliefs of our own; what we have instead is an instinct to copy and compare. We are homo mimeticus. ‘Man is the creature who does not know what to desire, and who turns to others in order to make up his mind. We desire what others desire because we imitate their desires.’ Look around, ye petty, and compare. The reason Thiel latched onto Facebook with such alacrity was that he saw in it for the first time a business that was Girardian to its core: built on people’s deep need to copy. ‘Facebook first spread by word of mouth, and it’s about word of mouth, so it’s doubly mimetic,’ Thiel said. ‘Social media proved to be more important than it looked, because it’s about our natures.’ We are keen to be seen as we want to be seen, and Facebook is the most popular tool humanity has ever had with which to do that.

*

The view of human nature implied by these ideas is pretty dark. If all people want to do is go and look at other people so that they can compare themselves to them and copy what they want – if that is the final, deepest truth about humanity and its motivations – then Facebook doesn’t really have to take too much trouble over humanity’s welfare, since all the bad things that happen to us are things we are doing to ourselves. For all the corporate uplift of its mission statement, Facebook is a company whose essential premise is misanthropic. It is perhaps for that reason that Facebook, more than any other company of its size, has a thread of malignity running through its story. The high-profile, tabloid version of this has come in the form of incidents such as the live-streaming of rapes, suicides, murders and cop-killings. But this is one of the areas where Facebook seems to me relatively blameless. People live-stream these terrible things over the site because it has the biggest audience; if Snapchat or Periscope were bigger, they’d be doing it there instead.

In many other areas, however, the site is far from blameless. The highest-profile recent criticisms of the company stem from its role in Trump’s election. There are two components to this, one of them implicit in the nature of the site, which has an inherent tendency to fragment and atomise its users into like-minded groups. The mission to ‘connect’ turns out to mean, in practice, connect with people who agree with you. We can’t prove just how dangerous these ‘filter bubbles’ are to our societies, but it seems clear that they are having a severe impact on our increasingly fragmented polity. Our conception of ‘we’ is becoming narrower.

This fragmentation created the conditions for the second strand of Facebook’s culpability in the Anglo-American political disasters of the last year. The portmanteau terms for these developments are ‘fake news’ and ‘post-truth’, and they were made possible by the retreat from a general agora of public debate into separate ideological bunkers. In the open air, fake news can be debated and exposed; on Facebook, if you aren’t a member of the community being served the lies, you’re quite likely never to know that they are in circulation. It’s crucial to this that Facebook has no financial interest in telling the truth. No company better exemplifies the internet-age dictum that if the product is free, you are the product. Facebook’s customers aren’t the people who are on the site: its customers are the advertisers who use its network and who relish its ability to direct ads to receptive audiences. Why would Facebook care if the news streaming over the site is fake? Its interest is in the targeting, not in the content. This is probably one reason for the change in the company’s mission statement. If your only interest is in connecting people, why would you care about falsehoods? They might even be better than the truth, since they are quicker to identify the like-minded. The newfound ambition to ‘build communities’ makes it seem as if the company is taking more of an interest in the consequence of the connections it fosters.

Fake news is not, as Facebook has acknowledged, the only way it was used to influence the outcome of the 2016 presidential election. On 6 January 2017 the director of national intelligence published a report saying that the Russians had waged an internet disinformation campaign to damage Hillary Clinton and help Trump. ‘Moscow’s influence campaign followed a Russian messaging strategy that blends covert intelligence operations – such as cyber-activity – with overt efforts by Russian government agencies, state-funded media, third-party intermediaries, and paid social media users or “trolls”,’ the report said. At the end of April, Facebook got around to admitting this (by then) fairly obvious truth, in an interesting paper published by its internal security division. ‘Fake news’, they argue, is an unhelpful, catch-all term because misinformation is in fact spread in a variety of ways:

Information (or Influence) Operations – Actions taken by governments or organised non-state actors to distort domestic or foreign political sentiment.

False News – News articles that purport to be factual, but which contain intentional misstatements of fact with the intention to arouse passions, attract viewership, or deceive.

False Amplifiers – Co-ordinated activity by inauthentic accounts with the intent of manipulating political discussion (e.g. by discouraging specific parties from participating in discussion, or amplifying sensationalistic voices over others).

Disinformation – Inaccurate or manipulated information/content that is spread intentionally. This can include false news, or it can involve more subtle methods, such as false flag operations, feeding inaccurate quotes or stories to innocent intermediaries, or knowingly amplifying biased or misleading information.

The company is promising to treat this problem or set of problems as seriously as it treats such other problems as malware, account hacking and spam. We’ll see. One man’s fake news is another’s truth-telling, and Facebook works hard at avoiding responsibility for the content on its site – except for sexual content, about which it is super-stringent. Nary a nipple on show. It’s a bizarre set of priorities, which only makes sense in an American context, where any whiff of explicit sexuality would immediately give the site a reputation for unwholesomeness. Photos of breastfeeding women are banned and rapidly get taken down. Lies and propaganda are fine.

The key to understanding this is to think about what advertisers want: they don’t want to appear next to pictures of breasts because it might damage their brands, but they don’t mind appearing alongside lies because the lies might be helping them find the consumers they’re trying to target. In Move Fast and Break Things, his polemic against the ‘digital-age robber barons’, Jonathan Taplin points to an analysis on Buzzfeed: ‘In the final three months of the US presidential campaign, the top-performing fake election news stories on Facebook generated more engagement than the top stories from major news outlets such as the New York Times, Washington Post, Huffington Post, NBC News and others.’ This doesn’t sound like a problem Facebook will be in any hurry to fix.

The fact is that fraudulent content, and stolen content, are rife on Facebook, and the company doesn’t really mind, because it isn’t in its interest to mind. Much of the video content on the site is stolen from the people who created it. An illuminating YouTube video from Kurzgesagt, a German outfit that makes high-quality short explanatory films, notes that in 2015, 725 of Facebook’s top one thousand most viewed videos were stolen. This is another area where Facebook’s interests contradict society’s. We may collectively have an interest in sustaining creative and imaginative work in many different forms and on many platforms. Facebook doesn’t. It has two priorities, as Martínez explains in Chaos Monkeys: growth and monetisation. It simply doesn’t care where the content comes from. It is only now starting to care about the perception that much of the content is fraudulent, because if that perception were to become general, it might affect the amount of trust and therefore the amount of time people give to the site.

Zuckerberg himself has spoken up on this issue, in a Facebook post addressing the question of ‘Facebook and the election’. After a certain amount of boilerplate bullshit (‘Our goal is to give every person a voice. We believe deeply in people’), he gets to the nub of it. ‘Of all the content on Facebook, more than 99 per cent of what people see is authentic. Only a very small amount is fake news and hoaxes.’ More than one Facebook user pointed out that in their own news feed, Zuckerberg’s post about authenticity ran next to fake news. In one case, the fake story pretended to be from the TV sports channel ESPN. When it was clicked on, it took users to an ad selling a diet supplement. As the writer Doc Searls pointed out, it’s a double fraud, ‘outright lies from a forged source’, which is quite something to have right slap next to the head of Facebook boasting about the absence of fraud. Evan Williams, co-founder of Twitter and founder of the long-read specialist Medium, found the same post by Zuckerberg next to a different fake ESPN story and another piece of fake news purporting to be from CNN, announcing that Congress had disqualified Trump from office. When clicked-through, that turned out to be from a company offering a 12-week programme to strengthen toes. (That’s right: strengthen toes.) Still, we now know that Zuck believes in people. That’s the main thing.

*

A neutral observer might wonder if Facebook’s attitude to content creators is sustainable. Facebook needs content, obviously, because that’s what the site consists of: content that other people have created. It’s just that it isn’t too keen on anyone apart from Facebook making any money from that content. Over time, that attitude is profoundly destructive to the creative and media industries. Access to an audience – that unprecedented two billion people – is a wonderful thing, but Facebook isn’t in any hurry to help you make money from it. If the content providers all eventually go broke, well, that might not be too much of a problem. There are, for now, lots of willing providers: anyone on Facebook is in a sense working for Facebook, adding value to the company. In 2014, the New York Times did the arithmetic and found that humanity was spending 39,757 collective years on the site, every single day. Jonathan Taplin points out that this is ‘almost fifteen million years of free labour per year’. That was back when it had a mere 1.23 billion users.

Taplin has worked in academia and in the film industry. The reason he feels so strongly about these questions is that he started out in the music business, as manager of The Band, and was on hand to watch the business being destroyed by the internet. What had been a $20 billion industry in 1999 was a $7 billion industry 15 years later. He saw musicians who had made a good living become destitute. That didn’t happen because people had stopped listening to their music – more people than ever were listening to it – but because music had become something people expected to be free. YouTube is the biggest source of music in the world, playing billions of tracks annually, but in 2015 musicians earned less from it and from its ad-supported rivals than they earned from sales of vinyl. Not CDs and recordings in general: vinyl.

Something similar has happened in the world of journalism. Facebook is in essence an advertising company which is indifferent to the content on its site except insofar as it helps to target and sell advertisements. A version of Gresham’s law is at work, in which fake news, which gets more clicks and is free to produce, drives out real news, which often tells people things they don’t want to hear, and is expensive to produce. In addition, Facebook uses an extensive set of tricks to increase its traffic and the revenue it makes from targeting ads, at the expense of the news-making institutions whose content it hosts. Its news feed directs traffic at you based not on your interests, but on how to make the maximum amount of advertising revenue from you. In September 2016, Alan Rusbridger, the former editor of the Guardian, told a Financial Times conference that Facebook had ‘sucked up $27 million’ of the newspaper’s projected ad revenue that year. ‘They are taking all the money because they have algorithms we don’t understand, which are a filter between what we do and how people receive it.’

This goes to the heart of the question of what Facebook is and what it does. For all the talk about connecting people, building community, and believing in people, Facebook is an advertising company. Martínez gives the clearest account both of how it ended up like that, and how Facebook advertising works. In the early years of Facebook, Zuckerberg was much more interested in the growth side of the company than in the monetisation. That changed when Facebook went in search of its big payday at the initial public offering, the shining day when shares in a business first go on sale to the general public. This is a huge turning-point for any start-up: in the case of many tech industry workers, the hope and expectation associated with ‘going public’ is what attracted them to their firm in the first place, and/or what has kept them glued to their workstations. It’s the point where the notional money of an early-days business turns into the real cash of a public company.

Martínez was there at the very moment when Zuck got everyone together to tell them they were going public, the moment when all Facebook employees knew that they were about to become rich:

I had chosen a seat behind a detached pair, who on further inspection turned out to be Chris Cox, head of FB product, and Naomi Gleit, a Harvard grad who joined as employee number 29, and was now reputed to be the current longest-serving employee other than Mark.

Naomi, between chats with Cox, was clicking away on her laptop, paying little attention to the Zuckian harangue. I peered over her shoulder at her screen. She was scrolling down an email with a number of links, and progressively clicking each one into existence as another tab on her browser. Clickathon finished, she began lingering on each with an appraiser’s eye. They were real estate listings, each for a different San Francisco property.

Martínez took note of one of the properties and looked it up later. Price: $2.4 million. He is fascinating, and fascinatingly bitter, on the subject of class and status differences in Silicon Valley, in particular the never publicly discussed issue of the huge gulf between early employees in a company, who have often been made unfathomably rich, and the wage slaves who join the firm later in its story. ‘The protocol is not to talk about it at all publicly.’ But, as Bonnie Brown, a masseuse at Google in the early days, wrote in her memoir, ‘a sharp contrast developed between Googlers working side by side. While one was looking at local movie times on their monitor, the other was booking a flight to Belize for the weekend. How was the conversation on Monday morning going to sound now?’

When the time came for the IPO, Facebook needed to turn from a company with amazing growth to one that was making amazing money. It was already making some, thanks to its sheer size – as Martínez observes, ‘a billion times any number is still a big fucking number’ – but not enough to guarantee a truly spectacular valuation on launch. It was at this stage that the question of how to monetise Facebook got Zuckerberg’s full attention. It’s interesting, and to his credit, that he hadn’t put too much focus on it before – perhaps because he isn’t particularly interested in money per se. But he does like to win.

The solution was to take the huge amount of information Facebook has about its ‘community’ and use it to let advertisers target ads with a specificity never known before, in any medium. Martínez: ‘It can be demographic in nature (e.g. 30-to-40-year-old females), geographic (people within five miles of Sarasota, Florida), or even based on Facebook profile data (do you have children; i.e. are you in the mommy segment?).’ Taplin makes the same point:

If I want to reach women between the ages of 25 and 30 in zip code 37206 who like country music and drink bourbon, Facebook can do that. Moreover, Facebook can often get friends of these women to post a ‘sponsored story’ on a targeted consumer’s news feed, so it doesn’t feel like an ad. As Zuckerberg said when he introduced Facebook Ads, ‘Nothing influences people more than a recommendation from a trusted friend. A trusted referral is the Holy Grail of advertising.’
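
Mechanically, that kind of segment is just a conjunction of filters over profile and behavioural attributes. A sketch in that spirit (fields and records invented; this is not Facebook's actual API):

# Invented data and fields -- a toy version of Taplin's example segment:
# women aged 25-30 in zip code 37206 who like country music and bourbon.

users = [
    {"name": "A", "age": 27, "sex": "F", "zip": "37206",
     "likes": {"country music", "bourbon"}},
    {"name": "B", "age": 41, "sex": "F", "zip": "37206",
     "likes": {"country music"}},
]

def audience(users, age_range, sex, zip_code, required_likes):
    lo, hi = age_range
    return [u["name"] for u in users
            if lo <= u["age"] <= hi
            and u["sex"] == sex
            and u["zip"] == zip_code
            and required_likes <= u["likes"]]  # subset test: all likes present

print(audience(users, (25, 30), "F", "37206", {"country music", "bourbon"}))
# ['A']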

That was the first part of the monetisation process for Facebook, when it turned its gigantic scale into a machine for making money. The company offered advertisers an unprecedentedly precise tool for targeting their ads at particular consumers. (Particular segments of voters too can be targeted with complete precision. One instance from 2016 was an anti-Clinton ad repeating a notorious speech she made in 1996 on the subject of ‘super-predators’. The ad was sent to African-American voters in areas where the Republicans were trying, successfully as it turned out, to suppress the Democrat vote. Nobody else saw the ads.)

The second big shift around monetisation came in 2012 when internet traffic began to switch away from desktop computers towards mobile devices. If you do most of your online reading on a desktop, you are in a minority. The switch was a potential disaster for all businesses which relied on internet advertising, because people don’t much like mobile ads, and were far less likely to click on them than on desktop ads. In other words, although general internet traffic was increasing rapidly, because the growth was coming from mobile, the traffic was becoming proportionately less valuable. If the trend were to continue, every internet business that depended on people clicking links – i.e. pretty much all of them, but especially the giants like Google and Facebook – would be worth much less money.

Facebook solved the problem by means of a technique called ‘onboarding’. As Martínez explains it, the best way to think about this is to consider our various kinds of name and address.

For example, if Bed, Bath and Beyond wants to get my attention with one of its wonderful 20 per cent off coupons, it calls out:

Antonio García Martínez
1 Clarence Place #13
San Francisco, CA 94107

If it wants to reach me on my mobile device, my name there is:

38400000-8cf0-11bd-b23e-10b96e40000d

That’s my quasi-immutable device ID, broadcast hundreds of times a day on mobile ad exchanges.

On my laptop, my name is this:

07J6yJPMB9juTowar.AWXGQnGPA1MCmThgb9wN4vLoUpg.BUUtWg.rg.FTN.0.AWUxZtUf

This is the content of the Facebook re-targeting cookie, which is used to target ads at you based on your web browsing.

Though it may not be obvious, each of these keys is associated with a wealth of our personal behaviour data: every website we’ve been to, many things we’ve bought in physical stores, and every app we’ve used and what we did there … The biggest thing going on in marketing right now, what is generating tens of billions of dollars in investment and endless scheming inside the bowels of Facebook, Google, Amazon and Apple, is how to tie these different sets of names together, and who controls the links. That’s it.
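
In database terms, tying those sets of names together is a join: each dataset carries one of the keys, and a linkage table maps them to a single person. A hedged sketch, with every key and record invented:

# Illustrative only: joining the three "names" above -- postal identity,
# mobile device ID, browser cookie -- into one profile. The linkage table
# is the valuable part; whoever controls these links controls the join.

postal = {("a. garcia martinez", "94107"): {"store_purchases": ["coupon redeemed"]}}
device = {"38400000-8cf0-11bd-b23e-10b96e40000d": {"apps_used": ["maps", "chess"]}}
cookie = {"07J6yJPMB9ju... (truncated)": {"sites_visited": ["shoes.example", "news.example"]}}

links = {  # one person, three keys (all values invented)
    "user_123": {
        "postal": ("a. garcia martinez", "94107"),
        "device": "38400000-8cf0-11bd-b23e-10b96e40000d",
        "cookie": "07J6yJPMB9ju... (truncated)",
    },
}

def unified_profile(user_id):
    k = links[user_id]
    merged = {}
    merged.update(postal[k["postal"]])  # offline purchases
    merged.update(device[k["device"]])  # mobile behaviour
    merged.update(cookie[k["cookie"]])  # web behaviour
    return merged

print(unified_profile("user_123"))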

Facebook already had a huge amount of information about people and their social networks and their professed likes and dislikes.​2 After waking up to the importance of monetisation, they added to their own data a huge new store of data about offline, real-world behaviour, acquired through partnerships with big companies such as Experian, which have been monitoring consumer purchases for decades via their relationships with direct marketing firms, credit card companies, and retailers. There doesn’t seem to be a one-word description of these firms: ‘consumer credit agencies’ or something similar about sums it up. Their reach is much broader than that makes it sound, though.​3 Experian says its data is based on more than 850 million records and claims to have information on 49.7 million UK adults living in 25.2 million households in 1.73 million postcodes. These firms know all there is to know about your name and address, your income and level of education, your relationship status, plus everywhere you’ve ever paid for anything with a card. Facebook could now put your identity together with the unique device identifier on your phone.

That was crucial to Facebook’s new profitability. On mobiles, people tend to prefer apps to the open internet, and apps corral the information they gather and don’t share it with other companies. A game app on your phone is unlikely to know anything about you except the level you’ve got to on that particular game. But because everyone in the world is on Facebook, the company knows everyone’s phone identifier. It was now able to set up an ad server delivering far better targeted mobile ads than anyone else could manage, and it did so in a more elegant and well-integrated form than anyone else had managed.

So Facebook knows your phone ID and can add it to your Facebook ID. It puts that together with the rest of your online activity: not just every site you’ve ever visited, but every click you’ve ever made – the Facebook button tracks every Facebook user, whether they click on it or not. Since the Facebook button is pretty much ubiquitous on the net, this means that Facebook sees you, everywhere. Now, thanks to its partnerships with the old-school credit firms, Facebook knew who everybody was, where they lived, and everything they’d ever bought with plastic in a real-world offline shop.​4 All this information is used for a purpose which is, in the final analysis, profoundly bathetic. It is to sell you things via online ads.

The ads work on two models. In one of them, advertisers ask Facebook to target consumers from a particular demographic – our thirty-something bourbon-drinking country music fan, or our African American in Philadelphia who was lukewarm about Hillary. But Facebook also delivers ads via a process of online auctions, which happen in real time whenever you click on a website. Because every website you’ve ever visited (more or less) has planted a cookie on your web browser, when you go to a new site, there is a real-time auction, in millionths of a second, to decide what your eyeballs are worth and what ads should be served to them, based on what your interests, and income level and whatnot, are known to be. This is the reason ads have that disconcerting tendency to follow you around, so that you look at a new telly or a pair of shoes or a holiday destination, and they’re still turning up on every site you visit weeks later. This was how, by chucking talent and resources at the problem, Facebook was able to turn mobile from a potential revenue disaster to a great hot steamy geyser of profit.
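
The auction mechanics can be sketched in a few lines. The second-price rule below (the winner pays the runner-up’s bid) is how many ad exchanges have historically cleared bids; the bidders, values and profile fields are all invented.

    # Each bidder values the impression according to what it knows about you.
    def run_auction(user_profile, bidders):
        bids = sorted(((b["bid"](user_profile), b["name"]) for b in bidders),
                      reverse=True)
        (top, winner), (second_price, _) = bids[0], bids[1]
        return winner, second_price  # winner pays the second-highest bid

    user = {"recently_viewed": "television", "income_band": "high"}
    bidders = [
        {"name": "tv-retailer",
         "bid": lambda u: 2.50 if u["recently_viewed"] == "television" else 0.10},
        {"name": "shoe-shop", "bid": lambda u: 0.40},
        {"name": "travel-site",
         "bid": lambda u: 1.20 if u["income_band"] == "high" else 0.30},
    ]

    winner, price = run_auction(user, bidders)
    print(winner, price)  # tv-retailer, 1.2: its ad now 'follows you around'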

What this means is that even more than it is in the advertising business, Facebook is in the surveillance business. Facebook, in fact, is the biggest surveillance-based enterprise in the history of mankind. It knows far, far more about you than the most intrusive government has ever known about its citizens. It’s amazing that people haven’t really understood this about the company. I’ve spent time thinking about Facebook, and the thing I keep coming back to is that its users don’t realise what it is the company does. What Facebook does is watch you, and then use what it knows about you and your behaviour to sell ads. I’m not sure there has ever been a more complete disconnect between what a company says it does – ‘connect’, ‘build communities’ – and the commercial reality. Note that the company’s knowledge about its users isn’t used merely to target ads but to shape the flow of news to them. Since there is so much content posted on the site, the algorithms used to filter and direct that content are the thing that determines what you see: people think their news feed is largely to do with their friends and interests, and it sort of is, with the crucial proviso that it is their friends and interests as mediated by the commercial interests of Facebook. Your eyes are directed towards the place where they are most valuable for Facebook.

*

I’m left wondering what will happen when and if this $450 billion penny drops. Wu’s history of attention merchants shows that there is a suggestive pattern here: that a boom is more often than not followed by a backlash, that a period of explosive growth triggers a public and sometimes legislative reaction. Wu’s first example is the draconian anti-poster laws introduced in early 20th-century Paris (and still in force – one reason the city is by contemporary standards undisfigured by ads). As Wu says, ‘when the commodity in question is access to people’s minds, the perpetual quest for growth ensures that forms of backlash, both major and minor, are all but inevitable.’ Wu calls a minor form of this phenomenon the ‘disenchantment effect’.

Facebook seems vulnerable to these disenchantment effects. One place they are likely to begin is in the core area of its business model – ad-selling. The advertising it sells is ‘programmatic’, i.e. determined by computer algorithms that match the customer to the advertiser and deliver ads accordingly, via targeting and/or online auctions. The problem with this from the customer’s point of view – remember, the customer here is the advertiser, not the Facebook user – is that a lot of the clicks on these ads are fake. There is a mismatch of interests here. Facebook wants clicks, because that’s how it gets paid: when ads are clicked on. But what if the clicks aren’t real but are instead automated clicks from fake accounts run by computer bots? This is a well-known problem, which particularly affects Google, because it’s easy to set up a site, allow it to host programmatic ads, then set up a bot to click on those ads, and collect the money that comes rolling in. On Facebook the fraudulent clicks are more likely to be from competitors trying to drive each other’s costs up.

The industry publication Ad Week estimates the annual cost of click fraud at $7 billion, about a sixth of the entire market. One single fraud site, Methbot, whose existence was exposed at the end of last year, uses a network of hacked computers to generate between three and five million dollars’ worth of fraudulent clicks every day. Estimates of fraudulent traffic’s market share are variable, with some guesses coming in at around 50 per cent; some website owners say their own data indicates a fraudulent-click rate of 90 per cent. This is by no means entirely Facebook’s problem, but it isn’t hard to imagine how it could lead to a big revolt against ‘ad tech’, as this technology is generally known, on the part of the companies who are paying for it. I’ve heard academics in the field say that there is a form of corporate groupthink in the world of the big buyers of advertising, who are currently responsible for directing large parts of their budgets towards Facebook. That mindset could change. Also, many of Facebook’s metrics are tilted to catch the light at the angle which makes them look shiniest. A video is counted as ‘viewed’ on Facebook if it runs for three seconds, even if the user is scrolling past it in her news feed and even if the sound is off. Many Facebook videos with hundreds of thousands of ‘views’, if counted by the techniques that are used to count television audiences, would have no viewers at all.
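
The mismatch of interests is easy to put numbers on: the platform bills for every click, real or not, so fraud is revenue to the seller and dead cost to the buyer. The cost-per-click figure below is invented; the fraud rates are the ones quoted above.

    def cost_per_real_click(cpc, clicks, fraud_rate):
        # The platform bills for every click; only some reach a human.
        real_clicks = clicks * (1 - fraud_rate)
        return (clicks * cpc) / real_clicks

    cpc = 0.50  # headline cost per click in dollars (invented)
    for fraud_rate in (0.0, 1 / 6, 0.5, 0.9):
        print(f"fraud {fraud_rate:.0%}: "
              f"${cost_per_real_click(cpc, 100_000, fraud_rate):.2f} per real click")
    # 0% -> $0.50, 17% -> $0.60, 50% -> $1.00, 90% -> $5.00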

A customers’ revolt could overlap with a backlash from regulators and governments. Google and Facebook have what amounts to a monopoly on digital advertising. That monopoly power is becoming more and more important as advertising spend migrates online. Between them, they have already destroyed large sections of the newspaper industry. Facebook has done a huge amount to lower the quality of public debate and to ensure that it is easier than ever before to tell what Hitler approvingly called ‘big lies’ and broadcast them to a big audience. The company has no business need to care about that, but it is the kind of issue that could attract the attention of regulators.

That isn’t the only external threat to the Google/Facebook duopoly. The US attitude to anti-trust law was shaped by Robert Bork, the judge whom Reagan nominated for the Supreme Court but the Senate failed to confirm. Bork’s most influential legal stance came in the area of competition law. He promulgated the doctrine that the only form of anti-competitive action which matters concerns the prices paid by consumers. His idea was that if the price is falling that means the market is working, and no questions of monopoly need be addressed. This philosophy still shapes regulatory attitudes in the US and it’s the reason Amazon, for instance, has been left alone by regulators despite the manifestly monopolistic position it holds in the world of online retail, books especially.

The big internet enterprises seem invulnerable on these narrow grounds. Or they do until you consider the question of individualised pricing. The huge data trail we all leave behind as we move around the internet is increasingly used to target us with prices which aren’t like the tags attached to goods in a shop. On the contrary, they are dynamic, moving with our perceived ability to pay.​5 Four researchers based in Spain studied the phenomenon by creating automated personas to behave as if, in one case, ‘budget conscious’ and in another ‘affluent’, and then checking to see if their different behaviour led to different prices. It did: a search for headphones returned a set of results which were on average four times more expensive for the affluent persona. An airline-ticket discount site charged higher fares to the affluent consumer. In general, the location of the searcher caused prices to vary by as much as 166 per cent. So in short, yes, personalised prices are a thing, and the ability to create them depends on tracking us across the internet. That seems to me a prima facie violation of the American post-Bork monopoly laws, focused as they are entirely on price. It’s sort of funny, and also sort of grotesque, that an unprecedentedly huge apparatus of consumer surveillance is fine, apparently, but an unprecedentedly huge apparatus of consumer surveillance which results in some people paying higher prices may well be illegal.
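
The Spanish researchers’ method reduces to a simple experiment: train two automated personas with different browsing histories, request the same product through each, and compare the quoted prices. The sketch below is a plausible shape for such a harness, not the researchers’ actual code; the URLs and the JSON price field are hypothetical.

    import requests

    PRODUCT_URL = "https://shop.example/api/headphones"  # hypothetical endpoint

    def quoted_price(session):
        # A real harness would render the page and scrape the displayed
        # price; here we imagine a JSON API for brevity.
        return session.get(PRODUCT_URL).json()["price"]

    affluent = requests.Session()
    for site in ("https://luxury-watches.example", "https://first-class.example"):
        affluent.get(site)  # seed this persona's cookie jar with 'affluent' trackers

    budget = requests.Session()
    for site in ("https://coupon-clipper.example", "https://discount.example"):
        budget.get(site)    # seed the contrasting 'budget conscious' persona

    ratio = quoted_price(affluent) / quoted_price(budget)
    print(f"affluent persona pays {ratio:.1f}x the budget persona's price")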

Perhaps the biggest potential threat to Facebook is that its users might go off it. Two billion monthly active users is a lot of people, and the ‘network effects’ – the scale of the connectivity – are, obviously, extraordinary. But there are other internet companies which connect people on the same scale – Snapchat has 166 million daily users, Twitter 328 million monthly users – and as we’ve seen in the disappearance of Myspace, the onetime leader in social media, when people change their minds about a service, they can go off it hard and fast.

For that reason, were it to be generally understood that Facebook’s business model is based on surveillance, the company would be in danger. The one time Facebook did poll its users about the surveillance model was in 2011, when it proposed a change to its terms and conditions – the change that underpins the current template for its use of data. The result of the poll was clear: 90 per cent of the vote was against the changes. Facebook went ahead and made them anyway, on the grounds that so few people had voted. No surprise there, neither in the users’ distaste for surveillance nor in the company’s indifference to that distaste. But this is something which could change.

The other thing that could happen at the level of individual users is that people stop using Facebook because it makes them unhappy. This isn’t the same issue as the scandal in 2014 when it turned out that social scientists at the company had deliberately manipulated some people’s news feeds to see what effect, if any, it had on their emotions. The resulting paper, published in the Proceedings of the National Academy of Sciences, was a study of ‘social contagion’, or the transfer of emotion among groups of people, as a result of a change in the nature of the stories seen by 689,003 users of Facebook. ‘When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.’ The scientists seem not to have considered how this information would be received, and the story played quite big for a while.

Perhaps the fact that people already knew this story accidentally deflected attention from what should have been a bigger scandal, exposed earlier this year in a paper from the American Journal of Epidemiology. The paper was titled ‘Association of Facebook Use with Compromised Well-Being: A Longitudinal Study’. The researchers found quite simply that the more people use Facebook, the more unhappy they are. A 1 per cent increase in ‘likes’ and clicks and status updates was correlated with a 5 to 8 per cent decrease in mental health. In addition, they found that the positive effect of real-world interactions, which enhance well-being, was accurately paralleled by the ‘negative associations of Facebook use’. In effect people were swapping real relationships which made them feel good for time on Facebook which made them feel bad. That’s my gloss rather than that of the scientists, who take the trouble to make it clear that this is a correlation rather than a definite causal relationship, but they did go so far – unusually far – as to say that the data ‘suggests a possible trade-off between offline and online relationships’. This isn’t the first time something like this effect has been found. To sum up: there is a lot of research showing that Facebook makes people feel like shit. So maybe, one day, people will stop using it.​6

*

What, though, if none of the above happens? What if advertisers don’t rebel, governments don’t act, users don’t quit, and the good ship Zuckerberg and all who sail in her continues blithely on? We should look again at that figure of two billion monthly active users. The total number of people who have any access to the internet – as broadly defined as possible, to include the slowest dial-up speeds and creakiest developing-world mobile service, as well as people who have access but don’t use it – is three and a half billion. Of those, about 750 million are in China and Iran, which block Facebook. Russians, about a hundred million of whom are on the net, tend not to use Facebook because they prefer their native copycat site VKontakte. So put the potential audience for the site at 2.6 billion. In developed countries where Facebook has been present for years, use of the site peaks at about 75 per cent of the population (that’s in the US). That would imply a total potential audience for Facebook of 1.95 billion. At two billion monthly active users, Facebook has already gone past that number, and is running out of connected humans. Martínez compares Zuckerberg to Alexander the Great, weeping because he has no more worlds to conquer. Perhaps this is one reason for the early signals Zuck has sent about running for president – the fifty-state pretending-to-give-a-shit tour, the thoughtful-listening pose he’s photographed in while sharing milkshakes in (Presidential Ambitions klaxon!) an Iowa diner.
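
The arithmetic behind that ceiling, spelled out (all figures are the estimates given above):

    internet_users = 3.5e9   # everyone with any access at all
    china_iran    = 0.75e9   # Facebook is blocked there
    russia_online = 0.1e9    # mostly on VKontakte instead

    potential = internet_users - china_iran - russia_online
    print(potential)           # 2.65e9, rounded above to 2.6 billion

    peak_share = 0.75          # highest national usage observed (the US)
    print(peak_share * 2.6e9)  # 1.95e9: the implied ceiling on users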

Whatever comes next will take us back to those two pillars of the company, growth and monetisation. Growth can only come from connecting new areas of the planet. An early experiment came in the form of Free Basics, a program offering internet connectivity to remote villages in India, with the proviso that the range of sites on offer should be controlled by Facebook. ‘Who could possibly be against this?’ Zuckerberg wrote in the Times of India. The answer: lots and lots of angry Indians. The government ruled that Facebook shouldn’t be able to ‘shape users’ internet experience’ by restricting access to the broader internet. A Facebook board member tweeted that ‘anti-colonialism has been economically catastrophic for the Indian people for decades. Why stop now?’ As Taplin points out, that remark ‘unwittingly revealed a previously unspoken truth: Facebook and Google are the new colonial powers.’

So the growth side of the equation is not without its challenges, technological as well as political. Google (which has a similar running-out-of-humans problem) is working on ‘Project Loon’, ‘a network of balloons travelling on the edge of space, designed to extend internet connectivity to people in rural and remote areas worldwide’. Facebook is working on a project involving a solar-powered drone called the Aquila, which has the wingspan of a commercial airliner, weighs less than a car, and when cruising uses less energy than a microwave oven. The idea is that it will circle remote, currently unconnected areas of the planet, for flights that last as long as three months at a time. It connects users via laser and was developed in Bridgwater, Somerset. (Amazon’s drone programme is based in the UK too, near Cambridge. Our legal regime is pro-drone.) Even the most hardened Facebook sceptic has to be a little bit impressed by the ambition and energy. But the fact remains that the next two billion users are going to be hard to find.

That’s growth, which will mainly happen in the developing world. Here in the rich world, the focus is more on monetisation, and it’s in this area that I have to admit something which is probably already apparent. I am scared of Facebook. The company’s ambition, its ruthlessness, and its lack of a moral compass scare me. It goes back to that moment of its creation, Zuckerberg at his keyboard after a few drinks creating a website to compare people’s appearance, not for any real reason other than that he was able to do it. That’s the crucial thing about Facebook, the main thing which isn’t understood about its motivation: it does things because it can. Zuckerberg knows how to do something, and other people don’t, so he does it. Motivation of that type doesn’t work in the Hollywood version of life, so Aaron Sorkin had to give Zuck a motive to do with social aspiration and rejection. But that’s wrong, completely wrong. He isn’t motivated by that kind of garden-variety psychology. He does this because he can, and justifications about ‘connection’ and ‘community’ are ex post facto rationalisations. The drive is simpler and more basic. That’s why the impulse to growth has been so fundamental to the company, which is in many respects more like a virus than it is like a business. Grow and multiply and monetise. Why? There is no why. Because.

Automation and artificial intelligence are going to have a big impact in all kinds of worlds. These technologies are new and real and they are coming soon. Facebook is deeply interested in these trends. We don’t know where this is going, we don’t know what the social costs and consequences will be, we don’t know what will be the next area of life to be hollowed out, the next business model to be destroyed, the next company to go the way of Polaroid or the next business to go the way of journalism or the next set of tools and techniques to become available to the people who used Facebook to manipulate the elections of 2016. We just don’t know what’s next, but we know it’s likely to be consequential, and that a big part will be played by the world’s biggest social network. On the evidence of Facebook’s actions so far, it’s impossible to face this prospect without unease.


G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
What Yahoo’s NSA Surveillance Means for Email Privacy
« Reply #1070 on: September 13, 2017, 08:07:52 AM »
https://protonmail.com/blog/yahoo-us-intelligence/

What Yahoo’s NSA Surveillance Means for Email Privacy
Posted on October 6, 2016 by Andy Yen

Updated October 7, 2016 with additional clarification and analysis of Yahoo’s denial
Dear ProtonMail Community,
Two weeks ago, we published a security advisory regarding the mass hacking of Yahoo. Unfortunately, due to recent events, we are issuing a second advisory regarding all US email providers.
What happened?
This week, it was revealed that as a result of a secret US government directive, Yahoo was forced to implement special surveillance software to scan all Yahoo Mail accounts at the request of the NSA and FBI. Sometime in early 2015, Yahoo secretly modified their spam and malware filters to scan all incoming email messages for the phrases in the court order and then siphoned those messages off to US intelligence. This is significant for several reasons:
 
This is the first known incident where a US intelligence directive has indiscriminately targeted all accounts as opposed to just the accounts of suspects. Effectively, all 500 million+ Yahoo Mail users were presumed to be guilty.
Instead of searching stored messages, this directive forced Yahoo to scan incoming messages in real-time.
Because ALL incoming email messages were targeted, this program spied on every person who emailed a Yahoo Mail account, violating the privacy of users around the world who may not even have been using a US email service.
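
Mechanically, what the directive reportedly required is not exotic: it amounts to a real-time filter keyed on government-supplied selectors, something like this minimal sketch (the selector phrases and hook are invented; the real code reportedly lived inside Yahoo's spam and malware scanning stack):

    SELECTORS = {"hypothetical phrase one", "hypothetical phrase two"}

    def on_incoming_message(message, siphon):
        # Called once per message, in real time, before delivery.
        body = message["body"].lower()
        if any(s in body for s in SELECTORS):
            siphon(message)   # copy the match off to the requesting agency
        return message        # delivery proceeds either way; users see nothing

The point of the sketch is how little is involved: every account is scanned because the filter sits on the delivery path, not on any list of suspects.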
 
What does this mean for US tech companies?
This is a terrible precedent and ushers in a new era of global mass surveillance. It means that US tech companies that serve billions of users around the world can now be forced to act as extensions of the US surveillance apparatus. The problem extends well beyond Yahoo. As was reported earlier, Yahoo did not fight the secret directive because Yahoo CEO Marissa Mayer and the Yahoo legal team did not believe that they could successfully resist the directive.
We believe that Yahoo’s assessment is correct. If it was possible to fight the directive, Yahoo certainly would have done so since they previously fought against secret FISA court orders in 2008. It does not make sense that US surveillance agencies would serve Yahoo Mail with such an order but ignore Gmail, the world’s largest email provider, or Outlook. There is no doubt that the secret surveillance software is also present in Gmail and Outlook, or at least there is nothing preventing Gmail and Outlook from being forced to comply with a similar directive in the future.  From a legal perspective, there is nothing that makes Yahoo particularly vulnerable, or Google particularly invulnerable.
Google and Microsoft have come out to deny they participated in US government mandated mass surveillance, but under a National Security Letter (NSL) gag order, Google and Microsoft would have no choice but to deny the allegations or risk breaking US law (our analysis of Yahoo’s denial is at the bottom of this post). Again, there is no conceivable reason US intelligence would target Yahoo but ignore Gmail, so we must consider this to be the most probable scenario, particularly since gag orders have become the norm rather than the exception.
In effect, the US government has now officially co-opted US tech companies to perform mass surveillance on all users, regardless of whether they are under US jurisdiction or not. Given the huge amount of data that Google has, this is a truly scary proposition.
How does this impact ProtonMail?
ProtonMail’s secure email service is based in Switzerland and all our servers are located in Switzerland, so all user data is maintained under the protection of Swiss privacy laws. ProtonMail cannot be compelled to perform mass surveillance on our users, nor be compelled to act on behalf of US intelligence. ProtonMail also utilizes end-to-end encryption which means we do not have the capability to read user emails in the first place, so we couldn’t hand over user email data even if we wanted to.
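
The end-to-end claim rests on ordinary public-key mechanics: mail is encrypted on the sender's device with the recipient's public key, so the server only ever holds ciphertext. ProtonMail actually uses OpenPGP; the sketch below shows the principle with the widely used Python cryptography package.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # The recipient generates a keypair; the private half never leaves
    # their device, so the provider never sees it.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # The sender encrypts locally; this ciphertext is all the server stores.
    ciphertext = public_key.encrypt(b"see you at the usual place", oaep)

    # Without the private key, the provider has nothing meaningful to hand over.
    print(private_key.decrypt(ciphertext, oaep))
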
However, since email is an open system, any unencrypted email that goes out of ProtonMail, to Yahoo Mail for example, could potentially have been swept up by these mass surveillance programs and sent to US government agencies. This is why if you want to avoid having your communications scanned and saved by US government agencies, it is important to invite friends, family, and colleagues to use non-US email accounts such as ProtonMail or other email services offered by European companies.
What can the rest of the world do about this?
Unfortunately, the tech sector today is entirely dominated by US companies. Just like Google has a monopoly on search, the US government has a near monopoly on mass surveillance. Even without US government pressure, most US tech companies also have perverse economic incentives to slowly chip away at digital privacy.
This week, we have again seen how easily the massive amounts of private data retained by US tech companies can be abused by US intelligence for their own purposes. Without alternatives to the US tech giants, the rest of the world has no choice but to consent to this. This is an unprecedented challenge, but it also presents an unprecedented opportunity, particularly for Europe.
Now is the time for Europe to invest in its own tech sector, unbeholden to outside interests. This is the only way the European community can continue to safeguard the European ideals of privacy, liberty, and freedom online. It is time for European governments and citizens to act before it is too late.
The only chance for privacy to prevail against these attacks is for the global community to support a new generation of web services which protect privacy by default. These services, such as ProtonMail’s encrypted email service, must operate with a business model where users can donate or pay for services, instead of giving up data and privacy. The security community also has an obligation to make these new services just as easy to use as the ones they replace.
Services such as secure email, search, and cloud storage are now vital to our lives. Their importance means that for the good of all citizens, we need to develop private alternatives that are aligned with users, and free from corporate greed and government overreach. Crowdfunded services like ProtonMail are rising to the challenge, but we need more support from the global community to successfully take on better funded US tech giants. Privacy matters, and your support is essential to ensure the Internet of the future is one that protects our rights.

Best Regards,
The ProtonMail Team
You can get a free secure email account from ProtonMail here.
You can support our mission by upgrading to a paid plan or donating so that we can grow beyond email.

Analysis of Yahoo Denial:

Yahoo, like every other US tech company, has issued a denial, basically denying the Reuters account of the mass surveillance. Here is Yahoo’s denial, word for word:
“The article is misleading. We narrowly interpret every government request for user data to minimize disclosure. The mail scanning described in the article does not exist on our systems.”
It is curious that Yahoo’s response to this incident is only 29 words, but upon closer examination, it is a very carefully crafted 29 words. First, Yahoo calls the reports misleading. This is a curious choice of words because it does not claim that the report is false. Finally, Yahoo states that, “The mail scanning described in the article does not exist on our systems.” While this could be a true statement, it does NOT deny that the scanning could have been present on Yahoo’s systems in the past.
The same day as the Yahoo denial, the New York Times obtained independent verification of the Reuters story from two US government officials. This allowed the New York Times to confirm the following facts:
Yahoo is in fact under a gag order and from a legal standpoint, they cannot confirm the mass surveillance (in other words, they must deny the story or avoid making any statements that would be seen as a confirmation).
The Yahoo mass data collection did in fact take place, but the collection is no longer occurring at the present time. Thus, we now understand the disingenuous wording of the last sentence in Yahoo’s statement.
Yahoo’s denial (or non-denial, as the case may be), followed immediately by confirmation from the NYT, demonstrates the new reality that denials by US tech companies cannot really be taken at face value anymore. It is not that US tech companies are intentionally trying to mislead their customers, but many times they have no choice due to the gag orders that now inevitably accompany any government requests. If statements from US tech companies turn out to be suspect (as in the Yahoo example), it becomes highly unlikely that the public will ever know the truth, and that brings us to a dangerous place.

About the Author

Andy Yen
Andy is the Co-Founder of ProtonMail. He is a long time advocate of privacy rights and has spoken at TED, SXSW, and the Asian Investigative Journalism Conference about online privacy issues. Previously, Andy was a research scientist at CERN and has a PhD in Particle Physics from Harvard University. You can watch his TED talk online to learn more about ProtonMail's mission.
 

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69460
    • View Profile
Re: Privacy, Big Brother (State and Corporate) and the 4th & 9th Amendments
« Reply #1071 on: September 14, 2017, 11:01:39 PM »
 :-o :-o :-o

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
How to hide from the internet’s surveillance machine
« Reply #1072 on: September 20, 2017, 12:39:17 PM »
http://www.futurity.org/surveillance-privacy-internet-book-1096512/

How to hide from the internet’s surveillance machine
Posted by Eileen Reynolds-NYU January 27th, 2016

    
You are free to share this article under the Attribution 4.0 International license.

It’s a common assumption that being online means you’ll have to part ways with your personal data and there’s nothing you can do about it.

Not true, according to two communication professors. In their new book, Obfuscation: A User’s Guide for Privacy and Protest (MIT Press, 2015), they argue both that your privacy is being eroded through acts way, way more heinous than you might think, and that contrary to popular belief, there is something you can do about it.

Part philosophical treatise and part rousing how-to, Obfuscation reads at times as an urgent call to arms.

“We mean to start a revolution with this book,” its authors declare. “Although its lexicon of methods can be, and has been, taken up by tyrants, authoritarians, and secret police, our revolution is especially suited for use by the small players, the humble, the stuck, those not in a position to decline or opt out or exert control.”

“One of the tricky things about online tracking is that it’s so complex and invisible that we aren’t necessarily cognizant of it happening,” says Finn Brunton, coauthor and professor at New York University. “Part of the goal of Obfuscation is to draw attention to precisely that problem.”



Consider the trick by which, in loading a single (practically invisible) pixel onto a website you’re visiting, an ad server can, without your knowledge, collect all kinds of information about the browser and device that you’re using—information that could then be used down the line to, say, jack up the price on a plane ticket the next time you’re making travel arrangements, serve up a selection of higher-end goods the next time you search on an online retailer’s site, or, on the flip side, make it tougher for you to get a loan, if something about your data gets flagged as a credit risk.
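
The whole trick fits in a handful of lines. The embedding page includes something like <img src="https://tracker.example/pixel.gif" width="1" height="1">, and the tracker logs whatever the browser volunteers on that request. Below is a minimal, hypothetical server side using Flask; the domain and cookie name are invented.

    from flask import Flask, Response, request

    app = Flask(__name__)

    # A valid 1x1 transparent GIF (43 bytes).
    PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00!"
             b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
             b"\x00\x00\x02\x02D\x01\x00;")

    @app.route("/pixel.gif")
    def pixel():
        # All of this arrives without the visitor doing anything at all.
        print({"ip": request.remote_addr,
               "page": request.referrer,              # the site that embedded us
               "browser": request.user_agent.string,  # fingerprinting input
               "uid": request.cookies.get("uid")})    # cross-site identifier, if set
        return Response(PIXEL, mimetype="image/gif")

    if __name__ == "__main__":
        app.run()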

This is a clear example of what Brunton and coauthor Helen Nissenbaum, also a professor at NYU, describe as “information asymmetry,” where, as they write, the companies collecting data “know much about us, and we know little about them or what they can do.”

The surveillance background

It’s not just that we haven’t agreed to having our personal information collected, it’s that the invisible processes of dossier building are so complex, and their consequences so difficult to predict, that it would be virtually impossible to understand exactly what we’re being asked to consent to.

Whereas NSA snooping makes headlines, other forms of quiet surveillance go unnoticed (and unregulated), to the benefit of shadowy entities making bank in the data economy—or even police using software to calculate citizens’ threat “scores.”

“Machines don’t forget,” Brunton says. Suppose you have an agreement with one company, “the best company run by the best people,” he says, “but then they go bankrupt, or get subpoenaed, or acquired. Your data ends up on the schedule of assets,” and then you don’t know where it might end up.


To be clear, the authors—whose manifesto irked critics who argue that these kinds of transactions are what finance the “free” internet—aren’t against online advertising per se.

“Before ad networks started the surveillance background,” Nissenbaum explains, “there was traditional advertising, where Nike could buy an ad space on, say, the New York Times [website], or contextual advertising, where Nike would buy space on Sports Illustrated. There were plenty of ways of advertising that didn’t involve tracking people.”

Nowadays, though, Brunton says, “Many online sites that produce content you use and enjoy don’t get that much money out of the advertising, and yet there’s a whole galaxy of third-party groups on the back end swapping data back and forth for profit, in a way that’s not necessarily more effective for the merchant, the content provider, or you.

“Then add on top of it all that the data can be misused, and you have a network that is less secure and built around surveillance. I think that starts to shift the balance in favor of taking aggressive action.”

That’s where obfuscation—defined in the book as “the production of noise modeled on an existing signal in order to make a collection of data more ambiguous, confusing, harder to exploit, more difficult to act on, and therefore less valuable”—comes in.

TrackMeNot, for example, one of several elegant obfuscation tools designed by Nissenbaum and NYU computer science colleagues, serves up bogus queries to thwart search engines’ efforts to build a profile on you, so that when you search, say, “leather boots,” it also sends along “ghost” terms like “Tom Cruise,” “Spanish American War,” and “painters tape” (which don’t affect your search results). Another tool, AdNauseam, registers a click on all the ads in your ad blocker, rendering futile any attempt to build a profile of your preferences based on ads you click.
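
The principle behind TrackMeNot fits in a few lines: bury the genuine query in plausible decoys so that the profile assembled from your search log is mostly noise. This sketch only prints the queries it would send; the real browser extension shapes and times its ghost traffic far more carefully.

    import random

    DECOY_POOL = ["Tom Cruise", "Spanish American War", "painters tape",
                  "sourdough starter", "transmission fluid", "opera tickets"]

    def obfuscated_search(real_query, n_decoys=5):
        queries = random.sample(DECOY_POOL, n_decoys) + [real_query]
        random.shuffle(queries)   # the observer can't tell signal from noise
        for q in queries:
            print("issuing:", q)  # a real tool would submit each to the engine

    obfuscated_search("leather boots")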

History lessons

Even as they look to future battles, Brunton and Nissenbaum draw inspiration from the past, offering a compendium of examples of obfuscation tactics used throughout history.

World War II planes released chaff—strips of black paper coated with foil—to overwhelm enemy radar with false targets. Poker players sometimes employ false tells; baseball coaches hide signs amid a string of meaningless hand gestures.

People worried that their private conversations may be being recorded can play a “babble tape” in the background—an update to the classic mobster strategy of meeting in noisy bathrooms to safeguard against FBI audio surveillance.


Shoppers can swap loyalty cards with strangers to prevent brick-and-mortar stores from building a record of their purchases. The orb-weaving spider, vulnerable to attacks by wasps, builds spider decoys to position around its web.

Brunton and Nissenbaum are often asked in interviews about what simple steps even technophobes can take to protect their privacy. The answer: It depends on what scares you most.

“Are you worried about Google?” Brunton asks. “About your insurance company? Where are the places that you want to push back?” A theme that emerges in the book is that obfuscation tactics, while often similar in principle, vary a lot in practice; each unique threat requires a unique defense.

“Camouflage is often very specific,” Nissenbaum explains. “This animal is worried about these particular predators with this particular eyesight. It’s a general thing but in the instance, it is quite specialized.”

That makes for a big challenge, since there are so many threats—and the notion of “opting out” of all types of surveillance has become so impractical as to be nearly nonsensical. (In the book, Brunton and Nissenbaum quip that it would mean leading “the life of an undocumented migrant laborer of the 1920s, with no internet, no phones, no insurance, no assets, riding the rails, being paid off the books for illegal manual work.”)

Brunton, for example, refuses to use E-ZPass (which, in addition to enabling your cashless commute, announces your location to readers that could be waiting anywhere—not just in tollbooths), but can’t resist the convenience of Google Maps. And Nissenbaum declined to share her location with acquaintances using the iPhone’s “Find My Friends” app, but lamented that there’s no box to check to keep Apple from knowing her whereabouts.

Brunton and Nissenbaum stress that obfuscation isn’t a solution to the problem of constant surveillance, but rather a stopgap to draw attention to the issue and the need for better regulation.

“The ideal world for me,” Nissenbaum says, is “one where you don’t need to obfuscate.”

She draws an analogy between our time and the moment when, soon after telephones became mainstream, the US passed laws forbidding phone companies from listening in on their customers’ conversations.

“You could imagine a different route, where they could eavesdrop and say, ‘Oh, I can hear you discussing with your mom that you would like to go to Mexico in the summer, why don’t we send you a few coupons for Mexican travel?'” Until we pass similar laws to address our current predicament, we’ll be stuck with “the information universe eavesdropping on everything we do.”

Brunton draws an even bolder comparison—between the dawn of the information age and the (much) earlier transition from agrarian to industrial life. Indeed, history is a testament to how societies can and do find equilibrium with relation to transformative new technologies.

The bad news, in the case of the Industrial Revolution, though, is that “in the middle of that shift, horrific things happened to huge populations of people,” Brunton says. Today, he argues, we have the opportunity to prevent the digital equivalent of such horrors. “Can we find ways to prevent the worst outcomes for vulnerable populations?”

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
Will Facebook become the World's Most Powerful Government Contractor?
« Reply #1073 on: September 20, 2017, 05:44:24 PM »
http://globalguerrillas.typepad.com/globalguerrillas/2017/09/will-facebook-become-the-worlds-most-powerful-government-contractor.html

MONDAY, 18 SEPTEMBER 2017
Will Facebook become the World's Most Powerful Government Contractor?
Facebook, with a COMPLETE social graph, becomes more than an advertising platform. It becomes an arm of government. 

[Image: Facebook's global social network visualized]
Here's how.  Facebook recently passed:

2 billion monthly users.
That’s ~70% of the 2.8 billion Internet users living outside of China/Russia (they use a different social networking system).
With slowing rates of growth for Facebook and the Internet (due to saturation), Facebook is likely to hit 3.5 billion monthly users by 2025.
The Complete Social Graph
At 3.5 billion users in 2025, Facebook’s social network will be more than half of the 6.5 billion people living outside of China/Russia. That’s a network that is large enough and deep enough to:

create a global census that can “see” nearly everyone on the planet, even if they don’t have a Facebook account.
enable real-time tracking on nearly everyone on the planet using smartphone GPS data and ancillary information (mentions of location/who you are with/pictures).
create the largest micro-targeting database on earth, from pictures to posted links to likes. Details on the interests and desires of billions of people.
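
The ‘census’ point follows from simple set arithmetic over uploaded contact lists: people who never joined still appear because their friends’ address books mention them. A toy illustration (all names invented):

    # Each user who syncs contacts uploads everyone in their address book.
    uploads = {
        "alice": {"bob", "carol", "dmitri"},
        "bob":   {"alice", "dmitri", "erin"},
        "carol": {"alice", "frank"},
    }

    members = set(uploads)                 # people with accounts
    seen = set().union(*uploads.values())  # everyone the platform can 'see'
    shadow = seen - members                # visible, but never signed up

    print(shadow)  # {'dmitri', 'erin', 'frank'}: a census beyond the user base
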
What Does Facebook do with a Complete Social Graph?

The simple and straightforward answer is to build a very profitable advertising platform. However, the success of that advertising platform will be based on the ability of Facebook to avoid intrusive government regulation. To accomplish that, Facebook will develop services it can provide governments to better secure, control, and manage their citizens in a volatile global environment. In exchange for these services, Facebook will avoid regulations that will limit its ability to make money. Here’s more detail on the services it could provide:

Surveillance. The ability to ID anyone using facial recognition AIs (trained on the trillions of photos uploaded to the platform) and then track their movements globally. Border security and access control (buildings and government services). Tracking movement domestically (from CCTVs to Fastlane pics).
Censorship. The ability to limit domestic political conversations to those approved by the government. As the primary source of news in nearly every country (outside of China), Facebook has the ability to limit sources to approved channels, prevent the discussion of banned topics, and steer conversations in subtle ways.
Counter-terrorism. Facebook will peer into private conversations and do the network analysis to ID potential extremists. It will also actively sabotage or intervene in terrorist/extremist recruiting networks to damage their effectiveness in securing recruits. Facebook now has the ability to offer NSA scale services, with better data, to nations around the world.
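
‘Network analysis’ here means standard graph measures. The toy below flags the best-connected account in a who-talks-to-whom graph using plain degree counting; real systems would use far richer features, and the edges are invented.

    from collections import Counter

    # Undirected 'who talks to whom' edges, invented for illustration.
    edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"),
             ("d", "e"), ("d", "f"), ("a", "e")]

    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1

    # The highest-degree nodes are the network's hubs and brokers.
    print(degree.most_common(3))  # [('a', 4), ('d', 3), ...]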

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
The Long Night Ahead
« Reply #1074 on: September 23, 2017, 10:15:30 AM »
http://globalguerrillas.typepad.com/globalguerrillas/2017/09/the-long-night-ahead.html

FRIDAY, 22 SEPTEMBER 2017
The Long Night Ahead
Facebook just declared war against "disruptive" information.  In addition to hundreds of new human censors, they are training AI censors capable of identifying and deleting 'unacceptable' information found in the discussions of all two billion members in real time. This development highlights what the real danger posed by a socially networked world actually is.

The REAL danger facing a world interconnected by social networking isn't disruption.  As we have seen on numerous occasions, the danger posed by disruptive information and events is fleeting. Disruption, although potentially painful in the short term, doesn't last, nor is it truly damaging over the long term. In fact, the true danger posed by an internetworked world is just the opposite of disruption.  

This danger is an all-encompassing online orthodoxy: a sameness of thought and approach enforced by hundreds of millions of socially internetworked adherents. A global orthodoxy that ruthlessly narrows public thought down to a single, barren, ideological framework. A ruling network that prevents dissent and locks us into stagnation and inevitable failure as it runs afoul of reality and human nature.

This ruling network already exists.  It already has millions of online members and it is growing and deepening with each passing day -- extending its tendrils into the media, the civil service, tech companies, and academia.  There's little doubt that over time it will eventually exert decisive influence over the entire government as well.  

However, in order to exert authoritarian control over our decision making, it needs control over the flow of information in our society. Merely controlling the online debate is insufficient.  For real power, the ruling network needs to control the information flows on our information infrastructure -- Facebook, Google, and Amazon -- and that's exactly the power it is now getting.  

However, as large and powerful as this network already is, I still believe this future is reversible. We still have a short time before a long night descends across the world.

Sincerely,

John Robb

Writing on a cool New England afternoon.  Feeling a bit like Hayek today.

PS:  As if on cue, authoritarianism that diminishes the role of the individual is in the wind:

a majority of US students now oppose free speech on campus.
a free fall in support for democracy as a preferred form of governance among young people.
a majority of young people now oppose capitalism.

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
FaceHuggerBook
« Reply #1075 on: October 24, 2017, 08:03:37 AM »

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
Big Data and Big Brother in China
« Reply #1076 on: October 24, 2017, 03:14:39 PM »
http://www.wired.co.uk/article/chinese-government-social-credit-score-privacy-invasion
Big data meets Big Brother as China moves to rate its citizens
The Chinese government plans to launch its Social Credit System in 2020. The aim? To judge the trustworthiness – or otherwise – of its 1.3 billion residents
 

By RACHEL BOTSMAN

Saturday 21 October 2017
On June 14, 2014, the State Council of China published an ominous-sounding document called "Planning Outline for the Construction of a Social Credit System". In the way of Chinese policy documents, it was a lengthy and rather dry affair, but it contained a radical idea. What if there was a national trust score that rated the kind of citizen you were?

Imagine a world where many of your daily activities were constantly monitored and evaluated: what you buy at the shops and online; where you are at any given time; who your friends are and how you interact with them; how many hours you spend watching content or playing video games; and what bills and taxes you pay (or not). It's not hard to picture, because most of that already happens, thanks to all those data-collecting behemoths like Google, Facebook and Instagram or health-tracking apps such as Fitbit. But now imagine a system where all these behaviours are rated as either positive or negative and distilled into a single number, according to rules set by the government. That would create your Citizen Score and it would tell everyone whether or not you were trustworthy. Plus, your rating would be publicly ranked against that of the entire population and used to determine your eligibility for a mortgage or a job, where your children can go to school - or even just your chances of getting a date.

A futuristic vision of Big Brother out of control? No, it's already getting underway in China, where the government is developing the Social Credit System (SCS) to rate the trustworthiness of its 1.3 billion citizens. The Chinese government is pitching the system as a desirable way to measure and enhance "trust" nationwide and to build a culture of "sincerity". As the policy states, "It will forge a public opinion environment where keeping trust is glorious. It will strengthen sincerity in government affairs, commercial sincerity, social sincerity and the construction of judicial credibility."

Others are less sanguine about its wider purpose. "It is very ambitious in both depth and scope, including scrutinising individual behaviour and what books people are reading. It's Amazon's consumer tracking with an Orwellian political twist," is how Johan Lagerkvist, a Chinese internet specialist at the Swedish Institute of International Affairs, described the social credit system. Rogier Creemers, a post-doctoral scholar specialising in Chinese law and governance at the Van Vollenhoven Institute at Leiden University, who published a comprehensive translation of the plan, compared it to "Yelp reviews with the nanny state watching over your shoulder".

For now, technically, participating in China's Citizen Scores is voluntary. But by 2020 it will be mandatory. The behaviour of every single citizen and legal person (which includes every company or other entity) in China will be rated and ranked, whether they like it or not.


Prior to its national roll-out in 2020, the Chinese government is taking a watch-and-learn approach. In this marriage between communist oversight and capitalist can-do, the government has given a licence to eight private companies to come up with systems and algorithms for social credit scores. Predictably, data giants currently run two of the best-known projects.

The first is with China Rapid Finance, a partner of the social-network behemoth Tencent and developer of the messaging app WeChat with more than 850 million active users. The other, Sesame Credit, is run by the Ant Financial Services Group (AFSG), an affiliate company of Alibaba. Ant Financial sells insurance products and provides loans to small- to medium-sized businesses. However, the real star of Ant is AliPay, its payments arm that people use not only to buy things online, but also for restaurants, taxis, school fees, cinema tickets and even to transfer money to each other.

Sesame Credit has also teamed up with other data-generating platforms, such as Didi Chuxing, the ride-hailing company that was Uber's main competitor in China before it acquired the American company's Chinese operations in 2016, and Baihe, the country's largest online matchmaking service. It's not hard to see how that all adds up to gargantuan amounts of big data that Sesame Credit can tap into to assess how people behave and rate them accordingly.

So just how are people rated? Individuals on Sesame Credit are measured by a score ranging between 350 and 950 points. Alibaba does not divulge the "complex algorithm" it uses to calculate the number but they do reveal the five factors taken into account. The first is credit history. For example, does the citizen pay their electricity or phone bill on time? Next is fulfilment capacity, which it defines in its guidelines as "a user's ability to fulfil his/her contract obligations". The third factor is personal characteristics, verifying personal information such as someone's mobile phone number and address. But the fourth category, behaviour and preference, is where it gets interesting.

Under this system, something as innocuous as a person's shopping habits become a measure of character. Alibaba admits it judges people by the types of products they buy. "Someone who plays video games for ten hours a day, for example, would be considered an idle person," says Li Yingyun, Sesame's Technology Director. "Someone who frequently buys diapers would be considered as probably a parent, who on balance is more likely to have a sense of responsibility." So the system not only investigates behaviour - it shapes it. It "nudges" citizens away from purchases and behaviours the government does not like.

Friends matter, too. The fifth category is interpersonal relationships. What does their choice of online friends and their interactions say about the person being assessed? Sharing what Sesame Credit refers to as "positive energy" online, nice messages about the government or how well the country's economy is doing, will make your score go up.
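
Alibaba keeps the algorithm secret, but the five disclosed factors suggest something like a weighted blend squashed into the published 350-950 range. In the sketch below only the factor names come from Sesame's own disclosure; the weights and sub-scores are pure invention.

    FACTORS = ["credit_history", "fulfilment_capacity", "personal_characteristics",
               "behaviour_and_preference", "interpersonal_relationships"]
    WEIGHTS = [0.35, 0.20, 0.15, 0.15, 0.15]  # invented; the real mix is secret

    def sesame_style_score(subscores):
        # subscores: factor name -> value in [0, 1]; result lands in 350-950.
        blended = sum(w * subscores[f] for f, w in zip(FACTORS, WEIGHTS))
        return round(350 + 600 * blended)

    print(sesame_style_score({"credit_history": 0.9,
                              "fulfilment_capacity": 0.8,
                              "personal_characteristics": 1.0,
                              "behaviour_and_preference": 0.4,  # the 'idle' gamer
                              "interpersonal_relationships": 0.6}))  # -> 815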

Alibaba is adamant that, currently, anything negative posted on social media does not affect scores (we don't know if this is true or not because the algorithm is secret). But you can see how this might play out when the government's own citizen score system officially launches in 2020. Even though there is no suggestion yet that any of the eight private companies involved in the ongoing pilot scheme will be ultimately responsible for running the government's own system, it's hard to believe that the government will not want to extract the maximum amount of data for its SCS from the pilots. If that happens, and continues as the new normal under the government's own SCS, it will result in private platforms acting essentially as spy agencies for the government. They may have no choice.


Posting dissenting political opinions or links mentioning Tiananmen Square has never been wise in China, but now it could directly hurt a citizen's rating. But here's the real kicker: a person's own score will also be affected by what their online friends say and do, beyond their own contact with them. If someone they are connected to online posts a negative comment, their own score will also be dragged down.

So why have millions of people already signed up to what amounts to a trial run for a publicly endorsed government surveillance system? There may be darker, unstated reasons - fear of reprisals, for instance, for those who don't put their hand up - but there is also a lure, in the form of rewards and "special privileges" for those citizens who prove themselves to be "trustworthy" on Sesame Credit.

If their score reaches 600, they can take out a Just Spend loan of up to 5,000 yuan (around £565) to use to shop online, as long as it's on an Alibaba site. Reach 650 points, they may rent a car without leaving a deposit. They are also entitled to faster check-in at hotels and use of the VIP check-in at Beijing Capital International Airport. Those with more than 666 points can get a cash loan of up to 50,000 yuan (£5,700), obviously from Ant Financial Services. Get above 700 and they can apply for Singapore travel without supporting documents such as an employee letter. And at 750, they get fast-tracked application to a coveted pan-European Schengen visa. "I think the best way to understand the system is as a sort of bastard love child of a loyalty scheme," says Creemers.
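
Those reward tiers amount to a simple threshold table. The figures below are the ones reported above; the lookup itself is just a filter.

    PRIVILEGES = [  # (minimum score, privilege), per the reported tiers
        (600, "Just Spend loan of up to 5,000 yuan"),
        (650, "car rental with no deposit; faster hotel and airport check-in"),
        (666, "cash loan of up to 50,000 yuan"),
        (700, "Singapore travel without supporting documents"),
        (750, "fast-tracked pan-European Schengen visa application"),
    ]

    def unlocked(score):
        return [perk for threshold, perk in PRIVILEGES if score >= threshold]

    print(unlocked(680))  # the first three tiers only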

Higher scores have already become a status symbol, with almost 100,000 people bragging about their scores on Weibo (the Chinese equivalent of Twitter) within months of launch. A citizen's score can even affect their odds of getting a date, or a marriage partner, because the higher their Sesame rating, the more prominent their dating profile is on Baihe.

Sesame Credit already offers tips to help individuals improve their ranking, including warning about the downsides of friending someone who has a low score. This might lead to the rise of score advisers, who will share tips on how to gain points, or reputation consultants willing to offer expert advice on how to strategically improve a ranking or get off the trust-breaking blacklist.


Indeed, Sesame Credit is basically a big-data, gamified version of the Communist Party's surveillance methods: the disquieting dang'an. The regime kept a dossier on every individual that tracked political and personal transgressions. A citizen's dang'an followed them for life, from schools to jobs. People started reporting on friends and even family members, raising suspicion and lowering social trust in China. The same thing will happen with digital dossiers. People will have an incentive to say to their friends and family, "Don't post that. I don't want you to hurt your score but I also don't want you to hurt mine."

We're also bound to see the birth of reputation black markets selling under-the-counter ways to boost trustworthiness. In the same way that Facebook Likes and Twitter followers can be bought, individuals will pay to manipulate their score. What about keeping the system secure? Hackers (some even state-backed) could change or steal the digitally stored information.

"People with low ratings will have slower internet speeds; restricted access to restaurants and the removal of the right to travel"
Rachel Botsman, author of ‘Who Can You Trust?’
The new system reflects a cunning paradigm shift. As we've noted, instead of trying to enforce stability or conformity with a big stick and a good dose of top-down fear, the government is attempting to make obedience feel like gaming. It is a method of social control dressed up in some points-reward system. It's gamified obedience.

In a trendy neighbourhood in downtown Beijing, the BBC news services hit the streets in October 2015 to ask people about their Sesame Credit ratings. Most spoke about the upsides. But then, who would publicly criticise the system? Ding, your score might go down. Alarmingly, few people understood that a bad score could hurt them in the future. Even more concerning was how many people had no idea that they were being rated.

Currently, Sesame Credit does not directly penalise people for being "untrustworthy" - it's more effective to lock people in with treats for good behaviour. But Hu Tao, Sesame Credit's chief manager, warns people that the system is designed so that "untrustworthy people can't rent a car, can't borrow money or even can't find a job". She has even disclosed that Sesame Credit has approached China's Education Bureau about sharing a list of its students who cheated on national examinations, in order to make them pay into the future for their dishonesty.

Penalties are set to change dramatically when the government system becomes mandatory in 2020. Indeed, on September 25, 2016, the State Council General Office updated its policy entitled "Warning and Punishment Mechanisms for Persons Subject to Enforcement for Trust-Breaking". The overriding principle is simple: "If trust is broken in one place, restrictions are imposed everywhere," the policy document states.

For instance, people with low ratings will have slower internet speeds; restricted access to restaurants, nightclubs or golf courses; and the removal of the right to travel freely abroad with, I quote, "restrictive control on consumption within holiday areas or travel businesses". Scores will influence a person's rental applications, their ability to get insurance or a loan and even social-security benefits. Citizens with low scores will not be hired by certain employers and will be forbidden from obtaining some jobs, including in the civil service, journalism and legal fields, where of course you must be deemed trustworthy. Low-rating citizens will also be restricted when it comes to enrolling themselves or their children in high-paying private schools. I am not fabricating this list of punishments. It's the reality Chinese citizens will face. As the government document states, the social credit system will "allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step".

According to Luciano Floridi, a professor of philosophy and ethics of information at the University of Oxford and the director of research at the Oxford Internet Institute, there have been three critical "de-centering shifts" that have altered our self-understanding: Copernicus's model of the Earth orbiting the Sun; Darwin's theory of natural selection; and Freud's claim that our daily actions are controlled by the unconscious mind.


Floridi believes we are now entering the fourth shift, as what we do online and offline merge into an onlife. He asserts that, as our society increasingly becomes an infosphere, a mixture of physical and virtual experiences, we are acquiring an onlife personality - different from who we innately are in the "real world" alone. We see this writ large on Facebook, where people present an edited or idealised portrait of their lives. Think about your Uber experiences. Are you just a little bit nicer to the driver because you know you will be rated? But Uber ratings are nothing compared to Peeple, an app launched in March 2016, which is like a Yelp for humans. It allows you to assign ratings and reviews to everyone you know - your spouse, neighbour, boss and even your ex. A profile displays a "Peeple Number", a score based on all the feedback and recommendations you receive. Worryingly, once your name is in the Peeple system, it's there for good. You can't opt out.

Peeple has forbidden certain bad behaviours, including mentioning private health conditions, profanity and sexism (however you objectively assess that). But there are few rules on how people are graded or standards about transparency.

China's trust system might be voluntary as yet, but it's already having consequences. In February 2017, the country's Supreme People's Court announced that 6.15 million of its citizens had been banned from taking flights over the past four years for social misdeeds. The ban is being pointed to as a step toward blacklisting in the SCS. "We have signed a memorandum… [with over] 44 government departments in order to limit 'discredited' people on multiple levels," says Meng Xiang, head of the executive department of the Supreme Court. Another 1.65 million blacklisted people cannot take trains.

Where these systems really descend into nightmarish territory is in the trust algorithms themselves, which are unfairly reductive: they take no account of context. For instance, one person might miss paying a bill or a fine because they were in hospital; another may simply be a freeloader. And therein lies the challenge facing all of us in the digital world, and not just the Chinese. If life-determining algorithms are here to stay, we need to figure out how they can embrace the nuances, inconsistencies and contradictions inherent in human beings and how they can reflect real life.
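
To see how reductive a context-blind scorer is, here is a minimal sketch in Python. The rules and weights are invented for illustration; they are not the actual Sesame Credit or SCS algorithm.

    # Hypothetical illustration only: neither the rules nor the weights
    # come from any real social-credit system.

    def naive_score(missed_payments):
        """Context-blind: every missed payment costs 50 points."""
        return 1000 - 50 * missed_payments

    def context_aware_score(payments):
        """Same data plus one context flag; excused misses cost nothing."""
        score = 1000
        for p in payments:
            if p["missed"] and not p.get("excused", False):
                score -= 50
        return score

    hospital_patient = [{"missed": True, "excused": True}]   # missed a bill while in hospital
    freeloader       = [{"missed": True, "excused": False}]  # simply didn't pay

    print(naive_score(1), naive_score(1))         # 950 950 -- indistinguishable
    print(context_aware_score(hospital_patient),  # 1000
          context_aware_score(freeloader))        # 950

One extra field is enough to separate the hospital patient from the freeloader; the complaint above is that today's scorers neither collect nor honour that field.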


You could see China's so-called trust plan as Orwell's 1984 meets Pavlov's dogs. Act like a good citizen, be rewarded and be made to think you're having fun. It's worth remembering, however, that personal scoring systems have been present in the west for decades.

More than 70 years ago, two men called Bill Fair and Earl Isaac invented credit scores. Today, companies use FICO scores to determine many financial decisions, including the interest rate on our mortgage or whether we should be given a loan.

The majority of Chinese people have never had a credit score, and so they can't get credit. "Many people don't own houses, cars or credit cards in China, so that kind of information isn't available to measure," explains Wen Quan, an influential blogger who writes about technology and finance. "The central bank has the financial data from 800 million people, but only 320 million have a traditional credit history." According to the Chinese Ministry of Commerce, the annual economic loss caused by lack of credit information is more than 600 billion yuan (£68bn).

China's lack of a national credit system is why the government is adamant that Citizen Scores are long overdue and badly needed to fix what they refer to as a "trust deficit". In a poorly regulated market, the sale of counterfeit and substandard products is a massive problem. According to the Organization for Economic Co-operation and Development (OECD), 63 per cent of all fake goods, from watches to handbags to baby food, originate from China. "The level of micro corruption is enormous," Creemers says. "So if this particular scheme results in more effective oversight and accountability, it will likely be warmly welcomed."


The government also argues that the system is a way to bring in those people left out of traditional credit systems, such as students and low-income households. Professor Wang Shuqin from the Office of Philosophy and Social Science at Capital Normal University in China recently won the bid to help the government develop the system that she refers to as "China's Social Faithful System". Without such a mechanism, doing business in China is risky, she stresses, as about half of the signed contracts are not kept. "Given the speed of the digital economy it's crucial that people can quickly verify each other's credit worthiness," she says. "The behaviour of the majority is determined by their world of thoughts. A person who believes in socialist core values is behaving more decently." She regards the "moral standards" the system assesses, as well as financial data, as a bonus.

Indeed, the State Council's aim is to raise the "honest mentality and credit levels of the entire society" in order to improve "the overall competitiveness of the country". Is it possible that the SCS is in fact a more desirably transparent approach to surveillance in a country that has a long history of watching its citizens? "As a Chinese person, knowing that everything I do online is being tracked, would I rather be aware of the details of what is being monitored and use this information to teach myself how to abide by the rules?" says Rasul Majid, a Chinese blogger based in Shanghai who writes about behavioural design and gaming psychology. "Or would I rather live in ignorance and hope/wish/dream that personal privacy still exists and that our ruling bodies respect us enough not to take advantage?" Put simply, Majid thinks the system gives him a tiny bit more control over his data.


When I tell westerners about the Social Credit System in China, their responses are fervent and visceral. Yet we already rate restaurants, movies, books and even doctors. Facebook, meanwhile, is now capable of identifying you in pictures without seeing your face; it only needs your clothes, hair and body type to tag you in an image with 83 per cent accuracy.

In 2015, the OECD published a study revealing that in the US there are at least 24.9 connected devices per 100 inhabitants. All kinds of companies scrutinise the "big data" emitted from these devices to understand our lives and desires, and to predict our actions in ways that we couldn't even predict ourselves.


Governments around the world are already in the business of monitoring and rating. In the US, the National Security Agency (NSA) is not the only official digital eye following the movements of its citizens. In 2015, the US Transportation Security Administration proposed the idea of expanding the PreCheck background checks to include social-media records, location data and purchase history. The idea was scrapped after heavy criticism, but that doesn't mean it's dead. We already live in a world of predictive algorithms that determine if we are a threat, a risk, a good citizen and even if we are trustworthy. We're getting closer to the Chinese system - the expansion of credit scoring into life scoring - even if we don't know we are.

So are we heading for a future where we will all be branded online and data-mined? It's certainly trending that way. Barring some kind of mass citizen revolt to wrench back privacy, we are entering an age where an individual's actions will be judged by standards they can't control and where that judgement can't be erased. The consequences are not only troubling; they're permanent. Forget the right to delete or to be forgotten, to be young and foolish.

While it might be too late to stop this new era, we do have choices and rights we can exert now. For one thing, we need to be able to rate the raters. In his book The Inevitable, Kevin Kelly describes a future where the watchers and the watched will transparently track each other. "Our central choice now is whether this surveillance is a secret, one-way panopticon - or a mutual, transparent kind of 'coveillance' that involves watching the watchers," he writes.

Our trust should start with individuals within government (or whoever is controlling the system). We need trustworthy mechanisms to make sure ratings and data are used responsibly and with our permission. To trust the system, we need to reduce the unknowns. That means taking steps to reduce the opacity of the algorithms. The argument against mandatory disclosures is that if you know what happens under the hood, the system could become rigged or hacked. But if humans are being reduced to a rating that could significantly impact their lives, there must be transparency in how the scoring works.


In China, certain citizens, such as government officials, will likely be deemed above the system. What will be the public reaction when their unfavourable actions don't affect their score? We could see a Panama Papers 3.0 for reputation fraud.

It is still too early to know how a culture of constant monitoring plus rating will turn out. What will happen when these systems, charting the social, moral and financial history of an entire population, come into full force? How much further will privacy and freedom of speech (long under siege in China) be eroded? Who will decide which way the system goes? These are questions we all need to consider, and soon. Today China, tomorrow a place near you. The real questions about the future of trust are not technological or economic; they are ethical.

If we are not vigilant, distributed trust could become networked shame. Life will become an endless popularity contest, with us all vying for the highest rating that only a few can attain.

This is an extract from Who Can You Trust? How Technology Brought Us Together and Why It Might Drive Us Apart (Penguin Portfolio) by Rachel Botsman, published on October 4. Since this piece was written, The People's Bank of China delayed the licences to the eight companies conducting social credit pilots. The government's plans to launch the Social Credit System in 2020 remain unchanged


DougMacG

  • Power User
  • ***
  • Posts: 18294
    • View Profile
Privacy, Big Brother (State and Corporate): Google is reading your Docs too!
« Reply #1077 on: November 01, 2017, 12:43:59 PM »
Besides reading your emails, knowing all your searches and tracking your location and listening in your home, Google is reading your Docs too.

http://www.telegraph.co.uk/technology/2017/11/01/google-reading-docs/

Google admits its new smart speaker was eavesdropping on users
http://money.cnn.com/2017/10/11/technology/google-home-mini-security-flaw/index.html

A (waived?) right of Privacy

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
Re: Privacy, Big Brother (State and Corporate): Google is reading your Docs too!
« Reply #1078 on: November 01, 2017, 07:40:47 PM »
Besides reading your emails, knowing all your searches and tracking your location and listening in your home, Google is reading your Docs too.

http://www.telegraph.co.uk/technology/2017/11/01/google-reading-docs/

Google admits its new smart speaker was eavesdropping on users
http://money.cnn.com/2017/10/11/technology/google-home-mini-security-flaw/index.html

A (waived?) right of Privacy

Know anyone who really reads the terms of service for anything?

Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69460
    • View Profile
WSJ: Harper: Is it unreasonable to expect cell phone privacy?
« Reply #1079 on: November 29, 2017, 08:00:18 AM »



By Jim Harper
Nov. 28, 2017 6:36 p.m. ET

A case that comes before the Supreme Court Wednesday may erode or solidify Justice Antonin Scalia’s legacy. How the justices decide in Carpenter v. U.S. won’t matter as much as how they reason. If they use the “reasonable expectation of privacy” test to decide whether the government can access cellphone users’ location data without a warrant, Scalia’s contributions to Fourth Amendment jurisprudence will be negated. But if the high court recognizes that data as owned in part by cellphone users, Scalia’s legacy will be secured, along with the Constitution’s safeguards against unreasonable search and seizure.

The petitioner, Timothy Ivory Carpenter, was convicted in 2014 of participating in a string of armed robberies in the Detroit area and sentenced to 116 years in federal prison. Investigators obtained court orders netting 127 days of Mr. Carpenter’s cellphone records, showing that his phone was in communication with cell towers near the sites of four robberies. The court will decide whether investigators should have gained access to that data under a relatively low statutory standard requiring that the information be “relevant” to an ongoing investigation, or whether they should have asked a court for a warrant based on probable cause.

Since 1967, the dominant approach to the Fourth Amendment has been derived from a solo concurrence in Katz v. U.S. setting out the reasonable-expectation-of-privacy test. That test defines a search as having occurred anytime a government agent violates a defendant’s reasonable privacy expectations. It has often operated as a one-way ratchet against Fourth Amendment protection, using curious logic.


Because possession of drugs and other contraband is illegal, concealing them is unreasonable. Thus, courts have held that whatever government action turns up such contraband is not a search. Actions that most would consider searches, such as directing drug-sniffing dogs at people and flying planes low over suspects’ houses, are treated as nonsearches that don’t require warrants.

Smith v. Maryland (1979) is the premier precedent supporting government access to telecommunications data. In Smith, government agents acting without a warrant persuaded a Baltimore telephone company to place a pen register on the phone line of a burglary-and-stalking suspect. The device captured the numbers of his outgoing calls, showing that he had dialed the victim’s home number. The Supreme Court found there was no reasonable expectation of privacy and thus no seizure or search.

Today the government interprets Smith as providing warrantless access to troves of data about the locations and movements of every cellphone user, subject to that statutory relevance standard. That data can reveal sensitive information, such as when people seek medical or psychological treatment, where they go to church, their relationships and business dealings, attendance at political events, and more. The appeals court in Carpenter adopted Smith’s reasoning.
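
To illustrate why such records are sensitive, here is a toy sketch of how little code it takes to pull a likely home and workplace out of a log of (hour, cell-tower) pairs. The data and tower names are invented:

    # Toy data: (hour_of_day, tower_id) pairs from a week of location records.
    from collections import Counter

    pings = [
        (2, "tower_home"), (3, "tower_home"), (23, "tower_home"),
        (10, "tower_work"), (11, "tower_work"), (14, "tower_work"),
        (9, "tower_clinic"),  # a recurring appointment stands out immediately
    ]

    def most_common_tower(pings, hours):
        """Most frequent tower seen during the given hours of the day."""
        c = Counter(tower for hour, tower in pings if hour in hours)
        return c.most_common(1)[0][0] if c else None

    print("likely home:", most_common_tower(pings, hours=range(0, 6)))   # tower_home
    print("likely work:", most_common_tower(pings, hours=range(9, 18)))  # tower_work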

For years, Scalia pointedly avoided the reasonable-expectation-of-privacy test. His 2001 decision in Kyllo v. U.S., for example, addressed the use of a thermal imaging device to detect heat patterns emanating from a home thought to contain a marijuana-growing operation. Scalia didn’t refer to privacy expectations in his argument. Rather, he claimed that when government agents use an exotic device “to explore the details of the home that would previously have been unknowable without physical intrusion, the surveillance is a ‘search’ and is presumptively unreasonable without a warrant.”

In 2012, another major Scalia decision again steered around the potholed logic of “reasonable expectations.” In U.S. v. Jones, Scalia’s majority opinion found that attaching a Global Positioning System device to a car without a warrant, and using that device to monitor the vehicle’s movements, constitutes a search. In 2015, the Second U.S. Circuit Court of Appeals in New York polished Scalia’s logic. Attachment of the GPS device was “a technical trespass on the defendant’s vehicle”—a small but important seizure that put the car to the government’s purposes.

Gauzy appeals to privacy expectations only complicate what ought to be straightforward: Searching is searching; seizing is seizing.

Cellphone privacy policies give consumers many rights to control their telecommunications data. Essentially these are property rights, which on their own should require that the government obtain a warrant before searching and seizing digital records. In Carpenter the court may find that such contracts help create an “expectation of privacy.” Or it may find that there isn’t a reasonable privacy expectation. Seizing data and examining its contents would become neither seizure nor search, giving government agents a free hand.

That kind of illogic would be a loss for Justice Scalia’s legacy. The court should find that telecommunications data are owned in part by cellphone users. A warrant is required for the government to take such property and examine it.

Mr. Harper is vice president of the Competitive Enterprise Institute, which has filed an amicus brief in Carpenter v. U.S.

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
I did not consent to this
« Reply #1080 on: December 04, 2017, 06:52:37 PM »

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
Is your cell phone wiretapping you?
« Reply #1081 on: December 23, 2017, 08:28:49 AM »
http://www.dailymail.co.uk/sciencetech/article-5200661/Is-phone-listening-word-say.html

Is your phone listening to your every word and WATCHING you through your phone's camera? How thousands of people are convinced 'coincidence' adverts are anything but
Writer Jen Lewis posted viral image to Twitter of a Facebook ad featuring women wearing identical outfit to her
Tweet went viral with hundreds sharing their stories of social networks 'listening in' on conversations
Journalist Julia Lawrence (with the help of daughter Lois) investigated the powers of online advertising for the Daily Mail 
By JULIA LAWRENCE FOR THE DAILY MAIL

PUBLISHED: 20:44 EST, 20 December 2017 | UPDATED: 08:52 EST, 21 December 2017


We were sitting in a rooftop restaurant, 30 storeys up, overlooking the Empire State building in New York, when my daughter confessed that she thought she was being spied on by a professional network of cyberspooks.

‘Look at this,’ said Lois, presenting me with her smartphone, where an advert for a snazzy little instamatic camera was displayed. It had popped up a few seconds earlier, when she’d logged on to Instagram.

She met my quizzical ‘so what?’ face with exasperation.

‘What were we talking about? Just now? In the street, down there?’ she said.

Picture perfect: Jen Lewis (left) and the alarmingly similar advert sent on Facebook shortly after

Sure enough, we’d been window shopping before our lunch reservation, and spotted a little gadget shop. I remembered Lois had commented on the instamatic cameras on display (dropping a few hints for her forthcoming 21st birthday, I suspected).

We’d had a brief conversation about how they were all the rage in the Eighties, and how one of my memories of Christmas parties at my parents’ house was listening to that familiar ‘whirrr’ and watching the wealthier guests flapping about the instant photos, as everyone waited for them to dry.

They were the selfies of their day, and good fun (if you could afford the camera film). How lovely that they were making a comeback, I commented. And we moved on.

Then, less than 20 minutes later, an advert popped up on Lois’s phone, for the exact same product. Same colour, same model, same everything.

‘They’re listening, they’re watching,’ she said.

‘Oh don’t be daft,’ I replied. ‘Who’s listening? Who’d want to listen to us?’

‘I’m serious,’ said Lois. ‘This keeps happening. This is no coincidence. Someone is listening to our conversations. Advertisers. They’re listening via our phones’ microphones.’

Our activity on websites and apps and demographic information is gathered using increasingly sophisticated technology to bring us personalised adverts (stock image)

A little melodramatic and paranoid, you might think. I certainly did. I assumed Lois had simply been researching the product online before we flew to New York, and had forgotten.

We all know ‘targeted advertising’ has been prevalent for some years now, via our social media apps and search engines. Facebook was one of the first to introduce it four years ago. It’s no big secret: go on the John Lewis website and choose a blouse, or Google Nigella’s smart eye-level oven, and the next time you log on to Facebook or Instagram, there’s a good chance they’ll pop up as adverts there.

While it felt a little uncomfortable and intrusive to begin with, we’ve all sort of got used to it.

Our activity on websites and apps and demographic information is gathered using increasingly sophisticated technology to bring us personalised adverts.

People’s electronic markers — known as ‘cookies’ — from websites they visit are gathered and passed to advertisers so they can target us with products relevant to our tastes and interests (and ones we’re more likely to buy).
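
Mechanically, this works because a tracker sets a unique cookie the first time a browser loads its pixel, then reads the same cookie back from every other site that embeds it. A minimal server-side sketch, assuming Flask is installed; the endpoint and cookie names are hypothetical, not any real ad network's code:

    # Minimal sketch of a cross-site tracking pixel (hypothetical names).
    import uuid
    from flask import Flask, request, make_response

    app = Flask(__name__)
    profiles = {}  # visitor_id -> pages seen; a real tracker uses a database

    @app.route("/pixel.gif")
    def pixel():
        visitor_id = request.cookies.get("visitor_id") or str(uuid.uuid4())
        # The embedding page passes its own URL, so the tracker accumulates
        # browsing history across every site that includes the pixel.
        profiles.setdefault(visitor_id, []).append(request.args.get("page", "?"))
        resp = make_response(b"GIF89a")  # placeholder 1x1 image bytes
        resp.set_cookie("visitor_id", visitor_id, max_age=365 * 24 * 3600)
        return resp

    if __name__ == "__main__":
        app.run()

Note that no microphone is involved: the same mechanism that serves you the blouse advert is enough to follow you around the web.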


It is not illegal. Although under the Data Protection Act 1998, a person has to actively consent to their data being collected and the purpose for which it’s used, few people actually take time to police what they consent to.

The terms and conditions and privacy statements you sign up to when you buy a smartphone or download an app are rarely scrutinised before we tick the box and wade in.

But Lois swore she hadn’t Googled an instamatic camera. That was the first time she’d ever had a conversation about them. ‘I’m telling you, they’re listening,’ she said, and I admit I stuffed my own phone a little deeper into my bag. Could she be right?

Well, hundreds of other people seem to think so. Stories on Twitter of these ‘blind coincidence’ adverts are abundant.

And not just restricted to voice snooping either — some are convinced their phones are spying on them via their cameras, too.

Last month, a creepy story swept social media about an American woman called Jen Lewis who was shown an advert on Facebook for a bra — featuring a model wearing exactly the same clothes she was wearing at that moment. The same pink shirt and skinny jeans.

Lewis, a writer and designer, recreated the model’s pose and posted the near-identical pictures side-by-side on Twitter where they went viral with more than 20,000 likes.

While Facebook insisted the ad was a coincidence, hundreds of horrified social media users commented — many suggesting the ad could have been targeted with image recognition software, using Jen’s laptop or smartphone camera as a spy window into her life. ‘Seriously, cover up your camera lens,’ warned one, as stories were swapped of people receiving adverts for wedding planners, minutes after popping the question, and cat food after merely discussing whether to buy a cat.


One Facebook user is so convinced his conversations are being monitored that he switched off the microphone on his smartphone — and, sure enough, there haven’t been any more ‘strange coincidences’ since.

Tom Crewe, 28, a marketing manager from Bournemouth, was immediately suspicious in March when he noticed an advert on Facebook for beard transplant surgery. Only hours earlier he’d joked with a colleague about them both getting one, as they remained smooth-faced, despite their age.

‘I had my phone’s Facebook app switched on at the time. Within a few hours, an ad came through for hair and beard transplants,’ he says.

‘I just thought: “Why have I been targeted?” I’d never Googled “hair or beard transplants” or sent an email to anyone about it or talked about it on Facebook.’

The fact that the ad for beard transplants was so unusual and specific made him suspect his phone had been eavesdropping.

He became convinced when later that month he received an advert to his phone — again weirdly and quite specifically — for Peperami sausages.


‘Again, it was a casual conversation in the office. I’d just eaten a Peperami, and it was a few hours before lunch, and a colleague joked how he didn’t think this was a particularly good thing to have for breakfast.

‘Again, I’d never Googled the product or mentioned it on Facebook or anywhere online. It’s just something I buy during my twice-a-week shop at Tesco.

‘Then I get an advert for it. This happened within two weeks of the beard incident.’

It so disturbed him that he researched it and saw others talking about it.

‘I saw articles and got information and turned off the Facebook app’s access to my phone’s microphone. I’ve not noticed it happening since then.’

Facebook categorically denies it uses smartphone microphones to gather information for the purposes of targeted advertising.

A spokesperson said being targeted with an advert for a beard transplant was just an example of heightened perception, or the phenomenon whereby people notice things they’ve talked about.

With 1.7 billion users being served tens of adverts a day, there’s always going to be something uncanny. Google and WhatsApp also categorically deny bugging private conversations, describing the anecdotal evidence as pure coincidence.

One thing technology experts agree on, though, is that the ability to create technology that can randomly sweep millions of conversations for repeated phrases or identifiable names definitely exists.

Companies have developed algorithms that can look for patterns and determine potentially useful things about your behaviour and interests. Whether they are being used by the companies with access to your phone, however, remains unproven.
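
The matching step itself is trivial once speech has been transcribed; the expensive part is the speech-to-text in front of it. A toy keyword spotter, with an invented keyword-to-advert mapping:

    # Toy keyword spotter over transcribed speech; the mapping is invented.
    AD_KEYWORDS = {
        "instamatic": "instant cameras",
        "beard": "beard transplant clinics",
        "peperami": "snack sausages",
    }

    def match_ads(transcript):
        """Return advert topics whose keywords appear in the transcript."""
        words = transcript.lower().split()
        return [topic for kw, topic in AD_KEYWORDS.items() if kw in words]

    print(match_ads("they were joking about getting a beard transplant"))
    # ['beard transplant clinics']

Whether any company actually runs such a pipeline against ambient audio is, as this article says, unproven; the sketch only shows that the pattern-matching half is not the hard part.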

Not convinced? Consider the Siri or Google Assistant functions, designed to understand your voice and pick out key phrases, and with a huge vocabulary in their grasp.

It’s not too big a stretch to think of this technology developed to sweep conversations as a marketing tool. ‘Smartphones are small tracking devices,’ says Michelle De Mooy, acting director of the Privacy and Data Project at the US-based Center for Democracy and Technology.

‘We may not think of them like that because they’re very personal devices — they travel with us, they sleep next to us. But they are, in fact, collectors of a vast amount of information including audio information. When you are using a free service, you are basically paying for it with information.’

As yet, however, there’s no concrete evidence that we are being listened to. Any complaints about spying would be dealt with by the Information Commissioner’s Office (ICO), which enforces the legislation governing how personal information is stored and shared across the UK.

They say no one has complained officially. Tales of cybersnooping haven’t gone beyond ‘shaggy dog stories’ on Twitter and Facebook.

When approached by the Mail, an ICO spokesman said: ‘We haven’t received any complaints on the issue of Facebook listening to people’s conversations.

‘Businesses and organisations operating in the UK are required by law to process personal data fairly and lawfully, this means being clear and open with individuals about how information will be used.’

That law, however, is struggling to keep up with technology, according to Ewa Luger, a researcher and specialist in the ethical design of intelligent machines at the University of Edinburgh. ‘I think this is a problem ethically,’ she says. ‘If I had an expectation that this application was recording what I was saying, that’s one thing, but if I don’t, then it’s ethically questionable. I may be having private conversations and taking my phone into the bathroom.

‘This is a new area of research — voice assistance technology. We have only been looking at this for 12 months. It takes a while for research to catch up.’

In the meantime, Lois and I have turned off our microphones. It’s easy to do via your phone’s Settings.

To be honest, I don’t think there are people with earphones in a bunker, desperate to know what car I’m thinking of buying, but I’d rather, in this increasingly public world, maintain a bit of privacy. You really don’t know who’s listening.

■ Additional reporting Stephanie Condron


Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69460
    • View Profile
WSJ: The Chinese model
« Reply #1082 on: December 23, 2017, 02:41:48 PM »

Twelve Days in Xinjiang: How China’s Surveillance State Overwhelms Daily Life
The government has turned the remote region into a laboratory for its high-tech social controls
by Josh Chin and Giulia Marchi for The Wall Street Journal
Updated Dec. 19, 2017 10:58 p.m. ET

Pedestrians pass a “convenience police station” in the Erdaoqiao neighborhood of Urumqi.


URUMQI, China—This city on China’s Central Asia frontier may be one of the most closely surveilled places on earth.

Security checkpoints with identification scanners guard the train station and roads in and out of town. Facial scanners track comings and goings at hotels, shopping malls and banks. Police use hand-held devices to search smartphones for encrypted chat apps, politically charged videos and other suspect content. To fill up with gas, drivers must first swipe their ID cards and stare into a camera.

China’s efforts to snuff out a violent separatist movement by some members of the predominantly Muslim Uighur ethnic group have turned the autonomous region of Xinjiang, of which Urumqi is the capital, into a laboratory for high-tech social controls that civil-liberties activists say the government wants to roll out across the country.

It is nearly impossible to move about the region without feeling the unrelenting gaze of the government. Citizens and visitors alike must run a daily gantlet of police checkpoints, surveillance cameras and machines scanning their ID cards, faces, eyeballs and sometimes entire bodies.


Life Inside China’s Total Surveillance State (video): China has turned the northwestern region of Xinjiang into a vast experiment in domestic surveillance. WSJ investigated what life is like in a place where one's every move can be monitored with cutting-edge technology.
When fruit vendor Parhat Imin swiped his card at a telecommunications office this summer to pay an overdue phone bill, his photo popped up with an “X.” Since then, he says, every scan of his ID card sets off an alarm. He isn’t sure what it signifies, but figures he is on some kind of government watch list because he is a Uighur and has had intermittent run-ins with the police.

He says he is reluctant to travel for fear of being detained. “They blacklisted me,” he says. “I can’t go anywhere.”

All across China, authorities are rolling out new technology to keep watch over people and shape their behavior. Controls on expression have tightened under President Xi Jinping, and the state’s vast security web now includes high-tech equipment to monitor online activity and even snoop in smartphone messaging apps.

China’s government has been on high alert since a surge in deadly terrorist attacks around the country in 2014 that authorities blamed on Xinjiang-based militants inspired by extremist Islamic messages from abroad. Now officials are putting the world’s most state-of-the-art tools in the hands of a ramped-up security force to create a system of social control in Xinjiang—one that falls heaviest on Uighurs.

At a security exposition in October, an executive of Guangzhou-based CloudWalk Technology Co., which has sold facial-recognition algorithms to police and identity-verification systems to gas stations in Xinjiang, called the region the world’s most heavily guarded place. According to the executive, Jiang Jun, for every 100,000 people the police in Xinjiang want to monitor, they use the same amount of surveillance equipment that police in other parts of China would use to monitor millions.


Authorities in Xinjiang declined to respond to questions about surveillance. Top party officials from Xinjiang said at a Communist Party gathering in Beijing in October that “social stability and long-term security” were the local government’s bottom-line goals.

Chinese and foreign civil-liberty activists say the surveillance in this northwestern corner of China offers a preview of what is to come nationwide.

"A woman undergoes a facial-recognition check at a luxury mall in Urumqi."
.
“They constantly take lessons from the high-pressure rule they apply in Xinjiang and implement them in the east,” says Zhu Shengwu, a Chinese human-rights lawyer who has worked on surveillance cases. “What happens in Xinjiang has bearing on the fate of all Chinese people.”

During an October road trip into Xinjiang along a modern highway, two Wall Street Journal reporters encountered a succession of checkpoints that turned the ride into a strange and tense journey.

At Xingxing Gorge, a windswept pass used centuries ago by merchants plying the Silk Road, police inspected incoming traffic and verified travelers’ identities. The Journal reporters were stopped, ordered out of their car and asked to explain the purpose of their visit. Drivers, mostly those who weren’t Han Chinese, were guided through electronic gateways that scanned their ID cards and faces.



 



Farther along, at the entrance to Hami, a city of a half-million, police had the Journal reporters wait in front of a bank of TV screens showing feeds from nearby surveillance cameras while recording their passport numbers.



Surveillance cameras loomed every few hundred feet along the road into town, blanketed street corners and kept watch on patrons of a small noodle shop near the main mosque. The proprietress, a member of the Muslim Hui minority, said the government ordered all restaurants in the area to install the devices earlier this year “to prevent terrorist attacks.”

Days later, as the Journal reporters were driving on a dirt road in Shanshan county after being ordered by officials to leave a nearby town, a police cruiser materialized seemingly from nowhere. It raced past, then skidded to a diagonal stop, kicking up a cloud of dust and blocking the reporters’ car. An SUV pulled up behind. A half-dozen police ordered the reporters out of the car and demanded their passports.

An officer explained that surveillance cameras had read the out-of-town license plates and sent out an alert. “We check every car that’s not from Xinjiang,” he said. The police then escorted the reporters to the highway.



"A security camera has been erected next to the minarets of a mosque in the Uighur village of Tuyugou."
 
.
At checkpoints further west, iris and body scanners are added to the security arsenal.

Darren Byler, an anthropology researcher at the University of Washington who spent two years in Xinjiang studying migration, says the closest contemporary parallel can be found in the West Bank and Gaza Strip, where the Israeli government has created a system of checkpoints and biometric surveillance to keep tabs on Palestinians.

In Erdaoqiao, the neighborhood where the fruit vendor Mr. Imin lives, small booths known as “convenience police stations,” marked by flashing lights atop a pole, appear every couple of hundred yards. The police stationed there offer water, cellphone charging and other services, while also taking in feeds from nearby surveillance cameras.


Always Watching

In Xinjiang, China's government has put the world's most state-of-the-art surveillance tools in the hands of security forces.

  • License-plate camera: used to track vehicles breaking the law, on a watch list or from outside Xinjiang.
  • Iris scanner: ID technology used at some checkpoints.
  • Location tracker: mandatory in all commercial vehicles.
  • Voice-pattern analyzer: can identify people by speech patterns.
  • Smartphone scanner: searches for encrypted chat apps and other suspect content.
  • ID scanner: used to check identification cards.
  • QR code: includes ID number and other personal information.
  • Knife: buyer identification information is marked by laser on the blade.

Sources: Government procurement orders; iFlyTek Co.; Meiya Pico Information Co.; Darren Byler, University of Washington; Human Rights Watch; police interviews; interviews with Uighurs in exile.


Young Uighur men are routinely pulled into the stations for phone checks, leading some to keep two devices—one for home use and another, with no sensitive content or apps, for going out, according to Uighur exiles.

Erdaoqiao, the heart of Uighur culture and commerce in Urumqi, is where ethnic riots started in 2009 that resulted in numerous deaths. The front entrance to Erdaoqiao Mosque is now closed, as are most entries to the International Grand Bazaar. Visitors funnel through a heavily guarded main gate. The faces and ID cards of Xinjiang residents are scanned. An array of cameras keeps watch.

After the riots, authorities showed up to shut down the shop Mr. Imin was running at the time, which sold clothing and religious items. When he protested, he says, they clubbed him on the back of the head, which has left him walking with a limp. They jailed him for six months for obstructing official business, he says. Other jail stints followed, including eight months for buying hashish.

The police in Urumqi didn’t respond to requests for comment.

Mr. Imin now sells fruit and freshly squeezed pomegranate juice from a cart. He worries that his flagged ID card will bring the police again. Recently remarried, he hasn’t dared visit his new wife’s family in southern Xinjiang.





At a checkpoint in Kashgar, passengers get their ID cards and faces scanned while police officers check cars and drivers.


Chinese rulers have struggled for two millennia to control Xinjiang, whose 23 million people are scattered over an expanse twice the size of Texas. Beijing sees it as a vital piece of President Xi’s trillion-dollar “Belt and Road” initiative to build infrastructure along the old Silk Road trade routes to Europe.


Last year, Mr. Xi installed a new Xinjiang party chief, Chen Quanguo, who previously handled ethnic strife in Tibet, another hot spot. Mr. Chen pioneered the convenience police stations in that region, partly in response to a string of self-immolations by monks protesting Chinese rule.


Surveillance Economy: the value of security-related investment projects in Xinjiang is soaring. (Chart: annual totals in billions of yuan for 2015, 2016 and January-March 2017, rising toward 8 billion yuan. Source: Industrial Securities Co.)


Under Mr. Chen, the police presence in Xinjiang has skyrocketed, based on data showing exponential increases in police-recruitment advertising. Local police departments last year began ordering cameras capable of creating three-dimensional face images as well as DNA sequencers and voice-pattern analysis systems, according to government procurement documents uncovered by Human Rights Watch and reviewed by the Journal.

During the first quarter of 2017, the government announced the equivalent of more than $1 billion in security-related investment projects in Xinjiang, up from $27 million in all of 2015, according to research in April by Chinese brokerage firm Industrial Securities.



 
Police Officers Wanted: advertisements for policing positions in Xinjiang have risen sharply.


Government procurement orders show millions spent on “unified combat platforms”—computer systems to analyze surveillance data from police and other government agencies.

Tahir Hamut, a Uighur poet and filmmaker, says Uighurs who had passports were called in to local police stations in May. He worried he would draw extra scrutiny for having been accused of carrying sensitive documents, including newspaper articles about Uighur separatist attacks, while trying to travel to Turkey to study in the mid-1990s. The aborted trip landed him in a labor camp for three years, he says.

He and his wife lined up at a police station with other Uighurs to have their fingerprints and blood samples taken. He says he was asked to read a newspaper for two minutes while police recorded his voice, and to turn his head slowly in front of a camera.

Later, his family’s passports were confiscated. After a friend was detained by police, he says, he assumed he also would be taken away. He says he paid officials a bribe of more than $9,000 to get the passports back, making up a story that his daughter had epilepsy requiring treatment in the U.S. Xinjiang’s Public Security Bureau, which is in charge of the region’s police forces, didn’t respond to a request for comment about the bribery.

“The day we left, I was filled with anxiety,” he says. “I worried what would happen if we were stopped going through security at the Urumqi airport, or going through border control in Beijing.”

He and his family made it to Virginia, where they have applied for political asylum.



Annotations in red added by The Wall Street Journal. Notes: * Xinjiang considers it suspicious for Uighurs to visit a list of 26 mostly Muslim countries, including Turkey, Egypt, Afghanistan, South Sudan, Malaysia, Indonesia and Thailand. ** “Persons of interest” refers to people on the police watch list; “special population” is a common euphemism for Uighurs seen as separatists risks. Sources: Tahir Hamut (provided the form), Uighur Istiqlal TV and Adrian Zenz (confirmation of 26-country list).
Chinese authorities use forms to collect personal information from Uighurs. One form reviewed by the Journal asks about respondents’ prayer habits and if they have contacts abroad. There are sections for officials to rate “persons of interest” on a six-point scale and check boxes on whether they are “safe,” “average” or “unsafe.”

China Communications Services Co. Ltd., a subsidiary of state telecom giant China Telecom, has signed contracts this year worth more than $38 million to provide mosque surveillance and install surveillance-data platforms in Xinjiang, according to government procurement documents. The company declined to discuss the contracts, saying they constituted sensitive business information.

Xiamen Meiya Pico Information Co. Ltd. worked with police in Urumqi to adapt a hand-held device it sells for investigating economic crimes so it can scan smartphones for terrorism-related content.

A description of the device that recently was removed from the company’s website said it can read the files on 90% of smartphones and check findings against a police antiterror database. “Mostly, you’re looking for audio and video,” said Zhang Xuefeng, Meiya Pico’s chief marketing officer, in an interview.
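
The Journal does not say how the device works internally, but the standard way to "check findings against a database" is to hash each file and look the hash up in a blacklist. A generic sketch of that technique, not Meiya Pico's actual implementation; the path and the hash entry are placeholders:

    # Generic hash-blacklist scan; paths and hash values are placeholders.
    import hashlib
    from pathlib import Path

    BLACKLIST = {
        # Example entry (this happens to be the SHA-256 of an empty file).
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def flag_files(root):
        """Return files under root whose SHA-256 is on the blacklist."""
        return [p for p in Path(root).rglob("*")
                if p.is_file() and sha256_of(p) in BLACKLIST]

    print(flag_files("/tmp/phone_dump"))  # hypothetical mount point for the phone's files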



Near the Xinjiang University campus in Urumqi, police sat at a wooden table recently, ordering some people walking by to hand over their phones.

“You just plug it in and it shows you what’s on the phone,” said one officer, brandishing a device similar to the one on Meiya Pico’s website. He declined to say what content they were checking for.

One recent afternoon in Korla, one of Xinjiang’s largest cities, only a trickle of people passed through the security checkpoint at the local bazaar, where vendors stared at darkened hallways empty of shoppers.

Li Qiang, the Han Chinese owner of a wine shop, said the security checks, while necessary for safety, were getting in the way of commerce. “As soon as you go out, they check your ID,” he said.

"Shopkeepers perform an antiterrorism drill under police supervision outside the bazaar in Kashgar."   
.
Authorities have built a network of detention facilities, officially referred to as education centers, across Xinjiang. In April, the official Xinjiang Daily newspaper said more than 2,000 people had been sent to a “study and training center” in the southern city of Hotan.

One new compound sits a half-hour drive south of Kashgar, a Uighur-dominated city near the border with Kyrgyzstan. It is surrounded by imposing walls topped with razor wire, with watchtowers at two corners. A slogan painted on the wall reads: “All ethnic groups should be like the pods of a pomegranate, tightly wrapped together.”

Villagers describe it as a detention center. A man standing near the entrance one recent night said it was a school and advised reporters to leave.

Mr. Hamut, the poet, says a relative in Kashgar was taken to a detention center after she participated in an Islamic ceremony, and another went missing soon after the family tried to call him from the U.S.

The local government in Kashgar didn’t respond to a request for comment.




Police officers at a gate in the Old City of Kashgar.
Surveillance in and around Kashgar, where Han Chinese make up less than 7% of the population, is even tighter than in Urumqi. Drivers entering the city are screened intensively. A machine scans each driver’s face. Police officers inspect the engine and the trunk. Passengers must get out and run their bags through X-ray machines.

In Aksu, a dusty city a five-hour drive east of Kashgar, knife salesman Jiang Qiankun says his shop had to pay thousands of dollars for a machine that turns a customer’s ID card number, photo, ethnicity and address into a QR code that it lasers into the blade of any knife it sells. “If someone has a knife, it has to have their ID card information,” he says.
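
Encoding such a record is mundane. The sketch below uses the common qrcode Python package (pip install qrcode[pil]); the field layout and values are invented, since the article does not specify the format:

    # Hypothetical buyer record; the field layout is invented for illustration.
    import json
    import qrcode

    buyer = {
        "id_number": "650102XXXXXXXXXXXX",  # placeholder national ID number
        "name": "example buyer",
        "ethnicity": "example",
        "address": "example address, Aksu",
    }

    img = qrcode.make(json.dumps(buyer, ensure_ascii=False))
    img.save("blade_tag.png")  # the shop's machine lasers this onto the blade

A few dozen bytes of personal data fit comfortably in even a small QR code, which is part of why the scheme is cheap to impose on every knife sold.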

On the last day the Journal reporters were in Xinjiang, an unmarked car trailed them on a 5 a.m. drive to the Urumqi airport. During their China Southern Airlines flight to Beijing, a flight attendant appeared to train a police-style body camera attached to his belt on the reporters. Later, as passengers were disembarking, the attendant denied filming them, saying it was common for airline crew to wear the cameras as a security measure.

China Southern says the crew member was an air marshal, charged with safety on board.

—Fan Wenxin, Jeremy Page, Kersten Zhang and Eva Dou contributed to this article.

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
Netflix is tracking you...and me
« Reply #1083 on: January 04, 2018, 04:46:36 PM »
So I, like many people, have a Netflix account. I had the movie "Bright" saved to watch when I could. I finally got the chance to watch it at a relative's home, through their Netflix account. Their home is hundreds of miles away from mine. I don't have the Netflix app on any cell phone or tablet, and I never logged into my Netflix account at that location.

So, when I returned home, "Bright" is now on my Netflix account under the category of "watch it again".

Trying to figure out how that happened.

I guess we are all living in a "Black Mirror" episode now.

ccp

  • Power User
  • ***
  • Posts: 18543
    • View Profile
Re: Privacy, Big Brother (State and Corporate) and the 4th & 9th Amendments
« Reply #1084 on: January 05, 2018, 05:18:02 AM »
GM,

did you pay for the film with a credit card?
For certain our credit card transactions are being sold. As I notice, I might order something, then see a pop-up on my computer within a day for the same thing.

I have learned, from having to question the motives of everything and look over my shoulder at everyone, that coincidences do happen, but also that we are being screwed with the data mining.

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
Re: Privacy, Big Brother (State and Corporate) and the 4th & 9th Amendments
« Reply #1085 on: January 05, 2018, 10:16:36 AM »
GM,

did you pay for the film with a credit card?
For certain our credit card transactions are being sold. As I notice, I might order something, then see a pop-up on my computer within a day for the same thing.

I have learned, from having to question the motives of everything and look over my shoulder at everyone, that coincidences do happen, but also that we are being screwed with the data mining.

No credit card transaction involved. I just selected the movie and watched it.  I wonder if my cellphone was tracked using the wifi. Unsure at this time.

ccp

  • Power User
  • ***
  • Posts: 18543
    • View Profile
Re: Privacy, Big Brother (State and Corporate) and the 4th & 9th Amendments
« Reply #1086 on: January 06, 2018, 04:22:38 AM »
How about through Facebook?  That would connect you to your relative and maybe "likes and dislikes".

I presume Zucker shit would deny it - while doing it.

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
Re: Privacy, Big Brother (State and Corporate) and the 4th & 9th Amendments
« Reply #1087 on: January 06, 2018, 08:15:28 AM »
How about through Facebook?  That would connect you to your relative and maybe "likes and dislikes".

I presume Zucker shit would deny it - while doing it.

I don't have Facebook as an app on any device.

ccp

  • Power User
  • ***
  • Posts: 18543
    • View Profile
Re: Privacy, Big Brother (State and Corporate) and the 4th & 9th Amendments
« Reply #1088 on: January 07, 2018, 01:30:56 PM »
Somehow Netflix knew you were at your relative's house and saw the movie there.

How could it have known that?
Like you said, you are tracked via mobile device, or facial recognition.

There may be embedded programs to detect these things.

I remember how MSFT had ways to detect or control the use of their software in devices to stop pirating.   :|

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
Re: Privacy, Big Brother (State and Corporate) and the 4th & 9th Amendments
« Reply #1089 on: January 07, 2018, 01:43:24 PM »
I'm pretty sure it's by tracking the presence of devices connected to WiFi at the time the movie is played.
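
To spell out the guess: every device behind a home router shares one public IP address, so any service that logs (account, source IP) pairs can notice two accounts appearing behind the same IP at the same time. A toy sketch of that correlation; the log data is invented, and there is no claim Netflix actually does this:

    # Toy correlation of accounts sharing a public IP (invented data).
    from collections import defaultdict

    access_log = [
        ("relatives_account", "203.0.113.7", "2018-01-01"),   # movie streamed here
        ("gms_account",       "203.0.113.7", "2018-01-01"),   # second account seen on the same WiFi
        ("gms_account",       "198.51.100.9", "2018-01-04"),  # back home
    ]

    buckets = defaultdict(set)
    for account, ip, day in access_log:
        buckets[(ip, day)].add(account)

    # Any (ip, day) bucket with more than one account links those accounts.
    links = {frozenset(accts) for accts in buckets.values() if len(accts) > 1}
    print(links)  # {frozenset({'relatives_account', 'gms_account'})}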

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
location tracking through wi-fi
« Reply #1090 on: January 07, 2018, 02:45:50 PM »
« Last Edit: January 12, 2018, 06:20:13 PM by Crafty_Dog »

G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
The Amazon wiretap device
« Reply #1091 on: January 16, 2018, 12:00:44 PM »
https://www.wired.com/story/amazon-echo-wiretap-hack/

The wiretap device can be used as a wiretap.

Be sure to put one in your home.


G M

  • Power User
  • ***
  • Posts: 26643
    • View Profile
Tucker Carlson on Goolag and your phone
« Reply #1093 on: February 08, 2018, 12:51:49 PM »
http://dailycaller.com/2018/02/07/tucker-google-spy-on-phone/

In Soviet Union, KGB listen to phone. In Soviet Amerika, spy IS phone.
« Last Edit: February 08, 2018, 03:37:33 PM by Crafty_Dog »


Crafty_Dog

  • Administrator
  • Power User
  • *****
  • Posts: 69460
    • View Profile
WSJ: The last capitalist sells the rope to the commie hangman
« Reply #1095 on: February 24, 2018, 11:16:51 AM »
Apple to Start Putting Sensitive Encryption Keys in China
Codes for Chinese users of iCloud will be kept in a secure location, company says
By Robert McMillan and Tripp Mickle
Feb. 24, 2018 1:39 p.m. ET

When Apple Inc. next week begins shifting the iCloud accounts of its China-based customers to a local partner’s servers, it also will take an unprecedented step for the company that alarms some privacy specialists: storing the encryption keys for those accounts in China.

The keys are complex strings of random characters that can unlock the photos, notes and messages that users store in iCloud. Until now, Apple has stored the codes only in the U.S. for all global users, the company said, in keeping with its emphasis on customer privacy and security.
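
The stakes are easy to demonstrate: with symmetric encryption, whoever holds the key has full access to the data, no matter where the ciphertext is stored. A generic illustration with the Python cryptography package (pip install cryptography); this uses Fernet, not Apple's actual iCloud key hierarchy:

    # Generic demo that key custody equals data access; not Apple's design.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # the secret that matters
    token = Fernet(key).encrypt(b"private photo bytes")

    # The ciphertext can sit on any server in any country and stay unreadable...
    # ...but anyone who can seize the key recovers everything.
    print(Fernet(key).decrypt(token))  # b'private photo bytes'

Which is why the privacy debate below turns on where the keys live, not where the servers sit.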

While Apple says it will ensure that the keys are protected in China, some privacy experts and former Apple security employees worry that moving the keys to China makes them more vulnerable to seizure by a government with a record of censorship and political suppression.

“Once the keys are there, they can’t necessarily pull out and take those keys because the server could be seized by the Chinese government,” said Matthew Green, a professor of cryptography at Johns Hopkins University. Ultimately, he says, “It means that Apple can’t say no.”

Apple says it is moving the keys to China as part of its effort to comply with a Chinese law on data storage enacted last year. Apple said it will store the keys in a secure location, retain control over them and hasn’t created any backdoors to access customer data. A spokesman in a statement added that Apple advocated against the new laws, but chose to comply because it “felt that discontinuing the [iCloud] service would result in a bad user experience and less data security and privacy for our Chinese customers.”

Apple’s move reflects the tough choice that has faced all foreign companies that want to continue offering cloud services in China since the new law. Other companies also have complied, including Microsoft Corp. for its Azure and Office 365 services, which are operated by 21Vianet Group, Inc., and Amazon.com Inc., which has cloud operating agreements with Beijing Sinnet Technology Co. and Ningxia Western Cloud Data Technology Co.

Amazon Web Services and Microsoft, which serve businesses in China, declined to say where encryption keys will be stored for businesses using their security tools there.

Privacy specialists are especially interested in Apple because of its enormous customer base and its history of championing customer privacy. Apple in 2016 fought a U.S. government demand to help unlock the iPhone of the gunman in the 2015 San Bernardino terrorist attack. “For many years, we have used encryption to protect our customers’ personal data because we believe it’s the only way to keep their information safe,” Apple Chief Executive Tim Cook said then in a letter to customers explaining its decision.

Apple said it will provide data only in response to requests initiated by Chinese authorities that the company deems lawful and said it won’t respond to bulk data requests. In the first half of 2017, Apple received 1,273 requests for data from Chinese authorities covering more than 10,000 devices, according to its transparency report. Apple said it provided data for all but 14% of those requests.

Greater China is Apple’s second-most-important market after the U.S., with $44.76 billion in revenue in its last fiscal year, a fifth of the total. Some previous steps to comply with Chinese laws have been controversial, including removing apps from its China store for virtual private networks that can circumvent government blocks on websites. Apple has said it follows the law wherever it operates and hopes that the restrictions around communication in China are eventually loosened.

Jingzhou Tao, a Beijing-based attorney at Dechert LLP, said Chinese iPhone users are disappointed by Apple’s changes to iCloud data storage because privacy protection in China is weak. However, he said users there “still consider that iPhone is better than some other pure Chinese-made phones for privacy policy and protection.”

Apple’s cloud partner in China is Guizhou on the Cloud Big Data Industry Co., or Guizhou-Cloud, which is overseen by the government of Guizhou province. Apple plans to shift operational responsibility for all iCloud data for Chinese customers in China to Guizhou-Cloud by Feb. 28. Customer data will migrate to servers based in China over the course of the next two years. The company declined to say when the encryption keys would move to China.

Apple began notifying iCloud users in China last month that Guizhou-Cloud would be responsible for storing their data.

Updated terms and conditions for China users say that Apple and Guizhou-Cloud “will have access to all data” and “the right to share, exchange and disclose all user data, including content, to and between each other under applicable law.”

“Given that Apple’s China operations will be managed by a Chinese company, it seems implausible that the government will not have access to Apple data through the local company,” said Ronald Deibert, a political-science professor at the University of Toronto’s Munk School of Global Affairs who has researched Chinese government hacking operations.

Guizhou-Cloud and the Chinese cybersecurity administration didn’t immediately respond to requests for comment.

Reporters Without Borders has urged journalists in China to change their geographic region or close their accounts before Feb. 28, saying Chinese authorities could gain a backdoor to user data even if Apple says it won’t provide one.

Apple said it has advised Chinese customers that they can opt out of iCloud service to avoid having their data stored in China. Data for China-based users whose settings are configured for another country, or for Hong Kong and Macau, won’t go on Chinese servers, and Apple said it won’t transfer anyone’s data until they accept the new mainland-China terms of service.

Mr. Green and others say Apple should provide more technical details on its steps to secure its encryption keys and internet usage data that might be available on Guizhou-Cloud.

This usage information, called metadata, could tell Chinese authorities the identity of users who download a book or other files of interest to the government, said Joe Gross, a consultant on building data centers.

“You can tell whether people are uploading or downloading things,” he said. “You can tell where they are. You may be able to tell whether they’re sharing things.”

Apple said there would need to be a legal request to obtain metadata.
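
A toy illustration of how much that metadata gives away even when the file contents themselves stay encrypted. The log schema below is entirely made up for the example; it is not Apple’s or Guizhou-Cloud’s actual format.

# Hypothetical access-log records: the content may be encrypted, but the
# metadata alone answers "who downloaded the file of interest?"
access_log = [
    {"user": "user_8841", "action": "download", "object": "banned_book.epub",
     "time": "2018-02-24T13:02:11Z"},
    {"user": "user_8841", "action": "upload", "object": "notes.txt",
     "time": "2018-02-24T13:05:40Z"},
]

# A query no stronger than this is enough to identify the reader.
flagged = [r["user"] for r in access_log
           if r["action"] == "download" and r["object"] == "banned_book.epub"]
print(flagged)  # -> ['user_8841']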

—Yoko Kubota, Jay Greene and Xiao Xiao contributed to this article


DougMacG

  • Power User
  • ***
  • Posts: 18294
    • View Profile
Re: Privacy, A simple law—‘Users own their private data’
« Reply #1097 on: April 09, 2018, 07:59:49 AM »
A simple law—‘Users own their private data’
 - WSJ opinion today

It reminds me that I bragged during the 1,700-page NAFTA debate that I could write a free trade agreement on one side of a cocktail napkin.

What happened to clear thinking like, "unalienable Rights"?
« Last Edit: April 09, 2018, 08:09:52 AM by DougMacG »

ccp

  • Power User
  • ***
  • Posts: 18543
    • View Profile
Re: Privacy, Big Brother (State and Corporate) and the 4th & 9th Amendments
« Reply #1098 on: April 09, 2018, 08:05:58 AM »
If we in the medical field sold patient data for a profit you know where we would be.

I've never been in a jumpsuit.

Maybe Zuck could promote the first jailhouse division of FB.

DougMacG

  • Power User
  • ***
  • Posts: 18294
    • View Profile
Re: Privacy, Big Brother (State and Corporate) and the 4th & 9th Amendments
« Reply #1099 on: April 09, 2018, 08:24:24 AM »
Quote from: ccp on April 09, 2018, 08:05:58 AM
If we in the medical field sold patient data for a profit you know where we would be.

I've never been in a jumpsuit.

Maybe Zuck could promote the first jailhouse division of FB.

The medical industry collects our private information, including Social Security numbers, and hackers do the selling.
----
Anyone have a recommendation for a site that respects privacy, to replace Facebook?