
charlesarthur : privacy   266

Period tracker apps: Maya and MIA Fem are sharing deeply personal data with Facebook • Buzzfeed News
Megha Rajagopalan:
<p>UK-based advocacy group Privacy International, sharing its <a href="">findings</a> exclusively with BuzzFeed News, discovered period-tracking apps including MIA Fem and Maya sent women’s use of contraception, the timings of their monthly periods, symptoms like swelling and cramps, and more, directly to Facebook.

Women use such apps for a range of purposes, from tracking their period cycles to maximizing their chances of conceiving a child. On the Google Play store, Maya, owned by India-based Plackal Tech, has more than 5 million downloads. Period Tracker MIA Fem: Ovulation Calculator, owned by Cyprus-based Mobapp Development Limited, says it has more than 2 million users around the world. They are also available on the App Store.

The data sharing with Facebook happens via Facebook’s Software Development Kit (SDK), which helps app developers incorporate particular features and collect user data so Facebook can show them targeted ads, among other functions. When a user puts personal information into an app, that information may also be sent by the SDK to Facebook.

Asked about the report, Facebook told BuzzFeed News it had gotten in touch with the apps Privacy International identified to discuss possible violations of its terms of service, including sending prohibited types of sensitive information.

Maya informs Facebook whenever you open the app and starts sharing some data with Facebook even before the user agrees to the app’s privacy policy, Privacy International found.</p>
app  privacy  menstruation  facebook 
3 days ago by charlesarthur
Face recognition, bad people and bad data • Benedict Evans
Evans on fine form again:
<p>what exactly is in the training data - in your examples of X and Not-X? Are you sure? What ELSE is in those example sets?

My favourite example of what can go wrong here comes from a project for recognising cancer in photos of skin. The obvious problem is that you might not have an appropriate distribution of samples of skin in different tones. But another problem that can arise is that dermatologists tend to put rulers in the photo of cancer, for scale - so if all the examples of ‘cancer’ have a ruler and all the examples of ‘not-cancer’ do not, that might be a lot more statistically prominent than those small blemishes. You inadvertently built a ruler-recogniser instead of a cancer-recogniser.

The structural thing to understand here is that the system has no understanding of what it’s looking at - it has no concept of skin or cancer or colour or gender or people or even images. It doesn’t know what these things are any more than a washing machine knows what clothes are. It’s just doing a statistical comparison of data sets. So, again - what is your data set? How is it selected? What might be in it that you don’t notice - even if you’re looking? How might different human groups be represented in misleading ways? And what might be in your data that has nothing to do with people and no predictive value, yet affects the result? Are all your ‘healthy’ photos taken under incandescent light and all your ‘unhealthy’ pictures taken under LED light? You might not be able to tell, but the computer will be using that as a signal.</p>

A very astringent look at a lot of the hoopla about machine learning.
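Evans's "ruler-recogniser" failure mode is easy to reproduce. Here is a minimal, self-contained sketch (all data and names invented for illustration, not from the skin-cancer project he describes): a toy training set where a spurious marker correlates perfectly with the label, so a "classifier" that only looks at the marker scores perfectly in training and collapses to chance once the correlation breaks.

```python
import random

random.seed(0)

# Toy dataset: each "photo" is a dict of features plus a label.
# In the flawed training set, every cancer photo also contains a ruler.
def make_example(cancer):
    lesion = random.gauss(5.0 if cancer else 4.5, 1.0)  # the real, weak signal
    return {"lesion_size": lesion, "has_ruler": cancer, "cancer": cancer}

train = [make_example(random.random() < 0.5) for _ in range(1000)]

# A "classifier" that latches onto the most statistically prominent feature:
# the ruler. It never looks at the lesion at all.
def predict(example):
    return example["has_ruler"]

train_acc = sum(predict(e) == e["cancer"] for e in train) / len(train)
print(f"train accuracy: {train_acc:.2f}")  # 1.00 -- looks perfect

# In the clinic, rulers appear independently of cancer: the model collapses.
clinic = [make_example(random.random() < 0.5) for _ in range(1000)]
for e in clinic:
    e["has_ruler"] = random.random() < 0.5  # ruler no longer tracks the label

clinic_acc = sum(predict(e) == e["cancer"] for e in clinic) / len(clinic)
print(f"clinic accuracy: {clinic_acc:.2f}")
```

The point, as Evans says, is that nothing in the training procedure knows which feature is the "right" one; it only knows which one separates the example sets.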
machinelearning  privacy  database 
8 days ago by charlesarthur
Just Delete Me : A directory of direct links to delete your account from web services.
<p>Many companies use dark pattern techniques to make it difficult to find how to delete your account. aims to be a directory of urls to enable you to easily delete your account from web services.</p>

A service, apparently. Turns out that Facebook is only "medium" difficult to delete yourself from; some services (lookin' at you, Animal Crossing) are "impossible".
internet  privacy  security 
16 days ago by charlesarthur
Ten years on, Foursquare is now checking in to you • NY Mag
James D. Walsh on the "I'm the mayor of..." company's pivot to a business-to-business model, which it made in 2014:
<p>It projected iPhone sales in 2015 based on traffic to Apple stores and, in 2016, the huge drop in Chipotle’s sales figures (thanks to E. coli) two weeks before the burrito-maker announced its quarterly earnings. (It also used its data to show that foot traffic to Trump properties began declining after he announced his presidential campaign, and that traffic to Nike stores increased after the Colin Kaepernick ad.)

Co-founder and executive chairman Dennis Crowley says the human check-ins gave Foursquare engineers and data scientists the ability to verify and adjust location readings from other sources, like GPS, Wi-Fi, and Bluetooth. As it turns out, the goofy badges for Uncle Tony that made Foursquare easy to dismiss as a late-2000s fad were an incredibly powerful tool. “Everyone was laughing at us, ‘Oh, what are you, just people checking in at coffee shops?’” Crowley says. “Yeah, and they checked in billions of times. So we had this corpus of data, an army of people, who every day were like, ‘I’m at Think Coffee.’ ‘I’m at Think Coffee.’ ‘I’m at Think Coffee.’” Because of the “corpus” of data generated by people like Uncle Tony, Foursquare knows when the dimensions of storefronts change and can tell the difference between an office on the eighth floor and one on the ninth floor.

In addition to all of those active check-ins, at some point Foursquare began collecting passive data using a “check-in button you never had to press.” It doesn’t track people 24/7 (in addition to creeping people out, doing so would burn through phones’ batteries), but instead, if users opt-in to allow the company to “always” track their locations, the app will register when someone stops and determine whether that person is at a red light or inside an Urban Outfitters. The Foursquare database now includes 105 million places and 14 billion check-ins. The result, experts say, is a map that is often more reliable and detailed than the ones generated by Google and Facebook.</p>
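The "check-in button you never had to press" amounts to stop detection: registering a visit when location points stay clustered long enough. A rough sketch of how such a detector can work, assuming a trace of timestamped GPS points; the 50 m radius, the three-minute dwell threshold, and the sample trace are invented for illustration, not Foursquare's actual algorithm.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def detect_stops(points, radius_m=50, min_dwell_s=180):
    """points: list of (timestamp_s, lat, lon). Returns (start, end, anchor) stops."""
    stops, anchor, start, end = [], None, None, None
    for t, lat, lon in points:
        if anchor and haversine_m(anchor, (lat, lon)) <= radius_m:
            end = t  # still dwelling near the anchor point
        else:
            if anchor and end - start >= min_dwell_s:
                stops.append((start, end, anchor))
            anchor, start, end = (lat, lon), t, t
    if anchor and end - start >= min_dwell_s:
        stops.append((start, end, anchor))
    return stops

# Over three minutes parked at one spot, then movement: one stop detected.
trace = [(0, 51.5000, -0.1000), (100, 51.50002, -0.10001),
         (200, 51.50001, -0.10002), (400, 51.5100, -0.1200)]
print(detect_stops(trace))
```

Matching a detected stop against a places database of "105 million places" is then what turns "stopped here for ten minutes" into "inside an Urban Outfitters".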
advertising  privacy  foursquare  location 
16 days ago by charlesarthur
Deconstructing Google’s excuses on tracking protection • Freedom To Tinker
Jonathan Mayer and Arvind Narayanan:
<p>Blocking cookies is bad for privacy. That’s the new disingenuous argument from Google, trying to justify why Chrome is so far behind Safari and Firefox in offering privacy protections. As researchers who have spent over a decade studying web tracking and online advertising, we want to set the record straight.<br />Our high-level points are:

1) Cookie blocking does not undermine web privacy. Google’s claim to the contrary is privacy gaslighting.

2) There is little trustworthy evidence on the comparative value of tracking-based advertising.

3) Google has not devised an innovative way to balance privacy and advertising; it is latching onto prior approaches that it previously disclaimed as impractical.

4) Google is attempting a punt to the web standardization process, which will at best result in years of delay.

What follows is a reproduction of excerpts from yesterday’s announcement, annotated with our comments.</p>

This is quite a takedown of Google's claims that it would really love to do what Safari and Firefox are doing in terms of cookie blocking, but, uh, it's <em>complicated</em>.
apple  google  firefox  browsing  privacy 
17 days ago by charlesarthur
Facebook paid contractors to transcribe user audio files • Bloomberg
Sarah Frier:
<p>Facebook has been paying hundreds of outside contractors to transcribe clips of audio from users of its services, according to people with knowledge of the work.

The work has rattled the contract employees, who are not told where the audio was recorded or how it was obtained - only to transcribe it, said the people, who requested anonymity for fear of losing their jobs. They’re hearing Facebook users’ conversations, sometimes with vulgar content, but do not know why Facebook needs them transcribed, the people said.

Facebook confirmed that it had been transcribing users’ audio and said it will no longer do so, following scrutiny into other companies. “Much like Apple and Google, we paused human review of audio more than a week ago,” the company said Tuesday. The company said the users who were affected chose the option in Facebook’s Messenger app to have their voice chats transcribed. The contractors were checking whether Facebook’s artificial intelligence correctly interpreted the messages, which were anonymized.</p>

But of COURSE Facebook was doing this, same as everyone else. Clearly this was an open secret within the voice assistant industry.
facebook  ai  privacy  voice 
5 weeks ago by charlesarthur
Operator of email management service settles FTC allegations that it deceived consumers • Federal Trade Commission
<p>An email management company will be required to delete personal information it collected from consumers as part of a settlement with the Federal Trade Commission over allegations that the company deceived some consumers about how it accesses and uses their personal emails.

In a complaint, the FTC alleges that Unrollme Inc., falsely told consumers that it would not “touch” their personal emails, when in fact it was sharing the users’ email receipts (e-receipts) with its parent company, Slice Technologies, Inc.

E-receipts are emails sent to consumers following a completed transaction and can include, among other things, the user’s name, billing and shipping addresses, and information about products or services purchased by the consumer. Slice uses anonymous purchase information from Unrollme users’ e-receipts in the market research analytics products it sells.

Unrollme helps users unsubscribe from unwanted subscription emails and consolidates wanted email subscriptions into one daily email called the Rollup. The service requires users to provide Unrollme with access to their email accounts.

“What companies say about privacy matters to consumers,” said Andrew Smith, Director of the FTC’s Bureau of Consumer Protection. “It is unacceptable for companies to make false statements about whether they collect information from personal emails.”</p>

Pity there isn't a fine too. Unrollme "closed" to EU customers back in May 2018 because it couldn't comply with GDPR; it had been discovered in early 2017 selling its data to Uber and others. (The <a href="">CEO's mea culpa</a> from April 2017, which I linked to here, has mysteriously vanished from the company blog, which is filled instead with <a href="">utter pap</a>, and it doesn't seem to figure in the retrospective. I did some digging on the Wayback Machine: it was removed from the blog some time between mid-July and early August of 2018.)
unroll  ftc  privacy  email 
5 weeks ago by charlesarthur
Black Hat: GDPR privacy law exploited to reveal personal data • BBC News
Dave Lee:
<p>About one in four companies revealed personal information to a woman's partner, who had made a bogus demand for the data by citing an EU privacy law.
The security expert contacted dozens of UK and US-based firms to test how they would handle a "right of access" request made in someone else's name.

In each case, he asked for all the data that they held on his fiancée…

He declined to identify the organisations that had mishandled the requests, but said they had included:<br />• a UK hotel chain that shared a complete record of his partner's overnight stays<br />• two UK rail companies that provided records of all the journeys she had taken with them over several years<br />• a US-based educational company that handed over her high school grades, mother's maiden name and the results of a criminal background check survey

[University of Oxford-based researcher James] Pavur has, however, named some of the companies that he said had performed well. He said they included:<br />• the supermarket Tesco, which had demanded a photo ID<br />• the domestic retail chain Bed Bath and Beyond, which had insisted on a telephone interview<br />• American Airlines, which had spotted that he had uploaded a blank image to the passport field of its online form.</p>

Social engineering: still one of the best kinds of hacking.
dataprotection  privacy  gdpr  hacking 
5 weeks ago by charlesarthur
South Wales police to use facial recognition apps on phones • The Guardian
Ian Sample:
<p>Liberty, the campaign group, called the announcement “chilling”, adding that it was “shameful” that South Wales police had chosen to press ahead with handheld facial recognition systems even as it faced a court challenge over the technology.

In May, Liberty brought a legal case against the force for its recent use of automated facial recognition on city streets, at music festivals, and at football and rugby matches.

South Wales police said the technology would secure quicker arrests and enable officers to resolve cases of mistaken identity without the need for a trip to a station or custody suite. The officers testing the app would be under “careful supervision”, it said in a statement.

“This new app means that, with a single photo, officers can easily and quickly answer the question of ‘are you really the person we are looking for?’,” said deputy chief constable Richard Lewis. “When dealing with a person of interest during their patrols in our communities officers will be able to access instant, actionable data, allowing them to identify whether the person stopped is, or is not, the person they need to speak to, without having to return to a police station.”</p>

There is next to zero information about which company built this app, what its accuracy is, and a whole lot more. Is it basically an identikit system on a phone?
privacy  apps  facialrecognition  police 
6 weeks ago by charlesarthur
Apple halts practice of contractors listening in to users on Siri • The Guardian
Alex Hern:
<p>Contractors working for Apple in Ireland said they were not told about the decision when they arrived for work on Friday morning, but were sent home for the weekend after being told the system they used for the grading “was not working” globally. Only managers were asked to stay on site, the contractors said, adding that they had not been told what the suspension means for their future employment.

The suspension was prompted by a report in the Guardian last week that revealed the company’s contractors “regularly” hear confidential and private information while carrying out the grading process, including in-progress drug deals, medical details and people having sex.

The bulk of that confidential information was recorded through accidental triggers of the Siri digital assistant, a whistleblower told the Guardian. The Apple Watch was particularly susceptible to such accidental triggers, they said. “The regularity of accidental triggers on the watch is incredibly high … The watch can record some snippets that will be 30 seconds – not that long, but you can gather a good idea of what’s going on.</p>

One week from the original report to this change. That's impressive - more so given that Bloomberg had a weaker form of this report much earlier this year but didn't get anything like the detail. The power of newsprint: it makes a difference having something you can put on a chief executive's desk (even if you have to fly it out there).

Apple has indicated that it's eventually going to restart this, but on an opt-in basis.
apple  privacy  data  siri 
6 weeks ago by charlesarthur
Apple contractors 'regularly hear confidential details' on Siri recordings • The Guardian
Alex Hern:
<p>Apple contractors regularly hear confidential medical information, drug deals, and recordings of couples having sex, as part of their job providing quality control, or “grading”, the company’s Siri voice assistant, the Guardian has learned.

Although Apple does not explicitly disclose it in its consumer-facing privacy documentation, a small proportion of Siri recordings are passed on to contractors working for the company around the world. They are tasked with grading the responses on a variety of factors, including whether the activation of the voice assistant was deliberate or accidental, whether the query was something Siri could be expected to help with and whether Siri’s response was appropriate.

Apple says the data “is used to help Siri and dictation … understand you better and recognise what you say”.

But the company does not explicitly state that that work is undertaken by humans who listen to the pseudonymised recordings.</p>

So there's the trifecta: Amazon, Google and Apple all send some audio to humans to listen to. In its way, rather like the revelation that <a href="">your smartphone maps where you go and stores it</a>, which we didn't intuitively know in 2011 - but it turns out everyone did that too.
apple  siri  privacy 
7 weeks ago by charlesarthur
My browser, the spy: how extensions slurped up browsing histories from 4m users • Ars Technica
Dan Goodin:
<p>DataSpii begins with browser extensions—available mostly for Chrome but in more limited cases for Firefox as well—that, by Google's account, had as many as 4.1 million users. These extensions collected the URLs, webpage titles, and in some cases the embedded hyperlinks of every page that the browser user visited. Most of these collected Web histories were then published by a fee-based service called Nacho Analytics, which markets itself as “God mode for the Internet” and uses the tag line “See Anyone’s Analytics Account.”

Web histories may not sound especially sensitive, but a subset of the published links led to pages that are not protected by passwords—but only by a hard-to-guess sequence of characters (called tokens) included in the URL. Thus, the published links could allow viewers to access the content at these pages. (Security practitioners have long discouraged the publishing of sensitive information on pages that aren't password protected, but the practice remains widespread.)

According to the researcher who <a href="">discovered and extensively documented the problem</a>, this non-stop flow of sensitive data over the past seven months has resulted in the publication of links to:

• Home and business surveillance videos hosted on Nest and other security services<br />• Tax returns, billing invoices, business documents, and presentation slides posted to, or hosted on, Microsoft OneDrive,, and other online services<br />• Vehicle identification numbers of recently bought automobiles, along with the names and addresses of the buyers<br />• Patient names, the doctors they visited, and other details listed by DrChrono, a patient care cloud platform that contracts with medical services<br />• Travel itineraries hosted on Priceline,, and airline websites<br />• Facebook Messenger attachments and Facebook photos, even when the photos were set to be private.

Nacho Analytics turns out to have been grabbing data from tons of extensions, listed in the story.
browser  privacy 
8 weeks ago by charlesarthur
FaceApp responds to privacy concerns • TechCrunch
Natasha Lomas:
<p>The tl;dr here is that concerns had been raised that FaceApp, a Russian startup, uploads users’ photos to the cloud — without making it clear to them that processing is not going on locally on their device.

Another issue raised by FaceApp users was that the iOS app appears to be overriding settings if a user had denied access to their camera roll, after people reported they could still select and upload a photo — i.e. despite the app not having permission to access their photos.

As we reported earlier, the latter is actually allowed behavior in iOS — which gives users the power to choose to block an app from full camera roll access but select individual photos to upload if they so wish.

This isn’t a conspiracy, though Apple could probably come up with a better way of describing the permission, as we suggested earlier.

On the wider matter of cloud processing of what is, after all, facial data, FaceApp confirms that most of the processing needed to power its app’s beautifying/gender-bending/age-accelerating/-defying effects is done in the cloud.

Though it claims it only uploads photos users have specifically selected for editing. Security tests have also not found evidence the app uploads a user’s entire camera roll.</p>

The app <a href="">first surfaced two years ago</a>, so that's a pretty tenacious startup.
faceapp  russia  privacy 
9 weeks ago by charlesarthur
The US, China, and case 311/18 on Standard Contractual Clauses • European Law Blog
Peter Swire:
<p>In the aftermath of the 2015 case [on Facebook transferring data to the US, which found against Facebook and invalidated those transfers], most companies that transfer data from the EU were left to rely on contract standards promulgated by the European Commission, called Standard Contractual Clauses (SCC).  The SCCs set strict requirements for handling personal data by the company that transfers the data.

The legality of SCCs is now before the CJEU, with a similar challenge to Privacy Shield awaiting the outcome of the first case.

A CJEU decision that invalidates SCCs would result in the prohibition of most transfers of personal data from the EU to the US. The case primarily concerns the quality of legal safeguards in the United States for government surveillance, especially by the NSA. (Note – I was selected to provide independent expert testimony on US law by Facebook; under Irish law, I was prohibited from contact with Facebook while serving as an expert, and I have played no further role in the litigation.)

A decision invalidating SCCs, however, would pose a terrible dilemma to EU courts and decisionmakers.

At a minimum, the CJEU might “merely” prohibit data flows to the US due to a finding of lack of sufficient safeguards, notably an insufficient remedy for an EU data subject who makes a subject access request to the NSA. The EU on this approach would continue to authorize the transfer of personal data to countries not directly covered by the Court decision, such as, for example, China.  This approach would be completely unjustified: it would prohibit transfers of data to the US, which has numerous legal safeguards characteristic of a state under the rule of law, while allowing such transfers toward China, where the protection of personal data vis-à-vis the government is essentially non-existent.</p>
data  privacy  europe  china 
9 weeks ago by charlesarthur
German privacy watchdog: Microsoft’s Office 365 cannot be used in public schools • WinBuzzer
Luke Jones:
<p>A data authority in the German State of Hesse has warned Microsoft’s Office 365 cannot be used in schools. Michael Ronellenfitsch, Hesse’s data protection commissioner, says the standard Office 365 configuration creates privacy issues.

He warned this week that data stored in the cloud by the productivity suite could be accessed in the United States. Specifically, personal information from teachers and students would be in the cloud. Ronellenfitsch says even if the data was held in centers in Europe, it is still “exposed to possible access by US authorities”.

The commissioner says public institutions in Hesse and across Germany “have a special responsibility with regard to the permissibility and traceability of the processing of personal data."…

…It is worth noting that Ronellenfitsch previously endorsed the use of Office 365 in schools. Back in 2017, he said schools could use the suite under certain conditions that matched Germany’s data protection compliance laws. At the time, Microsoft was partnered with Deutsche Telekom and offering the “Germany Cloud” initiative that is now deprecated.</p>

This isn't an opportunity for Google or Apple: they don't meet the authority's criteria on privacy and data either.
privacy  data  microsoft 
9 weeks ago by charlesarthur
Yep, human workers are listening to recordings from Google Assistant, too • The Verge
<p>In the story by VRT NWS, which focuses on Dutch and Flemish speaking Google Assistant users, the broadcaster reviewed a thousand or so recordings, 153 of which had been captured accidentally. A contractor told the publication that he transcribes around 1,000 audio clips from Google Assistant every week. In one of the clips he reviewed he heard a female voice in distress and said he felt that “physical violence” had been involved. “And then it becomes real people you’re listening to, not just voices,” said the contractor.

Tech companies say that sending audio clips to humans to be transcribed is an essential process for improving their speech recognition technology. They also stress that only a small percentage of recordings are shared in this way. A spokesperson for Google told Wired that just 0.2 percent of all recordings are transcribed by humans, and that these audio clips are never presented with identifying information about the user.

However, that doesn’t stop individuals revealing sensitive information in the recording themselves. And companies are certainly not upfront about this transcription process. The privacy policy page for Google Home, for example, does not mention the company’s use of human contractors, or the possibility that Home might mistakenly record users.

These obfuscations could cause legal trouble for the company, says Michael Veale, a technology privacy researcher at the Alan Turing Institute in London. He told Wired that this level of disclosure might not meet the standards set by the EU’s GDPR regulations. “You have to be very specific on what you’re implementing and how,” said Veale. “I think Google hasn’t done that because it would look creepy.”</p>

Guess it's time for Apple to say yes or no to this question, just for completeness. But this certainly backs up why I don't activate any Google Assistant or Alexa devices. Google <a href="">has a blogpost about this</a>, complaining about the worker "leaking confidential Dutch audio data". Sure, but if the data hadn't been there in the first place...
google  ai  privacy  speech 
9 weeks ago by charlesarthur
Google’s 4,000-word privacy policy is a secret history of the internet • The New York Times
Charlie Warzel:
<p>The late 1990s was a simpler time for Google. The nascent company was merely a search engine, and Gmail, Android and YouTube were but glimmers in the startup’s eye. Google’s first privacy policy reflected that simplicity. It was short and earnest, a quaint artifact of a different time in Silicon Valley, when Google offered 600 words to explain how it was collecting and using personal information.

That version of the internet (and Google) is gone. Over the past 20 years, that same privacy policy has been rewritten into a sprawling 4,000-word explanation of the company’s data practices.

This evolution, across two decades and 30 versions, is the story of the internet’s transformation through the eyes of one of its most crucial entities. The web is now terribly complex, and Google has a privacy policy to match.</p>

The visuals for this - because it is done through visuals - are lovely, but also telling. The longer the privacy policy, the less private you are to the company.
google  internet  privacy  gdpr 
9 weeks ago by charlesarthur
Is Firefox better than Chrome? It comes down to privacy • The Washington Post
Geoffrey Fowler:
<p>Seen from the inside, [Google's] Chrome browser looks a lot like surveillance software.

Lately I’ve been investigating the secret life of my data, running experiments to see what technology really gets up to under the cover of privacy policies that nobody reads. It turns out, having the world’s biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop.

It made me decide to ditch Chrome for a new version of nonprofit Mozilla’s Firefox, which has default privacy protections. Switching involved less inconvenience than you might imagine.

My tests of Chrome vs. Firefox unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker “cookies” that Chrome would have ushered right onto my computer but were automatically blocked by Firefox. These little files are the hooks that data firms, including Google itself, use to follow what websites you visit so they can build profiles of your interests, income and personality.

Chrome welcomed trackers even at websites you would think would be private. I watched Aetna and the Federal Student Aid website set cookies for Facebook and Google. They surreptitiously told the data giants every time I pulled up the insurance and loan service’s login pages.</p>
google  chrome  privacy  firefox 
10 weeks ago by charlesarthur
Kids’ apps are filled with manipulative ads, according to a new study • Vox
Chavie Lieber:
<p>suddenly, the game is interrupted. A bubble pops up with a new mini game idea, and when a child clicks on the bubble, they are invited to purchase it for $1.99, or unlock all new games for $3.99. There’s a red X button to cancel the pop-up, but if the child clicks on it, the character on the screen shakes its head, looks sad, and even begins to cry.

The game, developed by the Slovenian software company Bubadu and intended for kids as young as 6, is marketed as “educational” because it teaches kids about different types of medical treatments.

But it’s structured so that the decision to not buy anything from the game is wrong; the child is shamed into thinking they’ve done something wrong. Pulling such a move on a young gamer raises troubling ethical questions, especially as children’s gaming apps — and advertising within them — have become increasingly popular.

On Tuesday, a group of 22 consumer and public health advocacy groups sent a letter to the Federal Trade Commission calling on the organization to look into the questionable practices of the children’s app market. The <a href="">letter asks the FTC to investigate apps</a> that “routinely lure young children to make purchases and watch ads” and hold the developers of these games accountable.</p>
ftc  advertising  regulation  privacy  children 
10 weeks ago by charlesarthur
Mozilla: No plans to enable DNS-over-HTTPS by default in the UK • ZDNet
Catalin Cimpanu:
<p>After the UK's leading industry group of internet service providers named Mozilla an "Internet Villain" because of its intentions to support a new DNS security protocol named DNS-over-HTTPS (DoH) inside Firefox, the browser maker told ZDNet that such plans don't currently exist.

"We have no current plans to enable DoH by default in the UK," a spokesperson told ZDNet last night.

The browser maker's decision comes after both ISPs and the UK government, through MPs and GCHQ, criticized Mozilla and fellow browser maker Google over the last two months for their plans to support DNS-over-HTTPS.

The technology, if enabled, would thwart the ability of some internet service providers to sniff customer traffic in order to block users from accessing bad sites, such as those hosting copyright-infringing materials, child abuse images, and extremist material.

UK ISPs block websites at the government's request; they also block other sites voluntarily at the request of various child protection groups, and they block adult sites as part of the parental controls options they provide to their customers.

Not all UK ISPs will be impacted by Mozilla and Google supporting DNS-over-HTTPS, as some use different technologies to filter customers' traffic…</p>

This is the story which <a href="">came out horrendously confused in the Sunday Times</a> about three months ago, talking about "plans to encrypt Chrome", which left everyone who understands what the words actually mean puzzled.
privacy  isp  uk  dns  https 
10 weeks ago by charlesarthur
Over 1,300 Android apps scrape personal data regardless of permissions • TechRadar
David Lumb:
<p>Researchers at the International Computer Science Institute (ICSI) created a controlled environment to test 88,000 apps downloaded from the US Google Play Store. They peeked at what data the apps were sending back, compared it to what users were permitting and - surprise - <a href="">1,325 apps were forking over specific user data they shouldn’t have</a>.

Among the test pool were “popular apps from all categories,” according to ICSI’s report. 

The researchers disclosed their findings to both the US Federal Trade Commission and Google (receiving a bug bounty for their efforts), though the latter stated a fix would only be coming in the full release of Android Q, according to CNET.

Before you get annoyed at yet another unforeseen loophole, those 1,325 apps didn’t exploit a lone security vulnerability - they used a variety of angles to circumvent permissions and get access to user data, including geolocation, emails, phone numbers, and device-identifying IMEI numbers.

One way apps determined user locations was to get the MAC addresses of connected WiFi base stations from the ARP cache, while another used picture metadata to discover specific location info even if a user didn’t grant the app location permissions. The latter is what the ICSI researchers described as a “side channel” - using a circuitous method to get data.

They also noticed apps using “covert channels” to snag info: third-party code libraries developed by a pair of Chinese companies secretly used the SD card as a storage point for the user’s IMEI number. If a user allowed a single app using either of those libraries access to the IMEI, it was automatically shared with other apps.</p>

Android Q isn't going to be universally adopted by any means. Data leaks are going to go on.
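The picture-metadata side channel is worth spelling out: photos carry EXIF GPS tags as degree/minute/second values, so any app that can read a photo can recover a precise location without ever requesting the location permission. A sketch of the coordinate arithmetic (the tag semantics follow the EXIF spec; parsing an actual image file is omitted):

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert an EXIF GPS degree/minute/second triple to decimal degrees.
    ref is the GPSLatitudeRef/GPSLongitudeRef tag: 'N'/'E' positive, 'S'/'W' negative."""
    decimal = degrees + minutes / 60 + seconds / 3600
    return -decimal if ref in ("S", "W") else decimal

# A photo tagged 51 deg 30' 26" N, 0 deg 7' 39" W resolves to central London —
# no location permission needed, only read access to the photo.
lat = dms_to_decimal(51, 30, 26, "N")
lon = dms_to_decimal(0, 7, 39, "W")
print(round(lat, 4), round(lon, 4))
```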
android  data  privacy  security 
10 weeks ago by charlesarthur
Superhuman is Spying on You » Mike Industries
Mike Davidson has been using Superhuman - you know, the $30 per month email service that does it all for you - for a while:
<p>when I see great design, I proactively try to spread it as far and wide as possible.

What I see in Superhuman though is a company that has mistaken taking advantage of people for good design. They’ve identified a feature that provides value to some of their customers (i.e. seeing if someone has opened your email yet) and they’ve trampled the privacy of every single person they send email to in order to achieve that. Superhuman never asks the person on the other end if they are OK with sending a read receipt (complete with timestamp and geolocation). Superhuman never offers a way to opt out. Just as troublingly, Superhuman teaches its user to surveil by default. I imagine many users sign up for this, see the feature, and say to themselves “Cool! Read receipts! I guess that’s one of the things my $30 a month buys me.”

When products are introduced into the market with behaviors like this, customers are trained to think they are not just legal but also ethical. They don’t always take the next step and ask themselves “wait, should I be doing this?” It’s kind of like if you walked by someone’s window at night and saw them naked. You could do one of two things: a) look away and get out of there, realizing you saw something that person wouldn’t want you to see, or b) keep staring, because if they really didn’t want anyone to see them, they should have closed their blinds. It’s two ways of looking at the world, and Superhuman is not just allowing for option B but <em>actively causing it to happen</em>.</p>

Tracking pixels like that aren't unique to Superhuman; PR companies use them all the time, and others too. But that's different, as Davidson explains. He deals with people's responses in his blogpost (including one from an investor in Superhuman), and its legal boilerplate. In short: Superhuman has been <a href="">milkshake ducked</a>.
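For anyone unfamiliar with how a read receipt like this works: the sender embeds a unique, invisible one-pixel image in the HTML body, and when the recipient's mail client fetches it, the sender's server logs who opened what, and when. A minimal sketch of the server side (names are hypothetical; real trackers also derive geolocation from the requesting IP):

```python
import base64
from datetime import datetime, timezone

# The classic 1x1 transparent GIF, served as the "tracking pixel".
PIXEL_GIF = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
)

open_log = []  # a real tracker would write to a database

def serve_pixel(message_id: str, user_agent: str):
    """Handle GET /pixel/<message_id>.gif: record the open, return the image."""
    open_log.append({
        "message_id": message_id,  # unique per recipient, so opens are attributable
        "opened_at": datetime.now(timezone.utc).isoformat(),
        "user_agent": user_agent,  # reveals the reader's device and mail client
    })
    return {"Content-Type": "image/gif"}, PIXEL_GIF

headers, body = serve_pixel("msg-42", "Mail/16.0 (iPhone)")
print(headers["Content-Type"], len(body), "bytes;", len(open_log), "open logged")
```

The recipient sees nothing; the sender's server sees every open. That asymmetry is the whole of Davidson's complaint.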
superhuman  email  surveillance  privacy 
11 weeks ago by charlesarthur
The Pentagon has a laser that can identify people from a distance—by their heartbeat • MIT Technology Review
David Hambling:
<p>A new device, developed for the Pentagon after US Special Forces requested it, can identify people without seeing their face: instead it detects their unique cardiac signature with an infrared laser. While it works at 200 meters (219 yards), longer distances could be possible with a better laser. “I don’t want to say you could do it from space,” says Steward Remaly, of the Pentagon’s Combatting Terrorism Technical Support Office, “but longer ranges should be possible.”

Contact infrared sensors are often used to automatically record a patient’s pulse. They work by detecting the changes in reflection of infrared light caused by blood flow. By contrast, the new device, called Jetson, uses a technique known as laser vibrometry to detect the surface movement caused by the heartbeat. This works through typical clothing like a shirt and a jacket (though not thicker clothing such as a winter coat)…

…Cardiac signatures are already used for security identification. The Canadian company Nymi has developed a wrist-worn pulse sensor as an alternative to fingerprint identification. The technology has been trialed by the Halifax building society in the UK.</p>
privacy  biometrics  technology  heart 
11 weeks ago by charlesarthur
Google's new reCaptcha has a dark side • Fast Company
Katharine Schwab:
<p>According to two security researchers who’ve studied reCaptcha, one of the ways that Google determines whether you’re a malicious user or not is whether you already have a Google cookie installed on your browser. It’s the same cookie that allows you to open new tabs in your browser and not have to re-log in to your Google account every time. But according to Mohamed Akrout, a computer science PhD student at the University of Toronto who has studied reCaptcha, it appears that Google is also using its cookies to determine whether someone is a human in reCaptcha v3 tests. Akrout wrote in an April paper about how reCaptcha v3 simulations that ran on a browser with a connected Google account received lower risk scores than browsers without a connected Google account. “If you have a Google account it’s more likely you are human,” he says. Google did not respond to questions about the role that Google cookies play in reCaptcha.

With reCaptcha v3, technology consultant Marcos Perona and Akrout’s tests both found that their reCaptcha scores were always low risk when they visited a test website on a browser where they were already logged into a Google account. Alternatively, if they went to the test website from a private browser like Tor or a VPN, their scores were high risk.

To make this risk-score system work accurately, website administrators are supposed to embed reCaptcha v3 code on all of the pages of their website, not just on forms or log-in pages. Then, reCaptcha learns over time how their website’s users typically act, helping the machine learning algorithm underlying it to generate more accurate risk scores.</p>

But that also means Google is seeing everything you do. Okayyy but.. it does anyway?
google  privacy  captcha 
11 weeks ago by charlesarthur
Before you use a password manager • Medium
Stuart Schechter:
<p>In this article, I’ll start by examining the benefits and risks of using a password manager. It’s hard to overstate the importance of protecting the data in your password manager, and having a recovery strategy for that data, so I’ll cover that next. I’ll then present a low-risk approach to experimenting with using a password manager, which will help you understand the tough choices you’ll need to make before using it for your most-important passwords. I’ll close with a handy list of the most important decisions you’ll need to make when using a password manager.

There are a lot of password managers to choose from. There’s a password manager built into every major web browser today, and many stand-alone password managers that work across browsers. In addition to remembering your passwords, most password managers will type your password into login forms. The better ones will create randomly-generated passwords for you, ensuring that you’re not using easily-guessed passwords or re-using passwords between sites. Some will even identify passwords you’ve re-used between sites and help you replace them.</p>

The low-risk approach seems like a good plan. It's the idea of jumping in that many people find problematic.
security  software  privacy  password 
12 weeks ago by charlesarthur
The new wilderness • Idle Words
Maciej Cegłowski on the erosion of what he calls "ambient privacy" - the expectation that your interactions aren't monitored or remembered:
<p>Ambient privacy is particularly hard to protect where it extends into social and public spaces outside the reach of privacy law. If I’m subjected to facial recognition at the airport, or tagged on social media at a little league game, or my public library installs an always-on Alexa microphone, no one is violating my legal rights. But a portion of my life has been brought under the magnifying glass of software. Even if the data harvested from me is anonymized in strict conformity with the most fashionable data protection laws, I’ve lost something by the fact of being monitored.

One can argue that ambient privacy is a relic of an older world, just like the ability to see the stars in the night sky was a pleasant but inessential feature of the world before electricity. This is the argument Mr. Zuckerberg made when he unilaterally removed privacy protections from every Facebook account back in 2010. Social norms had changed, he explained at the time, and Facebook was changing with them. Presumably now they have changed back.

My own suspicion is that ambient privacy plays an important role in civic life. When all discussion takes place under the eye of software, in a for-profit medium working to shape the participants’ behavior, it may not be possible to create the consensus and shared sense of reality that is a prerequisite for self-government. If that is true, then the move away from ambient privacy will be an irreversible change, because it will remove our ability to function as a democracy.

All of this leads me to see a parallel between privacy law and environmental law, another area where a technological shift forced us to protect a dwindling resource that earlier generations could take for granted.</p>

Always a must-read; easily comprehensible phrasing, but conveying deep meaning.
google  facebook  privacy  politics  democracy 
june 2019 by charlesarthur
LaLiga’s app listened in on fans to catch bars illegally streaming soccer • The Verge
Dami Lee:
<p>Spain’s data protection agency has fined the country’s soccer league, LaLiga, €250,000 (about $280,000) for allegedly violating EU data privacy and transparency laws. The app, which is used for keeping track of games and stats, was using the phone’s microphone and GPS to track bars illegally streaming soccer games, Spanish newspaper El País reported.

Using a Shazam-like technology, the app would record audio to identify soccer games, and use the geolocation of the phone to locate which bars were streaming without licenses. El Diario reports that fans have downloaded that app more than 10 million times, essentially turning them into undercover narcs. The league claims that the app asks for permission to access the phone’s microphone and location, and that the data — which is received as a code, not audio — is only used to detect LaLiga streams.</p>

You've got to admit: that is clever. Sneaky, but ever so clever. Of course people will be at bars with their smartphones. Of course.
privacy  hacking  smartphone 
june 2019 by charlesarthur
We read 150 privacy policies. They were an incomprehensible disaster • The New York Times
Kevin Litman-Navarro:
<p>For comparison, here are the scores for some classic texts. Only Immanuel Kant’s famously difficult “Critique of Pure Reason” registers a more challenging readability score than Facebook’s privacy policy. (To calculate their reading time, I measured the first chapter of each text.)

The vast majority of these privacy policies exceed the college reading level. And according to the most recent literacy survey conducted by the National Center for Education Statistics, over half of Americans may struggle to comprehend dense, lengthy texts. That means a significant chunk of the data collection economy is based on consenting to complicated documents that many Americans can’t understand.

The BBC has an unusually readable privacy policy. It’s written in short, declarative sentences, using plain language. Here’s how the policy outlines the BBC’s guidelines for collecting and using personal data:
<p>“We have to have a valid reason to use your personal information. It's called the ‘lawful basis for processing.’ Sometimes we might ask your permission to do things, like when you subscribe to an email. Other times, when you'd reasonably expect us to use your personal information, we don't ask your permission, but only when: the law says it's fine to use it, and it fits with the rights you have.”</p>

Airbnb’s privacy policy, on the other hand, is particularly inscrutable. It’s full of long, jargon-laden sentences that obscure Airbnb’s data practices and provides cover to use data in expansive ways…

“You’re confused into thinking these are there to inform users, as opposed to protect companies,” said Albert Gidari, the consulting director of privacy at the Stanford Center for Internet and Society.</p>

Amazing piece of work. Plaudits to the BBC, at least.
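The scores in the piece come from standard readability formulas. A rough sketch of Flesch Reading Ease, the best-known of them (higher is easier; the syllable counter here is a crude vowel-group heuristic, so treat the output as approximate rather than a reproduction of the NYT's methodology):

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels — crude, but standard for quick scoring."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

# The BBC's plain sentence scores far better than legalese of similar length.
plain = "We have to have a valid reason to use your personal information."
dense = ("Notwithstanding the aforementioned obligations, supplementary "
         "clarifications regarding jurisdictional applicability remain determinative.")
print(round(flesch_reading_ease(plain), 1), round(flesch_reading_ease(dense), 1))
```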
privacy  obscuration 
june 2019 by charlesarthur
Facebook turned off search features used to catch war criminals, child predators, and other bad actors • Buzzfeed News
Craig Silverman:
<p>In August 2017, the International Criminal Court issued a warrant for [Libyan military commander Mahmoud Mustafa Busayf al-Werfalli] for allegedly participating in or ordering the execution of 33 people in Benghazi, Libya. At the core of the evidence against him are seven videos, some of which were found on Facebook, that allegedly show Werfalli committing crimes. His case marked the first time the ICC issued a warrant based largely on material gathered from social media.

Now that kind of work is being put in jeopardy, according to Alexa Koenig, executive director of the Human Rights Center at the University of California, Berkeley. She said Facebook’s recent decision to turn off the features in its graph search product could be a “disaster” for human rights research.

“To make it even more difficult for human rights actors and war crimes investigators to search that site—right as they’re realizing the utility of the rich trove of information being shared online for documenting abuses—is a potential disaster for the human rights and war crimes community,” she said. “We need Facebook to be working with us and making access to such information easier, not more difficult.”

Simply put, Facebook graph search is a way to receive an answer to a specific query on Facebook, such as “people in Nebraska who like Metallica.” Using graph search, it’s possible to find public — and only public — content that’s not easily accessed via keyword searches.

Late last week, Facebook turned off several features that have long been accessible via graph search, such as the ability to find public videos that a specific Facebook user was tagged in. </p>
facebook  search  privacy 
june 2019 by charlesarthur
Apple launches 'Sign in with Apple' button for apps, ‘no tracking’ login • 9to5 Mac
Benjamin Mayo:
<p>Apple announced a new Sign in with Apple button as part of its iOS 13 announcements. The button offers Apple ID single-sign on functionality similar to sign-in buttons from Twitter, Facebook or Google.

Apple is marketing this as a privacy-secure sign-in option. Apple will mask user email addresses and other personal information, whilst still allowing the apps to contact users indirectly.

Users select what information to share with the destination app. You can share your real email address with the third-party app, or use the ‘hide my email’ option to forward email onwards. In the latter case, the app would only see a random anonymous email address.

Of course, apps must update to integrate the ‘Sign in with Apple’ button. A lot of apps may not want to add the Apple ID login because they cannot access the customer data they want.</p>

Logical expectation is that Apple will push it on its devices, so apps and sites may feel they need to support it. But with the tech landscape as it is, there might be some reluctance to give up the data they can slurp up via Google or Facebook sign-in. Those sites and apps aren't on your side. They're on their own side.
apple  data  privacy  signon 
june 2019 by charlesarthur
iPhone privacy is broken…and apps are to blame • WSJ
Joanna Stern:
<p>Congratulations! You’ve bought an iPhone! You made one of the best privacy-conscious decisions... until you download an app from Apple’s App Store. Most are littered with secret trackers, slurping up your personal data and sending it to more places than you can count.

Over the last few weeks, my colleague Mark Secada and I tested 80 apps, most of which are promoted in Apple’s App Store as “Apps We Love.” All but one used third-party trackers for marketing, ads or analytics. The apps averaged four trackers apiece.

Some apps send personal data without ever informing users in their privacy policies, others just use industry-accepted—though sometimes shady—ad-tracking methods. As my colleague Sam Schechner reported a few months ago (also with Mark’s assistance), many apps send info to Facebook, even if you’re not logged into its social networks. In our new testing, we found that many also send info to other companies, including Google and mobile marketers, for reasons that are not apparent to the end user.

We focused on the iPhone in our testing—largely because of Apple’s aggressive marketing of personal privacy. However, apps in Google’s Play Store for Android use the same techniques. In some cases, when it comes to providing on-device information to developers and trackers, Android is worse. Google recently updated its app permissions and says it is taking a deeper look at how apps access personal user information.</p>

Stern must be furious that her former colleague Geoff Fowler, now at the Washington Post, got ahead of her with the story - his appeared a day or two before hers - but it shows that we've become complacent about apps, and especially the third-party trackers they tend to incorporate.
apple  apps  data  privacy 
may 2019 by charlesarthur
Apple promises privacy, but iPhone apps share your data with trackers, ad companies and research firms • The Washington Post
Geoffrey Fowler:
<p>You might assume you can count on Apple to sweat all the privacy details. After all, it touted in a recent ad, “What happens on your iPhone stays on your iPhone.” My investigation suggests otherwise.

iPhone apps I discovered tracking me by passing information to third parties — just while I was asleep — include Microsoft OneDrive, Intuit’s Mint, Nike, Spotify, The Washington Post and IBM’s the Weather Channel. One app, the crime-alert service Citizen, shared personally identifiable information in violation of its published privacy policy.

And your iPhone doesn’t only feed data trackers while you sleep. In a single week, I encountered over 5,400 trackers, mostly in apps, not including the incessant Yelp traffic. According to privacy firm Disconnect, which helped test my iPhone, those unwanted trackers would have spewed out 1.5 gigabytes of data over the span of a month. That’s half of an entire basic wireless service plan from AT&T.

“This is your data. Why should it even leave your phone? Why should it be collected by someone when you don’t know what they’re going to do with it?” says Patrick Jackson, a former National Security Agency researcher who is chief technology officer for Disconnect. He hooked my iPhone into special software so we could examine the traffic. “I know the value of data, and I don’t want mine in any hands where it doesn’t need to be,” he told me.

In a world of data brokers, Jackson is the data breaker. He developed <a href="">an app called Privacy Pro</a> that identifies and blocks many trackers. If you’re a little bit techie, I recommend trying the free iOS version to glimpse the secret life of your iPhone.</p>

Certainly worth a try. That's a dismaying lot of trackers (hellooo Washington Post, for which Fowler writes). Expect Apple to try to crack down on this in a future iOS release - though the US could try something like GDPR. I wonder what those apps do in Europe.
mobile  privacy  apps 
may 2019 by charlesarthur
DuckDuckGo CEO Gabe Weinberg talks “do not track” legislation on Kara Swisher podcast Recode Decode • Vox
Eric Johnson:
<p>People don’t realize just how much they’re being tracked online, says DuckDuckGo CEO Gabe Weinberg — but he’s confident that once they learn how much tech companies like Google and Facebook are quietly slurping up their private data, they will demand a change.

“They’re getting purchase history, location history, browsing history, search history,” Weinberg said on the latest episode of Recode Decode with Kara Swisher. “And then when you go to, now, a website that has advertising from one of these networks, there’s a real-time bidding against you, as a person. There’s an auction to sell you an ad based on all this creepy information you didn’t even realize people captured.”

DuckDuckGo offers a privacy-minded search engine that has about 1 percent of the search market share in the US (Google’s share is more than 88 percent), as well as a free browser extension for Firefox and Google Chrome that blocks ad networks from tracking you. But rather than waiting for a comprehensive privacy bill to lurch through Congress over many years, he’s proposed a small, simple tweak to US regulations that might help: Make not being tracked by those networks the default, rather than something you have to opt into.

“The fact that consumers have already adopted it and it’s in the browser is just an amazing legislative opportunity, just give it teeth,” he said. “It’s actually a better mechanism for privacy laws because once you have this setting and it works, you don’t have to deal with all the popups anymore. You just set it once, and then sites can’t track you.”</p>

Weinberg is always good value. Also: DuckDuckGo is profitable; it doesn't have huge VC funding to chase to repay millions of times over.
search  duckduckgo  privacy 
may 2019 by charlesarthur
Inside Apple's top secret testing facilities where iPhone defences are forged in temperatures of -40C • The Independent
Andrew Griffin:
<p>The cost of those [Apple] products has led to some criticism from Apple's rivals, who have said that it is the price of privacy; that Apple is fine talking about how little data it collects, but it is only able to do so because of the substantial premiums they command. That was the argument recently made by Google boss Sundar Pichai, in just one of a range of recent broadsides between tech companies about privacy.

"Privacy cannot be a luxury good offered only to people who can afford to buy premium products and services," [Google chief Sundar] Pichai wrote in an op-ed in the New York Times. He didn't name Apple, but he didn't need to.

Pichai argued that the collection of data helps make technology affordable, echoing a sentiment often heard about Apple, that their commitment to privacy is only possible because their products are expensive and it can afford to take such a position. Having a more lax approach to privacy helps keep the products made by almost all of the biggest technology products in the world – from Google to Instagram – free, at least at the point of use.

"I don't buy into the luxury good dig," says Federighi, giving the impression he was genuinely surprised by the public attack.

"On the one hand gratifying that other companies in space over the last few months, seemed to be making a lot of positive noises about caring about privacy. I think it's a deeper issue than then, what a couple of months and a couple of press releases would make. I think you've got to look fundamentally at company cultures and values and business model. And those don't change overnight.

"But we certainly seek to both set a great example for the world to show what's possible to raise people's expectations about what they should expect the products, whether they get them from us or from other people. And of course, we love, ultimately, to sell Apple products to everyone we possibly could certainly not just a luxury, we think a great product experience is something everyone should have. So we aspire to develop those."</p>

Lots of other details in there, but this is the core.
apple  privacy  google 
may 2019 by charlesarthur
Google Gmail tracks your purchase history (not just from Google); here's how to delete it • CNBC
Todd Haselton and Megan Graham:
<p>Go here to see your own:

“To help you easily view and keep track of your purchases, bookings and subscriptions in one place, we’ve created a private destination that can only be seen by you,” a Google spokesperson told CNBC. “You can delete this information at any time. We don’t use any information from your Gmail messages to serve you ads, and that includes the email receipts and confirmations shown on the Purchase page.”

But there isn’t an easy way to remove all of this. You can delete all the receipts in your Gmail inbox and archived messages. But, if you’re like me, you might save receipts in Gmail in case you need them later for returns. In order to remove them from Google Purchases and keep them in your Gmail inbox, you need to delete them one by one from the Purchases page. It would take forever to do that for years’ worth of purchase information.

Google’s privacy page says that only you can view your purchases. But it says “Information about your orders may also be saved with your activity in other Google services” and that you can see and delete this information on a separate “My Activity” page.

Except you can’t. Google’s activity controls page doesn’t give you any ability to manage the data it stores on Purchases.</p>

There's an even more interesting page: <a href="">Purchases and Subscriptions</a>, which you reach by hitting the back button on the Purchases page. What is Google up to with this? It's tracking purchases and subscriptions from absolutely all over. It might say that it's not using this to serve you ads, but frankly it's hard to think what this is for except that - unless it's being fed to the AI systems, which then make some sort of conclusion about ads. Perhaps it's to *avoid* serving you ads about things you've already bought - in which case "we don't use the information to serve you ads" would just about be true.
google  privacy  purchases 
may 2019 by charlesarthur
Angry Birds, Candy Crush, and a history of mobile game data collection • Vox
Kaitlyn Tiffany:
<p>Something as vague and banal-sounding as “gameplay data” is not as obviously salacious as the types of personal data collection we know we should be scandalized by. Nobody’s getting your Social Security number from Angry Birds. Nobody’s getting your private messages.

“With Facebook, you’re putting a lot more clearly personal information out there, and with a game you’re not really sure what it’s getting from you,” says Chris Hazard, an engineer with experience in gaming and AI, currently the CTO of a startup called Diveplane. “It’s not as front and center.” Basically, it’s not obvious that data about how you play a mobile game can be as useful and as personal as your wedding photos or a rattled-off screed about the Democratic National Committee.

But people should be worried. The intricacies of gameplay data can tell you a lot about what makes people tick, and what’s going on with them — studies have shown that you play games differently when you’re depressed, or dieting. “Nobody gets too upset about games,” David Nieborg says. “But the underlying technology is really powerful. These people are really pushing the technology to the limits where the potential for abuse is massive.”

Developers collect data on who was playing, for how long, how well, and how much money they were spending. It doesn’t seem like sensitive information, and it’s useful mostly because it helps developers target their Facebook ads to find more people who will “monetize well” on these games.</p>
advertising  privacy  games 
may 2019 by charlesarthur
Google Face Match brings privacy debate into the home • Financial Times
Tim Bradshaw:
<p>The “Google Nest” rebranding comes with a prompt for Nest customers to merge their user accounts with their Google profiles. “We want to make sure we are seamlessly integrating these devices,” said Rishi Chandra, vice-president and general manager of Google’s Home and Nest products.

For some customers, merging Nest data could include years of information on a family’s comings and goings, home energy usage and security camera video recordings. Google says it will not use that information for advertising.

“That data will never be used for ads personalisation,” said Mr Chandra, before being corrected by a member of Google’s public relations team. “We can never say never,” he added hastily, “but the commitment we are making is, it is not being used.”

Google is hoping to recapture some of the trust it lost this year when it emerged that its Nest security hub included a secret microphone. Mr Chandra conceded that it was a “mistake” not to inform customers when it went on sale.</p>
google  nest  privacy 
may 2019 by charlesarthur
Hey, Alexa: stop recording me • The Washington Post
Geoffrey Fowler:
<p>“Eavesdropping” is a sensitive word for Amazon, which has battled lots of consumer confusion about when, how and even who is listening to us when we use an Alexa device. But much of this problem is of its own making.

Alexa keeps a record of what it hears every time an Echo speaker activates. It’s supposed to record only with a “wake word” — “Alexa!” — but anyone with one of these devices knows they go rogue. I counted dozens of times when mine recorded without a legitimate prompt. (Amazon says it has improved the accuracy of “Alexa” as a wake word by 50 percent over the past year.)

What can you do to stop Alexa from recording? Amazon’s answer is straight out of the Facebook playbook: “Customers have control,” it says — but the product’s design clearly isn’t meeting our needs. You can manually delete past recordings if you know <a href="">exactly where to look</a> and remember to keep going back. You cannot stop Amazon from making these recordings, aside from muting the Echo’s microphone (defeating its main purpose) or unplugging the darned thing.</p>

As he points out, this is true too about devices that hook into the Alexa system if they're activated (I haven't activated it on Sonos speakers with the capability). Google has changed its defaults: it now doesn't record. Nor does Apple.
privacy  alexa 
may 2019 by charlesarthur
America’s favorite door-locking app has a data privacy problem • OneZero
Sage Lazzaro:
<p>Latch is on a mission to digitize the front door, offering apartment entry systems that forgo traditional keys in favor of being able to unlock entries with a smartphone. The company touts convenience — who wants to fiddle with a metal key? — and has a partnership with UPS, so you can get packages delivered inside your lobby without a doorman. But while it may keep homes private and secure, the same can’t be said about tenants’ personal data.

Latch — which has raised $96m in venture capital funding since launching in 2014, including $70m in its Series B last year — offers three products. Two are entry systems for specific units, and one is for lobbies and other common areas like elevators and garages. The company claims one in 10 new apartment buildings in the U.S. is being built with its products, with leading real estate developers like Brookfield and Alliance Residential now installing them across the country.

Experts say they’re concerned about the app’s privacy policy, which allows Latch to collect, store, and share sensitive personally identifiable information (PII) with its partners and, in some cases, landlords. And while Latch is far from the only tech company with questionable data practices, it’s harder for a tenant to decouple from their building’s door than, say, Instagram: If your landlord installs a product like the keyhole-free Latch R, you’re stuck. The issue of tenant consent is currently coming to a head in New York City, where residents of a Manhattan building are suing their landlord in part over privacy concerns related to the app.</p>

Latch wouldn't be interviewed, but said that it offers smartphone app unlocking, Bluetooth proximity unlocking, or keycard entry. The problem, though, is still one of controlling where the information goes.
door  security  privacy 
may 2019 by charlesarthur
Amazon’s facial-recognition technology is supercharging local police • Washington Post
Drew Harwell:
<p>A grainy picture of someone’s face — captured by a security camera, a social-media account or a deputy’s smartphone — can quickly become a link to their identity, including their name, family and address. More than 1,000 facial-recognition searches were logged last year, said deputies, who sometimes used the results to find a suspect’s Facebook page, visit their home or make an arrest.

But Washington County [where Amazon's system has been used since late 2017] also became ground zero for a high-stakes battle over the unregulated growth of policing by algorithm. Defense attorneys, artificial-intelligence researchers and civil rights experts argue that the technology could lead to the wrongful arrest of innocent people who bear only a resemblance to a video image. [Amazon's system] Rekognition’s accuracy is also hotly disputed, and some experts worry that a case of mistaken identity by armed deputies could have dangerous implications, threatening privacy and people’s lives.

Some police agencies have in recent years run facial-recognition searches against state or FBI databases using systems built by contractors such as Cognitec, IDEMIA and NEC. But the rollout by Amazon has marked perhaps the biggest step in making the controversial face-scanning technology mainstream. Rekognition is easy to activate, requires no major technical infrastructure, and is offered to virtually anyone at bargain-barrel prices. Washington County spent about $700 to upload its first big haul of photos, and now, for all its searches, pays about $7 a month.

It’s impossible to tell, though, just how accurate or effective the technology has been during its first 18 months of real-world tests.</p>

That last bit feels like it ought to have a lot more emphasis, doesn't it? But wow, that is cheap. $7, compared with all the shoe leather and time of hunting down and going through photos.
Amazon  facialrecognition  police  privacy 
may 2019 by charlesarthur
The terrifying potential of the 5G network • The New Yorker
Sue Halpern:
<p>A totally connected world will also be especially susceptible to cyberattacks. Even before the introduction of 5G networks, hackers have breached the control center of a municipal dam system, stopped an Internet-connected car as it travelled down an interstate, and sabotaged home appliances. Ransomware, malware, crypto-jacking, identity theft, and data breaches have become so common that more Americans are afraid of cybercrime than they are of becoming a victim of violent crime. Adding more devices to the online universe is destined to create more opportunities for disruption. “5G is not just for refrigerators,” Spalding said. “It’s farm implements, it’s airplanes, it’s all kinds of different things that can actually kill people or that allow someone to reach into the network and direct those things to do what they want them to do. It’s a completely different threat that we’ve never experienced before.”

Spalding’s solution, he told me, was to build the 5G network from scratch, incorporating cyber defenses into its design. Because this would be a massive undertaking, he initially suggested that one option would be for the federal government to pay for it and, essentially, rent it out to the telecom companies. But he had scrapped that idea. A later draft, he said, proposed that the major telecom companies—Verizon, AT&T, Sprint, and T-Mobile—form a separate company to build the network together and share it. “It was meant to be a nationwide network,” Spalding told me, not a nationalized one. “They could build this network and then sell bandwidth to their retail customers. That was one idea, but it was never that the government would own the network. It was always about, How do we get industry to actually secure the system?”</p>
mobile  privacy  data  5g 
april 2019 by charlesarthur
Facebook sets aside billions of dollars for a potential FTC fine • The Washington Post
Elizabeth Dwoskin and Tony Romm:
<p>Facebook on Wednesday said it would set aside $3bn to cover costs in its ongoing investigation with the US Federal Trade Commission over the social media company’s privacy practices, as its recent scandals take a toll on its balance sheet in a big way.

That number, which the company said could ultimately range between $3bn and $5bn, correlates with the size of the fine the agency is expected to levy against the tech giant and would represent the largest the FTC has ever imposed.

Facebook’s decision to set aside billions of dollars comes as the company continues negotiating with the FTC on a settlement that would end its investigation. As part of those talks, federal officials have sought to force Facebook to pay a fine into the billions of dollars, sources previously told the Post. That would set a new record for the largest fine imposed by the FTC for a repeat privacy violation, after Google had to pay $22.5m a few years ago.

The FTC came to determine that violations could result in a multi-billion dollar fine after computing the number of times Facebook breached a 2011 order with the government to improve its privacy practices.</p>

This is going to be quite a thing to watch. Will Facebook, like Google, be able to shrug it off and move on? If the FTC hands down a fine of that size it's going to lead a lot of news bulletins. That will get a lot of people's attention.
facebook  privacy  fine 
april 2019 by charlesarthur
Facebook uploaded 1.5 million users' email contacts without permission • Business Insider
Rob Price:
<p>Facebook harvested the email contacts of 1.5 million users without their knowledge or consent when they opened their accounts.

Business Insider has learned that since May 2016, the social networking company has collected the contact lists of 1.5 million users new to the social network. The Silicon Valley company says they were "unintentionally uploaded to Facebook," and it is now deleting them. You can read Facebook's full statement below.

The revelation comes after a security researcher noticed that Facebook was asking some users to enter their email passwords when they signed up for new accounts to verify their identities, in a move widely condemned by security experts. Business Insider then discovered that if you did enter your email password, a message popped up saying it was "importing" your contacts, without asking for permission first.

At the time, it wasn't clear what was actually happening — but a Facebook spokesperson has now confirmed that 1.5 million people's contacts were collected this way, and fed into Facebook's systems, where they were used to build Facebook's web of social connections and recommend friends to add. It's not immediately clear if these contacts were also used for ad-targeting purposes. [Later: it did.]

Facebook says that prior to May 2016, it offered an option to verify a user's account and voluntarily upload their contacts at the same time. However, Facebook says, it changed the feature, and the text informing users that their contacts would be uploaded was deleted — but the underlying functionality was not. Facebook didn't access the content of users' emails, the spokesperson added.</p>

Notice how Facebook's errors always fall in favour of it getting more information, and using it to target ads - never in favour of getting less information and reducing ad loads? At this point it looks sociopathic.
facebook  email  privacy 
april 2019 by charlesarthur
Tracking phones, Google is a dragnet for the police • The New York Times
Jennifer Valentino-DeVries:
<p>When detectives in a Phoenix suburb arrested a warehouse worker in a murder investigation last December, they credited a new technique with breaking open the case after other leads went cold.

The police told the suspect, Jorge Molina, they had data tracking his phone to the site where a man was shot nine months earlier. They had made the discovery after obtaining a search warrant that required Google to provide information on all devices it recorded near the killing, potentially capturing the whereabouts of anyone in the area.

Investigators also had other circumstantial evidence, including security video of someone firing a gun from a white Honda Civic, the same model that Mr. Molina owned, though they could not see the license plate or attacker.

But after he spent nearly a week in jail, the case against Mr. Molina fell apart as investigators learned new information and released him. Last month, the police arrested another man: his mother’s ex-boyfriend, who had sometimes used Mr. Molina’s car.

The warrants, which draw on an enormous Google database employees call Sensorvault, turn the business of tracking cellphone users’ locations into a digital dragnet for law enforcement. In an era of ubiquitous data gathering by tech companies, it is just the latest example of how personal information — where you go, who your friends are, what you read, eat and watch, and when you do it — is being used for purposes many people never expected. As privacy concerns have mounted among consumers, policymakers and regulators, tech companies have come under intensifying scrutiny over their data collection practices.</p>

Hello, Google's Location History feature - which <a href="">will collect data about your location all the time</a> (on Android) or when allowed (on iOS).

See yours: <a href=""></a>.
google  privacy  surveillance 
april 2019 by charlesarthur
Amazon workers are listening to what you tell Alexa • Bloomberg
Matt Day, Giles Turner, and Natalia Drozdiak:
<p>Amazon employs thousands of people around the world to help improve the Alexa digital assistant powering its line of Echo speakers. The team listens to voice recordings captured in Echo owners’ homes and offices. The recordings are transcribed, annotated and then fed back into the software as part of an effort to eliminate gaps in Alexa’s understanding of human speech and help it better respond to commands. 

The Alexa voice review process, described by seven people who have worked on the program, highlights the often-overlooked human role in training software algorithms. In marketing materials Amazon says Alexa “lives in the cloud and is always getting smarter.” But like many software tools built to learn from experience, humans are doing some of the teaching.

The team comprises a mix of contractors and full-time Amazon employees who work in outposts from Boston to Costa Rica, India and Romania, according to the people, who signed nondisclosure agreements barring them from speaking publicly about the program. They work nine hours a day, with each reviewer parsing as many as 1,000 audio clips per shift, according to two workers based at Amazon’s Bucharest office, which takes up the top three floors of the Globalworth building in the Romanian capital’s up-and-coming Pipera district.</p>

That is a LOT of listening. Is this another "not really AI" example?
amazon  privacy  surveillance  alexa 
april 2019 by charlesarthur
Does Google meet its users’ expectations around consumer privacy? This news industry research says no » Nieman Journalism Lab
Jason Kint:
<p>Digital Content Next surveyed a nationally representative sample to find out what people expect from Google — and, as with a similar study we conducted last year about Facebook, the results were unsettling.

Our findings show that many of Google’s data practices deviate from consumer expectations. We find it even more significant that consumer’s expectations are at an all-time low even after 2018, a year in which awareness around consumer privacy reached peak heights.

The results of the study are consistent with our Facebook study: People don’t want surveillance advertising. A majority of consumers indicated they don’t expect to be tracked across Google’s services, let alone be tracked across the web in order to make ads more targeted.

Q: Do you expect Google to collect data about a person’s activities on Google platforms (e.g. Android and Chrome) and apps (e.g. Search, YouTube, Maps, Waze)?<br />YES: 48% / NO: 52%

Q: Do you expect Google to track a person’s browsing across the web in order to make ads more targeted?<br />YES: 43% / NO: 57%

Nearly two out of three consumers don’t expect Google to track them across non-Google apps, offline activities from data brokers, or via their location history.</p>

Don't expect – or perhaps aren't aware that it's capable of doing so.
google  privacy  surveillance 
april 2019 by charlesarthur
Microsoft, Facebook, trust and privacy • Benedict Evans
Evans finds strong parallels, 25-odd years apart:
<p>much like the [creators of the] Microsoft macro viruses, the ‘bad actors’ on Facebook did things that were in the manual. They didn’t prise open a locked window at the back of the building - they knocked on the front door and walked in. They did things that you were supposed to be able to do, but combined them in an order and with malign intent that hadn’t really been anticipated.

It’s also interesting to compare the public discussion of Microsoft and of Facebook before these events. In the 1990s, Microsoft was the ‘evil empire’, and a lot of the narrative within tech focused on how it should be more open, make it easier for people to develop software that worked with the Office monopoly, and make it easier to move information in and out of its products. Microsoft was ‘evil’ if it did anything to make life harder for developers. Unfortunately, whatever you thought of this narrative, it pointed in the wrong direction when it came to this use case. Here, Microsoft was too open, not too closed.

Equally, in the last 10 years many people have argued that Facebook is too much of a ‘walled garden’ - that is is too hard to get your information out and too hard for researchers to pull information from across the platform. People have argued that Facebook was too restrictive on how third party developers could use the platform. And people have objected to Facebook's attempts to enforce the single real identities of accounts. As for Microsoft, there may well have been justice in all of these arguments, but also as for Microsoft, they pointed in the wrong direction when it came to this particular scenario. For the Internet Research Agency, it was too easy to develop for Facebook, too easy to get data out, and too easy to change your identity. The walled garden wasn’t walled enough. </p>
security  facebook  microsoft  privacy 
april 2019 by charlesarthur
Guardian Mobile Firewall aims to block the apps that grab your data • Fast Company
Glenn Fleishman:
<p>A New York Times report in December focused on location data being shared with third-party organizations and tied to specific users; in February, a Wall Street Journal investigation reported that app makers were sharing events as intimate as ovulation cycles and weight with Facebook. But no matter how alarmed you are by such scenarios, there hasn’t been much you could do. Mobile operating systems don’t let you monitor your network connection and block specific bits of data from leaving your phone.

That led Strafach and his colleagues at Sudo Security Group to take practical action. “We are aware of almost every active tracker that is in the App Store,” he says. Building on years of research, Sudo is putting the finishing touches on an iPhone app called Guardian Mobile Firewall, a product that combines a virtual private network (VPN) connection with a sophisticated custom firewall managed by Sudo.

It looks like Guardian will be the first commercial entry into a fresh category of apps and services that look not only just for malicious behavior, but also what analysis shows could be data about you leaving your phone without your explicit permission. It will identify and variably block all kinds of leakage, based on Sudo’s unique analysis of App Store apps.

Sudo is <a href="">taking preorders for the app in the Apple Store</a> and plans a full launch no later than June. It will debut on iOS, and required some lengthy conversations with Apple’s app reviewers as Sudo laid out precisely what part of its filtering happens in the app (none of it) and what happens at its cloud-based firewall (everything). The price will be in the range of a high-end, unlimited VPN—about $8 or $9 a month. Sudo plans an expanded beta program in April, followed by a production release that will be automatically delivered to preorder customers.</p>

You'd need to be pretty worried about data grabs to pay that amount, wouldn't you? That's nearly a music subscription. Is your data *that* valuable? Wouldn't an adblocker be a lot cheaper?
sudo  data  privacy 
march 2019 by charlesarthur
Android Q will kill clipboard manager apps in the name of privacy • Android Police
Ryan Whitwam:
<p>Privacy is a primary focus of Android Q for Google, and that may spell trouble for some of your favorite apps. In Android Q, Google has restricted access to clipboard data <a href="">as previously rumoured</a>, which means most apps that currently aim to manage that data won't work anymore.

Having an app that sits in the background and collects clipboard data can be a handy way to recall past snippets of data. However, that same mechanism could be used for malicious intent. Google's playing it safe by restricting access to clipboard data to input method editors (you might know those as keyboards). Foreground apps that have focus will also be able to access the clipboard, but background apps won't.</p>

iOS and Android are on a very slow collision course to having the same approach to security.
androidq  privacy  clipboard 
march 2019 by charlesarthur
Eero is now officially part of Amazon, pledges to keep network data private • The Verge
Nilay Patel:
<p>concerns that Amazon would somehow make expanded use of Eero network data have been growing ever since the deal was announced — obviously, your Wi-Fi router can see all your network traffic, and Eero’s system in particular relies on a cloud service for network optimization and other features. But Eero is committed to keeping that data private, said [Eero CEO Nick] Weaver, who also <a href="">published a blog post</a> this morning that explicitly promises Eero will never read any actual network traffic.

“If anything, we’re just going to strengthen our commitment to both privacy and security,” Weaver told us. “We’ve got some pretty clear privacy principles that we’ve used for developing all of our products, that are the really the underpinnings of everything. Those aren’t going to change.”

Those three principles, as laid out in the blog post, are that customers have a “right to privacy” that includes transparency around what data is being collected and control over that data; that network diagnostic information will only be collected to improve performance, security, and reliability; and that Eero will “actively minimize” the amount of data it can access, while treating the data it does collect with “the utmost security.”</p>

Never is a long time; there was a time when Nest was never going to be integrated into Google. A more proximate worry for a smaller group of people is whether it's going to keep advertising on podcasts.
amazon  eero  data  privacy 
march 2019 by charlesarthur
Zuckerberg’s so-called shift toward privacy • NY Times
Zeynep Tufekci on Mark Zuckerberg's latest splurge of intent:
<p>what we really need — and it is not clear what Facebook has in mind — is privacy for true person-to-person messaging apps, not messaging apps that also allow for secure mass messaging.

At the moment, critics can (and have) held Facebook accountable for its failure to adequately moderate the content it disseminates — allowing for hate speech, vaccine misinformation, fake news and so on. Once end-to-end encryption is put in place, Facebook can wash its hands of the content. We don’t want to end up with all the same problems we now have with viral content online — only with less visibility and nobody to hold responsible for it.

It’s also worth noting that encrypted messaging, in addition to releasing Facebook from the obligation to moderate content, wouldn’t interfere with the surveillance that Facebook conducts for the benefit of advertisers. As Mr. Zuckerberg <a href="">admitted in an interview</a> after he posted his plan, Facebook isn’t “really using the content of messages to target ads today anyway.” In other words, he is happy to bolster privacy when doing so would decrease Facebook’s responsibilities, but not when doing so would decrease its advertising revenue.

Another point that Mr. Zuckerberg emphasized in his post was his intention to make Facebook’s messaging platforms, Messenger, WhatsApp and Instagram, “interoperable.” He described this decision as part of his “privacy-focused vision,” though it is not clear how doing so — which would presumably involve sharing user data — would serve privacy interests.

Merging those apps just might, however, serve Facebook’s interest in avoiding antitrust remedies. Just as regulators are realizing that allowing Facebook to gobble up all its competitors (including WhatsApp and Instagram) may have been a mistake, Mr. Zuckerberg decides to scramble the eggs to make them harder to separate into independent entities. What a coincidence.

In short, the few genuinely new steps that Mr. Zuckerberg announced on Wednesday seem all too conveniently aligned with Facebook’s needs, whether they concern government regulation, public scandal or profitability.</p>

The European Commission is hopping mad about the idea that Facebook would roll itself, Instagram and WhatsApp together, having promised it wouldn't. Fines may follow.
facebook  privacy 
march 2019 by charlesarthur
Facebook won’t let you opt-out of its phone number ‘look up’ setting • TechCrunch
Zack Whittaker:
<p>Users are complaining that the phone number Facebook hassled them to use to secure their account with two-factor authentication has also been associated with their user profile — which anyone can use to “look up” their profile.

Worse, Facebook doesn’t give you an option to opt out.

Last year, Facebook was forced to admit that after months of pestering its users to switch on two-factor by signing up their phone number, it was also using those phone numbers to target users with ads. But some users are finding out just now that Facebook’s default setting allows everyone — with or without an account — to look up a user profile based off the same phone number previously added to their account.

The recent hubbub began today after a <a href="">tweet</a> by Jeremy Burge blew up, criticizing Facebook’s collection and use of phone numbers, which he likened to “a unique ID that is used to link your identity across every platform on the internet.”</p>

Facebook has handled this badly because it handles anything that gets it more data - especially data tied to you individually - badly: such data is the thing it wants above all others, and it will not relent in using it. Last year, the complaint was that if you use your phone number for 2FA, it pings you - even if you have all "notify me" settings turned off - to say that things are happening on your account.

You can however use a code generator program such as Authy or Google Authenticator for the 2FA part.
facebook  privacy  phonenumber  ethics  security  hacking 
march 2019 by charlesarthur
Revealed: Facebook’s global lobbying against data privacy laws • The Guardian
Carole Cadwalladr and Duncan Campbell:
<p>The documents appear to emanate from a court case against Facebook by the app developer Six4Three in California, and reveal that Sandberg considered European data protection legislation a “critical” threat to the company. A memo written after the Davos economic summit in 2013 quotes Sandberg describing the “uphill battle” the company faced in Europe on the “data and privacy front” and its “critical” efforts to head off “overly prescriptive new laws”.

Most revealingly, it includes details of the company’s “great relationship” with Enda Kenny, the Irish prime minister at the time, one of a number of people it describes as “friends of Facebook”. Ireland plays a key role in regulating technology companies in Europe because its data protection commissioner acts for all 28 member states. The memo has inflamed data protection advocates, who have long complained about the company’s “cosy” relationship with the Irish government.

The memo notes Kenny’s “appreciation” for Facebook’s decision to locate its headquarters in Dublin and points out that the new proposed data protection legislation was a “threat to jobs, innovation and economic growth in Europe”. It then goes on to say that Ireland is poised to take on the presidency of the EU and therefore has the “opportunity to influence the European Data Directive decisions”. It makes the extraordinary claim that Kenny offered to use the “significant influence” of the EU presidency as a means of influencing other EU member states “even though technically Ireland is supposed to remain neutral in this role”.</p>

Campbell's presence on the byline is worth noting: he's a very well-connected highly experienced journalist who has done a lot on defence and spying in the past. If his contacts have these emails, that's interesting.
facebook  privacy  ireland  politics  gdpr 
march 2019 by charlesarthur
Why it still feels like Facebook is tracking you, even after all the privacy measures • WSJ
Katherine Bindley:
<p>If we take advantage of all these privacy controls, it shouldn’t still feel as if Facebook is spying on us, right? We shouldn’t see so many ads that seem so closely tied to our activity on our phones, on the internet or in real life.

The reality? I took all those steps months ago, from turning off location services to opting out of Facebook and Instagram ads tied to off-site behavior. I told my iPhone to “limit ad tracking.” Yet I continue to see eerily relevant ads.

I tested my suspicion by downloading the What to Expect pregnancy app. I didn’t so much as share an email address, yet in less than 12 hours, I got a maternity-wear ad in my Instagram feed. I’m not pregnant, nor otherwise in a target market for maternity-wear. When I tried to retrace the pathway, discussing the issue with the app’s publisher, its data partners, the advertiser and Facebook itself—dozens of emails and phone calls—not one would draw a connection between the two events. Often, they suggested I ask one of the other parties.

Everyday Health Group, which owns What to Expect, said it has no business relationship with Hatch, the maternity brand whose ad I received. Facebook initially said there could be any number of reasons I might have seen the ad—but that downloading the app couldn’t be one of them.</p>

Bindley goes into quite some detail about how location tracking persists, and "Why I'm Seeing an Ad" doesn't explain why, and you might see ads on Facebook based on what you do outside it even if you opt out of seeing ads based on what you do outside Facebook.

It's like the vampire squid of data.
privacy  facebook 
february 2019 by charlesarthur
You give apps sensitive personal information. Then they tell Facebook • WSJ
Sam Schechner and Mark Secada:
<p>Apple Inc. and Alphabet Inc.’s Google, which operate the two dominant app stores, don’t require apps to disclose all the partners with whom data is shared. Users can decide not to grant permission for an app to access certain types of information, such as their contacts or locations. But these permissions generally don’t apply to the information users supply directly to apps, which is sometimes the most personal.

In the Journal’s testing, Instant Heart Rate: HR Monitor, the most popular heart-rate app on Apple’s iOS, made by California-based Azumio, sent a user’s heart rate to Facebook immediately after it was recorded.

Flo Health Inc.’s Flo Period & Ovulation Tracker, which claims 25 million active users, told Facebook when a user was having her period or informed the app of an intention to get pregnant, the tests showed.

A real-estate app owned by Move Inc., a subsidiary of Wall Street Journal parent News Corp, sent the social network the location and price of listings that a user viewed, noting which ones were marked as favorites, the tests showed.

None of those apps provided users any apparent way to stop that information from being sent to Facebook.

Facebook said some of the data sharing uncovered by the Journal’s testing appeared to violate its business terms, which instruct app developers not to send it “health, financial information or other categories of sensitive information.” Facebook said it is telling apps flagged by the Journal to stop sending information its users might regard as sensitive. The company said it may take additional action if the apps don’t comply…

…Flo Health’s privacy policy says it won’t send “information regarding your marked cycles, pregnancy, symptoms, notes and other information that is entered by you and that you do not elect to share” to third-party vendors.

Flo initially said in a written statement that it doesn’t send “critical user data” and that the data it does send Facebook is “depersonalized” to keep it private and secure.

The Journal’s testing, however, showed sensitive information was sent with a unique advertising identifier that can be matched to a device or profile. A Flo spokeswoman subsequently said the company will “substantially limit” its use of external analytics systems while it conducts a privacy audit.</p>

Just astonishing. Facebook can't help itself; the companies can't help themselves. They're all in thrall to the promise, whether real or not, that gathering more personal data will lead to riches through targeted ads. When in reality the ads just creep us out. And this may be illegal under the GDPR, in Europe at least.
facebook  privacy  apps  data  gdpr 
february 2019 by charlesarthur
When kids Google themselves • The Atlantic
Taylor Lorenz:
<p>Allie was in fourth grade the first time she Googled herself. Like Ellen, she wasn’t expecting to find anything, since she doesn’t yet have her own social-media accounts. Google turned up just a few photos, but she was shocked that there was anything at all. She immediately became hyperaware of the image her mother was building for her on Instagram and Facebook. “My parents have always posted about me,” she said. “I was basically fine with it … then I realized I was making an impression and I was an actual person online too, through her page.”

Not all kids react poorly to finding out they’ve been living an unwitting life online. Some are thrilled. In fourth grade, Nate searched his name and discovered that he was mentioned in a news article about his third-grade class making a giant burrito. “I didn’t know,” he said. “I was surprised, really surprised.” But he was pleased with his newfound clout. “It made me feel famous … I got to make new friends by saying, ‘Oh, I’m in a newspaper [online],’” he said. Ever since, he has Googled himself every few months, hoping to find things.

Natalie, now 13, said that in fifth grade she and her friends competed with one another over the amount of information about themselves on the internet. “We thought it was so cool that we had pics of ourselves online,” she said. “We would brag like, ‘I have this many pics of myself on the internet.’ You look yourself up, and it’s like, ‘Whoa, it’s you!’ We were all shocked when we realized we were out there. We were like, ‘Whoa, we’re real people.’”

Natalie’s parents are stringent about not posting photos of her to social media, so there are only a handful of photos of her out there, but she yearns for more. “I don’t want to live in a hole and only have two pics of me online. I want to be a person who is a person. I want people to know who I am,” she said.</p>

We all want to be someone, don't we? And this is the first way that children discover that they <em>are</em>, to other people - maybe to the people they want to impress, which is their peers. Comes with a word I hadn't seen before: "sharenting" (parents who share too much).
privacy  children  parenting 
february 2019 by charlesarthur
Differential privacy: an easy case • Substack
Mark Hansen:
<p>By law, the Census Bureau is required to keep our responses to its questionnaires confidential. And so, over decades, it has applied several “disclosure avoidance” techniques when it publishes data — these have been meticulously catalogued by Laura McKenna, going back to the 1970 census.

But for 2020, the bureau will instead release its data tables using a “formal privacy” framework known as “differential privacy.”

A unique feature of this new approach is that it explicitly quantifies privacy loss and provides mathematically provable privacy guarantees for those whose data are published as part of the bureau’s tables. 

Differential privacy is simply a mathematical definition of privacy. While there are legal and ethical standards for protecting our personal data, differential privacy is specifically designed to address the risks we face in a world of “big data” and “big computation.”

Given its mathematical origins, discussions of differential privacy can become technical very quickly.</p>

Apple and Google use this to make it harder to de-anonymise personal data. This is quite a long post, but it explains it while sticking to quite simple maths.
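The standard easy case (and the one usually used to introduce the idea — I'm not claiming it is the bureau's actual 2020 mechanism) is "randomized response": each respondent adds their own noise before answering, so any single answer is plausibly deniable, yet the noise can be averaged out of the aggregate. A minimal sketch, with parameter names of my own choosing:

```python
import random

def randomized_response(truth: bool, p_honest: float = 0.75) -> bool:
    """Answer honestly with probability p_honest; otherwise answer
    uniformly at random. No single response can be trusted, which is
    exactly what protects the individual."""
    if random.random() < p_honest:
        return truth
    return random.random() < 0.5

def estimate_rate(responses: list, p_honest: float = 0.75) -> float:
    """De-noise the aggregate. Since
    E[reported] = p_honest * rate + (1 - p_honest) * 0.5,
    we can invert that expression to recover the population rate."""
    reported = sum(responses) / len(responses)
    return (reported - (1 - p_honest) * 0.5) / p_honest
```

With enough respondents the estimate converges on the true population rate even though every individual answer is deniable — and the `p_honest` knob is a crude version of the explicit, quantified privacy-loss parameter the formal framework provides.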
privacy  data  bigdata  census 
february 2019 by charlesarthur
Internet censorship: Facebook, Patreon will always be frustrating • Bloomberg
Tyler Cowen:
<p>Facebook recently has devoted a lot of resources to regulating speech on its platform. Yet undesired uses of the platform hardly have gone away, especially outside the U.S. Furthermore, the need for human judgment makes algorithms increasingly costly and hard to scale. As Facebook grows bigger and reaches across more regions and languages, it becomes harder to find the humans who can apply what Facebook considers to be the proper standards.

I’d like to suggest a simple trilemma. When it comes to private platforms and speech regulation, you can choose two of three: scalability, effectiveness and consistency. You cannot have all three. Furthermore, this trilemma suggests that we — whether as users, citizens or indeed managers of the platforms themselves — won’t ever be happy with how speech is regulated on the internet.

One view, which may appear cynical, is that the platforms are worth having, so they should appease us by at least trying to regulate effectively, even though both of us know they won’t really succeed. Circa 2019, I don’t see a better solution. Another view is that we’d be better off with how things were a few years ago, when platform regulation of speech was not such a big issue. After all, we Americans don’t flip out when we learn that Amazon sells copies of “Mein Kampf.”

The problem is that once you learn about what you can’t have — speech regulation that is scalable, consistent and hostile to bad agents — it is hard to get used to that fact. Going forward, we’re likely to see platform companies trying harder and harder, and their critics getting louder and louder.</p>

(Via Nathan Taylor's <a href="">fine roundup</a>.)
privacy  surveillance  culture  censorship 
february 2019 by charlesarthur
I blocked Amazon, Facebook, Google, Microsoft, and Apple • Gizmodo
Kashmir Hill:
<p>I am using a Linux laptop made by a company named Purism and a Nokia feature phone on which I am relearning the lost art of T9 texting…

…in preparation for the week, I export all my contacts from Google, which amounts to a shocking 8,000 people. I have also whittled down the over 1,500 contacts in my iPhone to 143 people for my Nokia, or the number of people I actually talk to on a regular basis, which is incredibly close to Dunbar’s number.

I wind up placing a lot of phone calls this week, because texting is so annoying on the Nokia’s numbers-based keyboard. I find people often pick up on the first ring out of concern; they’re not used to getting calls from me.

I don’t think I could have done this cold turkey.

On the first day of the block, I drive to work in silence because my rented Ford Fusion’s “SYNC” entertainment system is powered by Microsoft. Background noise in general disappears this week because YouTube, Apple Music, and our Echo are all banned—as are Netflix, Spotify, and Hulu, because they rely on AWS and the Google Cloud to get their content to users.

The silence causes my mind to wander more than usual. Sometimes this leads to ideas for my half-finished zombie novel or inspires a new question for investigation. But more often than not, I dwell on things I need to do.

Many of these things are a lot more challenging as a result of the experiment, such as when I record an interview with Alex Goldman of the podcast Reply All about Facebook and its privacy problems.

I live in California, and Alex is in New York; we would normally use Skype, but that’s owned by Microsoft, so instead we talk by phone and I record my end with a handheld Zoom recorder. That works fine, but when it comes time to send the 386 MB audio file to Alex, I realize I have no idea how to send a huge file over the internet.</p>

So essentially like living in 1995. Take it from a survivor: we managed. (OK, there weren't Linux laptops. But Windows and MacOS at the time were pretty much the same as Linux is now.)
internet  privacy  data  tech 
february 2019 by charlesarthur
Facebook paid people $20 monthly for access to their digital activity. Why did they sign up? • Slate
Shannon Palus:
<p>One user, who identified themselves as 32 years old and reported that they had netted $30 in gift cards with the app, told me via email, “I’m not too worried about that data because I’m almost certain these companies collect that stuff anyway,” and that, “Google and Amazon know a lot already.” The user explained they do a lot of little paid tasks to earn money, like downloading apps or completing surveys. It isn’t significant, they said, but acts as a little bonus to their household income, which they told me is $60,000 a year. “Lately most of my earnings have gone to simple things (groceries, MetroCards, date night),” they wrote.

Others on Reddit expressed similar sentiments. “I have been enjoying the small amount of money. It helps me buy frivolous things like new games which I may not get as often,” wrote another user, who said they were perplexed as to why reporters like me were “asking about why I would give up so much data.” They wrote they thought the program was upfront in “clearly stat[ing] they farm data for money.” (Perhaps fittingly, when I messaged this person for more information, they offered to answer for $25—a deal which journalistic ethics compelled me to decline.)

Not everyone seemed as unquestioningly enthusiastic about the trade. One user, who said they were 40 (which put them over the age that Facebook was recruiting for), posted that the VPN was “quite obviously some shady shit,” and said they had purposefully installed it on an old junk phone they didn’t use anymore.</p>

Their bigger worry was that the program would get shut down. And guess what!
facebook  privacy 
february 2019 by charlesarthur
EU orders recall of children's smartwatch over severe privacy concerns • ZDNet
Catalin Cimpanu:
<p>For the first time, EU authorities have announced plans to recall a product from the European market because of a data privacy issue.

The product is Safe-KID-One, a children's smartwatch produced by German electronics vendor ENOX.

According to the company's website, the watch comes with a trove of features, such as a built-in GPS tracker, built-in microphone and speaker, a calling and SMS text function, and a companion Android mobile app that parents can use to keep track and contact their children.

The product is what most parents regularly look for in a modern smartwatch, but in a RAPEX (Rapid Alert System for Non-Food Products) alert published last week and spotted by Dutch news site Tweakers, European authorities ordered a mass recall of all smartwatches from end users, citing severe privacy lapses.

"The mobile application accompanying the watch has unencrypted communications with its backend server and the server enables unauthenticated access to data," said authorities in the RAPEX alert. "As a consequence, the data such as location history, phone numbers, serial number can easily be retrieved and changed."

On top of this, authorities also said that "a malicious user can send commands to any watch making it call another number of his choosing, can communicate with the child wearing the device or locate the child through GPS."</p>

But it gets worse: <a href="">the Android app is owned not by Enox, but by a Chinese developer</a>, so the data loops through Chinese servers.
smartwatch  children  privacy  security 
february 2019 by charlesarthur
One of the biggest at-home DNA testing companies is working with the FBI • Buzzfeed News
Salvador Hernandez:
<p>Family Tree DNA, one of the largest private genetic testing companies whose home-testing kits enable people to trace their ancestry and locate relatives, is working with the FBI and allowing agents to search its vast genealogy database in an effort to solve violent crime cases, BuzzFeed News has learned.

Federal and local law enforcement have used public genealogy databases for more than two years to solve cold cases, including the landmark capture of the suspected Golden State Killer, but the cooperation with Family Tree DNA and the FBI marks the first time a private firm has agreed to voluntarily allow law enforcement access to its database.

While the FBI does not have the ability to freely browse genetic profiles in the library, the move is sure to raise privacy concerns about law enforcement gaining the ability to look for DNA matches, or more likely, relatives linked by uploaded user data.

For law enforcement officials, the access could be the key to unlocking murders and rapes that have gone cold for years, opening up what many argue is the greatest investigative tactic since the advent of DNA identification. For privacy advocates, the FBI’s new ability to match the genetic profiles from a private company could set a dangerous precedent in a world where DNA test kits have become as common as a Christmas stocking stuffer…

…In December 2018, the company changed its terms of service to allow law enforcement to use the database to identify suspects of “a violent crime,” such as homicide or sexual assault, and to identify the remains of a victim.</p>

Ah, good old TOS. And yet: the FBI doesn't hold this; it gets to access it just like a normal user, and to get more has to provide a court order or search warrant. This isn't actually the gigantic intrusion it might look like.
privacy  biometrics  fbi  dna 
february 2019 by charlesarthur
Apple is a hypocrite on data privacy • The Atlantic
Ian Bogost:
<p>[In revoking Facebook’s enterprise developer certificate,] Apple didn’t take a position on Facebook’s creation of a paid “research” program to extract data from users. It enforced the terms of a licensing agreement; appearing to fight for user privacy is just a side effect. Apple is flexing its contract-law muscle, not its privacy muscle, and gaining a publicity win in the process. Crucially, Apple didn’t ban Facebook from the App Store or the iPhone platform: You can still download and use Messenger.

Facebook, for its part, maintains that the data-collection activity its Research app undertook was above board and not at all duplicitous. Unlike previous controversies about how Facebook shared user data with developers like Cambridge Analytica or foreign governments, little about the research program was hidden…

…Safari, the web browser that comes with every iPhone, is set up by default to route web searches through Google. For this privilege, Google reportedly paid Apple $9bn in 2018, and as much as $12bn this year. All those searches help funnel out enormous volumes of data on Apple’s users, from which Google extracts huge profits. Apple might not be directly responsible for the questionable use of that data by Google, but it facilitates the activity by making Google its default search engine, enriching itself substantially in the process.

The same could be said for the apps Apple distributes. Companies like Google and Facebook get access to iPhone users by offering their apps—Messenger, Gmail, Google Maps, and so on—for download from the Apple App Store. Most cost consumers nothing, because they exist to trade software services, like email or mapping, for data. That business model helped stimulate the data-privacy dystopia we now occupy.</p>

Occasionally I include an article that I disagree with, and I disagree with this one. Bogost is holding Apple to an impossible standard here. It couldn't know what Facebook was doing with the Enterprise Certificate or the app - to monitor that really *would* be an invasion of privacy, both Facebook's and the users'. That was a contractual violation, and Facebook was punished for it. Setting Google as the Safari default is a commercial decision, but you don't have to use it; and Google obeys privacy rules, as far as we can tell. The "privacy dystopia" is our own fault, but you can actually avoid it by not using Facebook or Google (as much as you can).

For Apple to ban Facebook and Google would open up the huge question: what form of "privacy" is sufficient? If people consent to something, what locus does Apple have to deny that? It's providing a platform. You can give people electricity; some will use it for light, and others will electrocute themselves.
apple  privacy 
january 2019 by charlesarthur
I cut Google out of my life. It screwed up everything • Gizmodo
Kashmir Hill set up a VPN to block all of Google's 8m-odd IP addresses, to see what life looked like:
<p>Google, like Amazon, is woven deeply into the infrastructure of online services and other companies’ offerings, which is frustrating to all the connected devices in my house.

“Your smart home pings Google at the same time every hour in order to determine whether or not it’s connected to the internet,” Dhruv tells me. “Which is funny to me because these devices’ engineers decided to determine connectivity to the entire internet based on the uptime of a single company. It’s a good metaphor for how far the internet has strayed from its original promise to decentralize control.”

In some cases, the Google block means apps won’t work at all, like Lyft and Uber [which use it for maps], or Spotify, whose music is hosted in Google Cloud. The more frequent effect of the Google block though is that the internet itself slows down dramatically for me.

Most of the websites I visit have frustratingly long load times because so many of them rely on resources from Google and get confused when my computer won’t let them talk to the company’s servers. On Airbnb, photos won’t load. New York Times articles won’t appear until the site has tried (and failed) to load Google Analytics, Google Pay, Google News, Google ads, and a Doubleclick tracker.

As I sit staring at my screen and drumming my fingers, I get flashbacks to computing via dial-up in the ’90s, when I used to read a book while waiting for websites to open. It’s amazing to see how often sites are trying to serve trackers, ads, and analytics from Google before their own content. </p>

Clever idea for a story. Facebook next?
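The mechanics are simple in principle: gather the published netblocks and refuse to route to any address inside them. A hedged sketch of just the membership test, using Python's ipaddress module with a tiny illustrative subset of ranges (the real list Hill blocked ran to roughly eight million addresses, and Google's netblocks change over time):

```python
import ipaddress

# Illustrative subset only — a handful of well-known Google netblocks,
# not the full published list.
GOOGLE_RANGES = [ipaddress.ip_network(n) for n in (
    "8.8.8.0/24",        # Google Public DNS
    "64.233.160.0/19",
    "172.217.0.0/16",
)]

def is_google(addr: str) -> bool:
    """True if the address falls inside any listed Google netblock."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in GOOGLE_RANGES)
```

A VPN or firewall rule set built from a list like this is what makes the block total: it catches not just google.com but every third-party site quietly loading analytics, fonts or ads from Google's infrastructure — which is why so much of the web broke for her.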
google  internet  privacy 
january 2019 by charlesarthur
Facebook pays teens to install VPN that spies on them • Techcrunch
Josh Constine:
<p>Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe.

[Update 11:20pm PT: Facebook now tells TechCrunch it will shut down the iOS version of its Research app in the wake of our report. The rest of this article has been updated to reflect this development.]

Facebook’s Research program will continue to run on Android. We’re still awaiting comment from Apple on whether Facebook officially violated its policy and if it asked Facebook to stop the program. As was the case with Facebook removing Onavo Protect from the App Store last year, Facebook may have been privately told by Apple to voluntarily remove it.

We asked Guardian Mobile Firewall’s security expert Will Strafach to dig into the Facebook Research app, and he told us that “If Facebook makes full use of the level of access they are given by asking users to install the Certificate, they will have the ability to continuously collect the following types of data: private messages in social media apps, chats from in instant messaging apps – including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed.”</p>

Just astonishing. Facebook truly is the scorpion on the back of the frog; it just can't help itself. Apple could justifiably remove its Enterprise Certificate (which it used to do this end-run around Apple's privacy measures) because this breaches its rules.

Meanwhile, it's a reminder that VPNs only offer privacy from those who aren't controlling the VPN.
Facebook  onavo  vpn  privacy  apple 
january 2019 by charlesarthur
FTC considers record-setting fine against Facebook for privacy violations • The Washington Post
Tony Romm and Elizabeth Dwoskin:
<p>US regulators have met to discuss imposing a record-setting fine against Facebook for violating a legally binding agreement with the government to protect the privacy of its users' personal data, according to three people familiar with the deliberations but not authorised to speak on the record.

The fine under consideration at the Federal Trade Commission, a privacy and security watchdog that began probing Facebook last year, would mark the first major punishment levied against Facebook in the United States since reports emerged in March that Cambridge Analytica, a political consultancy, accessed personal information on about 87 million Facebook users without their knowledge.

The penalty is expected to be much larger than the $22.5m fine the agency imposed on Google in 2012. That fine set a record for the greatest penalty for violating an agreement with the FTC to improve its privacy practices.</p>

It would have to be really big to make Facebook pay attention, but it's pretty clear that what happened with Cambridge Analytica and others violated the terms of Facebook's 2011 deal with the FTC. It's hard to see how they could come to any other conclusion.
ftc  facebook  privacy 
january 2019 by charlesarthur
China’s Orwellian social credit score isn’t real • Foreign Policy
Jamie Horsley:
<p>Under the system, government agencies compile and share across departments, regions, and sectors, and with the public, data on compliance with specified industry or sectoral laws, regulations, and agreements by individuals, companies, social organizations, government departments, and the judiciary. Serious offenders may be placed on blacklists published on an integrated national platform called Credit China and subjected to a range of government-imposed inconveniences and exclusions. These are often enforced by multiple agencies pursuant to joint punishment agreements covering such sectors as taxation, the environment, transportation, e-commerce, food safety, and foreign economic cooperation, as well as failing to carry out court judgments.

These punishments are intended to incentivize legal and regulatory compliance under the often-repeated slogan of “whoever violates the rules somewhere shall be restricted everywhere.” Conversely, “red lists” of the trustworthy are also published and accessed nationally through Credit China.

The scope, scale, diversity, and language of the evolving system have caused a lot of confusion, particularly with respect to the existence of a single social credit score. There is no such thing as a national “social credit score.” A few dozen towns and cities in China, as well as private companies running loyalty-type programs for their customers, do currently compute scores, primarily to determine rewards or access to various programs. That was the source of at least some of the confusion. Ant Financial’s Sesame Credit program, for instance, which gives rewards on various platforms and easier access to credit, was often cited as a precursor of a planned government program, despite being a private enterprise.

The government does assign universal social credit codes to companies and organizations, which they use as an ID number for registration, tax payments, and other activities, while all individuals have a national ID number. The existing social credit blacklists use these numbers, as do almost all activities in China. But these codes are not scores or rankings.</p>

For something that isn't real, that seems pretty real to me.
china  socialmedia  socialwarming  privacy 
november 2018 by charlesarthur
Period-tracking apps are not for women • Vox
Kaitlyn Tiffany:
<p>There have been free period-tracking apps ever since there have been apps, but they didn’t really boom until the rise of Glow — founded by PayPal’s Max Levchin and four other men — in 2013, which raised $23m in venture funding in its first year, and made it clear that the menstrual cycle was a big business opportunity.

By 2016, there were so many choices, surrounded by so little coherent information and virtually zero regulation, that researchers at Columbia University Medical Center <a href="">buckled down to investigate the entire field</a>. Looking at 108 free apps, they concluded, “Most free smartphone menstrual cycle tracking apps for patient use are inaccurate. Few cite medical literature or health professional involvement.” They also clarified that “most” meant 95 percent.

The Berlin-based, anti-fluff app Clue, founded by Ida Tin, would seem like an answer to this concern. It’s science-backed and science-obsessed, and offers a robust, doctor-sourced blog on women’s health topics. It arrived the same year as Glow but took several more to raise serious funding, provided mostly by Nokia in 2016. Today, Glow has around 15 million users and Clue has 10 million. There are still dozens of other options, but they’re undeniably the big two.

Still, they are not built for women.

“The design of these tools often doesn’t acknowledge the full range of women’s needs. There are strong assumptions built into their design that can marginalize a lot of women’s sexual health experiences,” Karen Levy, an assistant professor of information science at Cornell University, tells me in an email, after explaining that her period tracker couldn’t understand her pregnancy, “a several-hundred-day menstrual cycle.”

Levy coined the term “intimate surveillance” in <a href="">an expansive paper on the topic</a> in the Iowa Law Review in 2015. At the time, when she described intimate data collection as having passed from the state’s public health authorities to every citizen with a smartphone, she was mostly alone in her level of alarm.</p>
app  privacy  periods  ads 
november 2018 by charlesarthur
Facebook failed to police how its partners handled user data • The New York Times
Nicholas Confessore, Michael LaForgia and Gabriel J.X. Dance:
<p>When a team from PricewaterhouseCoopers conducted the initial F.T.C.-mandated assessment in 2013, it tested Facebook’s partnerships with Microsoft and Research in Motion, maker of the BlackBerry handset. In both cases, PricewaterhouseCoopers found only “limited evidence” that Facebook had monitored or checked its partners’ compliance with its data use policies. That finding was redacted from a public version of PricewaterhouseCoopers’s report released by the F.T.C. in June.

“Facebook claimed that its data-sharing partnerships with smartphone manufacturers were on the up and up,” [Oregon Democratic senator Ron] Wyden said. “But Facebook’s own, handpicked auditors said the company wasn’t monitoring what smartphone manufacturers did with Americans’ personal information, or making sure these manufacturers were following Facebook’s own policies.” He added, “It’s not good enough to just take the word of Facebook — or any major corporation — that they’re safeguarding our personal information.”

In a statement, a Facebook spokeswoman said, “We take the F.T.C. consent order incredibly seriously and have for years submitted to extensive assessments of our systems.” She added, “We remain strongly committed to the consent order and to protecting people’s information.”

Facebook, like other companies under F.T.C. consent decree, largely dictates the scope of each assessment. In two subsequent assessments, Facebook’s October letter suggests, the company was graded on a seemingly less stringent policy with data partners. On those two, Facebook had to show that its partners had agreed to its data use policies.

A Wyden aide who reviewed the unredacted assessments said they contained no evidence that Facebook had ever addressed the original problem. The Facebook spokeswoman did not directly address the 2013 test failure, or the company’s apparent decision to change the test in question.</p>

The FTC hit Facebook with a privacy consent decree in 2010. Except Facebook gets to decide the scope of the assessment? That's ludicrous. And then PWC redacts important content?
facebook  ftc  privacy 
november 2018 by charlesarthur
Facebook Portal non-review: why I didn’t put Facebook’s camera in my home • WSJ
Joanna Stern refused to review the Portal in her house, citing privacy concerns, though she did use it in the office:
<p>When I asked about the popular Facebook mic conspiracy, Mr. Bosworth assured me that “it is not true, it will continue to not be true.” On the Portals, specifically, he made a number of privacy and security assurances:

• You can disable the camera and microphone by pressing the button on top of the device. This physically disconnects them so even if the Portal were hacked, they wouldn’t be accessible.<br />• As an added measure, you can block the camera lens with an included plastic camera cover.<br />• All the smart-camera technology—the person detection, etc.—happens locally on Portal, not on Facebook servers. Portal’s camera doesn’t use facial recognition to identify people on the call.<br />• Like all Messenger calls and messages, all communications are encrypted.<br />• Like Amazon Echo or Google Home, Portal only sends voice commands to Facebook servers after you say, “Hey Portal.” You can delete Portal’s voice history in your Facebook Activity Log.

However, because this is using Facebook Messenger, the data that is typically collected from a call is still collected. That includes your call history, how long you spent talking to certain contacts, etc. Also, the sheer use of the device indicates to Facebook you’re interested in video calling, so you may be targeted for that. Speaking of ads, Facebook said there are no ads on the Portal’s screen, and the company doesn’t have plans to show ads there.

Facebook’s Promise: The Portal was designed so you’re always in control of your privacy and security.

My Assessment: It’s hard to believe we really have any control of our Facebook data and privacy given the last year.</p>

Facebook execs are clearly sincere about their desire to make the Portal private. But it's the scorpion riding on the frog's back: it'll sting you somehow eventually. That's just its nature. At the same time, the technology is smart. But will the people who can afford it be the ones prepared to let go of their privacy?
Facebook  portal  privacy 
november 2018 by charlesarthur
Apple's new anti-tracking feature in Safari takes toll • Ad Age
George Slefo:
<p>Nearly half of the $88bn spent on digital advertising went toward search last year and the Safari update is already starting to disrupt digital giants like Google.

For instance, the new version makes it more difficult for advertisers to deploy a practice known as remarketing lists for search ads, commonly called RLSA, that allows brands to segment different Google search audiences using their own data. Brands use RLSA to target consumers who visit their website, or abandon items in their shopping cart, through Google search. But "ITP 2 essentially kills the ability to use RLSA in the Safari browser," says Mark Ballard, VP of research at digital agency Merkle.

According to Merkle, the use of RLSA dropped soon after ITP2 came into effect, hitting a seven-month low for the month of September. "The trouble is there are still more questions than answers as to what ITP 2 is going to do," Ballard says. "It may take some months to develop and we have to watch the data to see what comes of it."</p>

Safari apparently has about a 50% share of mobile browsing in the US, coming from roughly 40% of the country's smartphones.
apple  advertising  privacy  safari 
november 2018 by charlesarthur
Apple’s Tim Cook blasts Silicon Valley over privacy issues • The Washington Post
Tony Romm:
<p>the Apple leader expressed alarm about divisive political rhetoric that proliferates on social media platforms, and rogue actors and governments that seize on algorithms to “deepen divisions, incite violence, and even undermine our shared sense of what is true and what is false.”

He also lamented an emerging “data industrial complex” — a play on a 1960s-era criticism of defense contractors — that allows companies to “know you better than you may know yourself.” Cook didn’t mention Facebook, Google or any other company by name.

Cook stressed that privacy is a “fundamental human right.” He praised the European Union’s newly implemented tough data-protection rules, and he called on U.S. regulators to pass a comprehensive digital privacy law of their own. 

“Now, more than ever — as leaders of governments, as decision-makers in business, and as citizens — we must ask ourselves a fundamental question: What kind of world do we want to live in?” he said.

For Cook, the speech Wednesday in Brussels marked his highest-profile critique to date of his peers in Silicon Valley. Hours later, top executives from Facebook and Google similarly pledged to protect their users’ data and pursue new advancements, such as artificial intelligence, in a responsible way. “We want to make sound choices and build products that benefit society,” said Sundar Pichai, the chief executive officer of Google, in a video address to attendees.</p>

Cook has been saying this for some years; all that's changing is the stage on which he says it and the volume with which he says it.
apple  privacy  politics  gdpr 
october 2018 by charlesarthur
Genome hackers show no one’s DNA is anonymous anymore • WIRED
Megan Molteni:
<p>the amount of DNA information housed in digital data stores has exploded, with no signs of slowing down. Consumer companies like 23andMe and Ancestry have so far created genetic profiles for more than 12 million people, according to recent industry estimates. Customers who download their own information can then choose to add it to public genealogy websites like GEDmatch, which gained national notoriety earlier this year for its role in leading police to a suspect in the Golden State Killer case.

Those interlocking family trees, connecting people through bits of DNA, have now grown so big that they can be used to find more than half the US population. In fact, according to new research led by Erlich, <a href="">published in Science</a>, more than 60% of Americans with European ancestry can be identified through their DNA using open genetic genealogy databases, regardless of whether they’ve ever sent in a spit kit.

“The takeaway is it doesn’t matter if you’ve been tested or not tested,” says Erlich, who is now the chief science officer at MyHeritage, the third largest consumer genetic provider behind 23andMe and Ancestry. “You can be identified because the databases already cover such large fractions of the US, at least for European ancestry.”</p>
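The scale effect Erlich describes can be illustrated with a simplified model (a back-of-the-envelope sketch, not the paper's actual method): if a database covers a fraction f of a population and a person has n relatives of a given degree, the chance that at least one of those relatives is in the database is 1 − (1 − f)^n, which climbs towards certainty surprisingly quickly.

```python
# Simplified illustration (not the Science paper's model): probability that
# at least one of n relatives appears in a database covering fraction f of
# the population, assuming each relative is included independently.

def match_probability(f: float, n: int) -> float:
    """P(at least one of n relatives is in a database covering fraction f)."""
    return 1 - (1 - f) ** n

# Even modest coverage makes a match very likely once distant cousins count;
# a person typically has hundreds of third cousins (200 here is illustrative).
for f in (0.005, 0.01, 0.02):
    print(f"coverage {f:.1%}: P(match) = {match_probability(f, 200):.2f}")
```

With 2% coverage and 200 relatives, the match probability is already around 0.98, which is why never having sent in a spit kit no longer protects you.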

Give it a few more years and governments trying to track people down (spies? Murderous assassins?) will publish DNA taken from the scene, sigh that they don't seem to have any more leads, and leave it to open-source journalists.
privacy  dna  computing 
october 2018 by charlesarthur
I’m an Amazon employee. My company shouldn’t sell facial recognition tech to police • Medium
It's a great year for important anonymous letters to publications about what's going on inside well-known but often impenetrable organisations:
<p>When a company puts new technologies into the world, it has a responsibility to think about the consequences. Amazon, where I work, is currently allowing police departments around the country to purchase its facial recognition product, Rekognition, and I and other employees demand that we stop immediately.

A couple weeks ago, my co-workers delivered a letter to this effect, signed by over 450 employees, to Jeff Bezos and other executives. The letter also contained demands to kick Palantir, the software firm that powers much of ICE’s deportation and tracking program, off Amazon Web Services and to institute employee oversight for ethical decisions.

We know Bezos is aware of these concerns and the industry-wide conversation happening right now. On stage, he acknowledged that big tech’s products might be misused, even exploited, by autocrats. But rather than meaningfully explain how Amazon will act to prevent the bad uses of its own technology, Bezos suggested we wait for society’s “immune response.”

If Amazon waits, we think the harm will be difficult to undo.

After all, our concern isn’t one about some future harm caused by some other company: Amazon is designing, marketing, and selling a system for dangerous mass surveillance right now…

…We know from history that new and powerful surveillance tools left unchecked in the hands of the state have been used to target people who have done nothing wrong; in the United States, a lack of public accountability already results in outsized impacts and over-policing of communities of color, immigrants, and people exercising their First Amendment rights. Ignoring these urgent concerns while deploying powerful technologies to government and law enforcement agencies is dangerous and irresponsible.</p>

There's also <a href="">an interview with the article writer</a>.
amazon  privacy  police 
october 2018 by charlesarthur
Google will soon give you greater control of your call logs and SMS data • Android Police
C Scott Brown:
<p>what if an app wants to do things related to making phone calls and sending text messages? Should that app have the ability to access your potentially sensitive call logs and SMS data simply through a normal permissions request notification?

Google thinks that is too open-ended, which is why it is specifying a new policy which will prevent applications from even asking for access to your call logs and/or SMS data unless you choose to make that app the default service for making phone calls or sending texts.

This will hopefully prevent apps you’ve downloaded but don’t use often from continuing to monitor your call logs and SMS data after you’ve installed them and given them permission to do so.

Granted, there are still ways rogue developers could abuse this policy, but it will at least make things a little more difficult…

…right now a developer could create an app which uses SMS in some way but doesn’t need to be set as the default service. The app can ask for access to SMS data, the user can agree, and even though the user may never use that app again, it will continuously have access to their data.

In other words, this new policy isn’t 100 percent secure, but it’s certainly better than the current policy. And, either way, it’s the user’s responsibility to only grant permissions to trustworthy apps.</p>

Typically terrible writeup. "Hopefully"? And no, it's Google's responsibility to write an OS which treats call and SMS data as something that shouldn't be accessible to other apps. Android is ten years old now. This shouldn't be something it's just discovering.
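The gist of the policy change can be modelled in a few lines. This is a hypothetical sketch of the gating logic as described above, not Google's actual implementation: requests for the call-log and SMS permission groups only get through if the user has made the app the default handler.

```python
# Hypothetical sketch of the gating logic described above -- not Google's
# actual code. Under the new policy, an app may request call-log or SMS
# permissions only if the user has set it as the default dialler/SMS app.

SENSITIVE_GROUPS = {"CALL_LOG", "SMS"}

def may_request(permission_group: str, is_default_handler: bool) -> bool:
    """Return True if the app is allowed to ask for this permission group."""
    if permission_group in SENSITIVE_GROUPS:
        # gated: only the default dialler/SMS handler may even ask
        return is_default_handler
    # ordinary permissions still go through the normal runtime prompt
    return True

# A messaging app the user never set as default can no longer even ask:
assert may_request("SMS", is_default_handler=False) is False
assert may_request("SMS", is_default_handler=True) is True
assert may_request("CAMERA", is_default_handler=False) is True
```

The point of gating on default-handler status is that it turns a one-off permission prompt into an ongoing, user-visible role: you can see (and change) which app is your SMS handler at any time.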
google  android  sms  privacy 
october 2018 by charlesarthur
Google exposed user data, feared repercussions of disclosing to public • WSJ
Douglas MacMillan and Robert McMillan:
<p>Google exposed the private data of hundreds of thousands of users of the Google+ social network and then opted not to disclose the issue this past spring, in part because of fears that doing so would draw regulatory scrutiny and cause reputational damage, according to people briefed on the incident and documents reviewed by The Wall Street Journal.

As part of its response to the incident, the Alphabet unit plans to announce a sweeping set of data privacy measures that include permanently shutting down all consumer functionality of Google+, the people said. The move effectively puts the final nail in the coffin of a product that was launched in 2011 to challenge Facebook and is widely seen as one of Google’s biggest failures.

A software glitch in the social site gave outside developers potential access to private Google+ profile data between 2015 and March 2018, when internal investigators discovered and fixed the issue, according to the documents and people briefed on the incident…

…In weighing whether to disclose the incident, the company considered “whether we could accurately identify the users to inform, whether there was any evidence of misuse, and whether there were any actions a developer or user could take in response,” [a Google spokesman] said. “None of these thresholds were met here.”

…The profile data that was exposed included full names, email addresses, birth dates, gender, profile photos, places lived, occupation and relationship status; it didn’t include phone numbers, email messages, timeline posts, direct messages or any other type of communication data, one of the people said.</p>

That is a long time for "potential access", which was via more than 130 APIs - masquerade as a developer and you're in. The further one reads into this story the more astonishing it is.

Google subsequently published a blog post about how <a href="">it's closing down "consumer Google+"</a> because, apparently, "there are significant challenges in creating and maintaining a successful Google+ product that meets consumers’ expectations."

And for those of us who said Google+ was a flop, here's what Google says today: "The consumer version of Google+ currently has low usage and engagement: 90% of Google+ user sessions are less than five seconds." How many of those from people hitting the wrong button in Gmail, I wonder?

But Google is still under a 20-year privacy oversight from the FTC, signed in 2011 after its disastrous Google Buzz experiment. The FTC must surely follow this up.
google  ftc  privacy  google+ 
october 2018 by charlesarthur
Exclusive: Tim Berners-Lee tells us his radical new plan to upend the World Wide Web • Fast Company
Katrina Brooker:
<p>Ever since revelations emerged that Facebook had allowed people’s data to be misused by political operatives, Berners-Lee has felt an imperative to get this digital idyll into the real world. In <a href="">a post published this weekend</a>, Berners-Lee explains that he is taking a sabbatical from MIT to work full time on Inrupt. The company will be the first major commercial venture built off of Solid, a decentralized web platform he and others at MIT have spent years building.

If all goes as planned, Inrupt will be to Solid what Netscape once was for many first-time users of the web: an easy way in. And like with Netscape, Berners-Lee hopes Inrupt will be just the first of many companies to emerge from Solid.

“I have been imagining this for a very long time,” says Berners-Lee. He opens up his laptop and starts tapping at his keyboard. Watching the inventor of the web work at his computer feels like what it might have been like to watch Beethoven compose a symphony: It’s riveting but hard to fully grasp. “We are in the Solid world now,” he says, his eyes lit up with excitement. He pushes the laptop toward me so I too can see.

On his screen, there is a simple-looking web page with tabs across the top: Tim’s to-do list, his calendar, chats, address book. He built this app–one of the first on Solid–for his personal use. It is simple, spare. In fact, it’s so plain that, at first glance, it’s hard to see its significance. But to Berners-Lee, this is where the revolution begins. The app, using Solid’s decentralized technology, allows Berners-Lee to access all of his data seamlessly–his calendar, his music library, videos, chat, research. It’s like a mashup of Google Drive, Microsoft Outlook, Slack, Spotify, and WhatsApp.

The difference here is that, on Solid, all the information is under his control. Every bit of data he creates or adds on Solid exists within a Solid pod–which is an acronym for personal online data store. These pods are what give Solid users control over their applications and information on the web. Anyone using the platform will get a Solid identity and Solid pod. This is how people, Berners-Lee says, will take back the power of the web from corporations.</p>

Hmm. Big intentions. Lot of inertia.
internet  privacy  bernerslee 
september 2018 by charlesarthur
Facebook is giving advertisers access to your shadow contact information • Gizmodo
Kashmir Hill:
<p>Last week, I ran an ad on Facebook that was targeted at a computer science professor named Alan Mislove. Mislove studies how privacy works on social networks and had a theory that Facebook is letting advertisers reach users with contact information collected in surprising ways. I was helping him test the theory by targeting him in a way Facebook had previously told me wouldn’t work. I directed the ad to display to a Facebook account connected to the landline number for Alan Mislove’s office, a number Mislove has never provided to Facebook. He saw the ad within hours.

<img src=",f_auto,fl_progressive,q_80,w_800/vq2h4pvl4xmy9sannbiy.jpg" width="100%" />
<em>What Facebook told Alan Mislove about the ad I targeted at his office landline number
Screenshot: Facebook (Alan Mislove)</em>

One of the many ways that ads get in front of your eyeballs on Facebook and Instagram is that the social networking giant lets an advertiser upload a list of phone numbers or email addresses it has on file; it will then put an ad in front of accounts associated with that contact information… Facebook calls this a “custom audience.”

…Giridhari Venkatadri, Piotr Sapiezynski, and Alan Mislove of Northeastern University, along with Elena Lucherini of Princeton University, did a series of tests that involved handing contact information over to Facebook for a group of test accounts in different ways and then seeing whether that information could be used by an advertiser. They came up with a novel way to detect whether that information became available to advertisers by looking at the stats provided by Facebook about the size of an audience after contact information is uploaded. They go into this in greater length and technical detail <a href="">in their paper</a>.

They found that when a user gives Facebook a phone number for two-factor authentication or in order to receive alerts about new log-ins to a user’s account, that phone number became targetable by an advertiser within a couple of weeks. </p>

That two-factor authentication detail is truly shocking.
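Mechanically, custom-audience matching works on hashed contact identifiers: the advertiser normalises and hashes its list, and the platform matches those hashes against the contact data it holds. A simplified sketch, based on Facebook's documented SHA-256 scheme (the normalisation rules here are illustrative):

```python
import hashlib

def normalize_email(email: str) -> str:
    # lowercase and trim whitespace before hashing
    return email.strip().lower()

def normalize_phone(phone: str) -> str:
    # keep digits only; real uploads expect the country code included
    return "".join(ch for ch in phone if ch.isdigit())

def hash_identifier(value: str) -> str:
    # custom-audience uploads use SHA-256 of the normalized identifier
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# An advertiser's list and the platform's stored contact info match whenever
# the normalized hashes collide -- which is why a phone number supplied only
# for two-factor authentication can still make a user targetable.
advertiser_hash = hash_identifier(normalize_phone("+1 (617) 555-0123"))
platform_hash = hash_identifier(normalize_phone("16175550123"))
assert advertiser_hash == platform_hash
```

The hashing protects the raw list in transit, but it does nothing to limit *which* stored numbers the platform matches against, and that is the gap the Northeastern researchers exposed.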
facebook  advertising  privacy 
september 2018 by charlesarthur
Disable Google Chrome sign in and sync • No Absolute Truths @ IdeaSynthesis
Femi Omojola:
<p><a href="">As you might have heard</a>, Chrome 69 automatically logs you into the browser when you log into any Google property. As much as I might like Chrome (and Google), I was quite displeased by this particular change: I assume it was in the release notes (that probably a vanishingly small number of Chrome users read), but the rationale that's been given for the change doesn't really make sense, and in any case I really prefer not to have anything synced anywhere. It definitely (for me at least) violated the <a href="">principle of least astonishment</a>: I can't speak for anyone else but I personally don't expect a routine software upgrade to suddenly start uploading passwords somewhere, or copying my passwords onto any random computer I happen to log into.

As noted in the first article above, the Sync enabled/disabled UI was singularly confusing to me as to what the state of things are, and a careful search (well, about 1 minute) through the Chrome settings pages didn't really shed much more light on exactly how I could guarantee no data gets inadvertently synced. I set out to figure out how I could keep using Chrome but still feel relatively comfortable that Chrome Sync wasn't helpfully distributing my data. After a couple of hours running around I finally got it together thanks to <a href=""></a>.</p>

It's a couple of commands in the Terminal window, or two lines in Windows Registry.
google  chrome  browser  privacy 
september 2018 by charlesarthur
Why I’m done with Chrome • A Few Thoughts on Cryptographic Engineering
Matthew Green:
<p>A few weeks ago Google shipped an update to Chrome that fundamentally changes the sign-in experience. From now on, every time you log into a Google property (for example, Gmail), Chrome will automatically sign the browser into your Google account for you. It’ll do this without asking, or even explicitly notifying you. (However, and this is important: Google developers claim this will not actually start synchronizing your data to Google — yet. See further below.)…

…The change hasn’t gone entirely unnoticed: it received some vigorous discussion on sites like Hacker News. But the mainstream tech press seems to have ignored it completely. This is unfortunate — and I hope it changes — because this update has huge implications for Google and the future of Chrome.

In the rest of this post, I’m going to talk about why this matters. From my perspective, this comes down to basically four points:

1. Nobody on the Chrome development team can provide a clear rationale for why this change was necessary, and the explanations they’ve given don’t make any sense.<br />2. This change has enormous implications for user privacy and trust, and Google seems unable to grapple with this.<br />3. The change makes a hash out of Google’s own privacy policies for Chrome.<br />4. Google needs to stop treating customer trust like it’s a renewable resource, because they’re screwing up badly.</p>

I don't use Chrome because it's a gigantic CPU suck, but whatever.
google  chrome  privacy 
september 2018 by charlesarthur