charlesarthur : security

New research: lessons from Password Checkup in action • Google Online Security Blog
<p>Back in February, we announced the <a href="">Password Checkup extension</a> for Chrome to help keep all your online accounts safe from hijacking. The extension displays a warning whenever you sign in to a site using one of over 4 billion usernames and passwords that Google knows to be unsafe due to a third-party data breach. Since our launch, over 650,000 people have participated in our early experiment. In the first month alone, we scanned 21 million usernames and passwords and flagged over 316,000 as unsafe - 1.5% of sign-ins scanned by the extension.

Today, we are sharing our most recent lessons from the launch and announcing an updated set of features for the Password Checkup extension. Our full research study, <a href="">available here</a>, will be presented this week as part of the USENIX Security Symposium.

Which accounts are most at risk?

Hijackers routinely attempt to sign in to sites across the web with every credential exposed by a third-party breach. If you use strong, unique passwords for all your accounts, this risk disappears. Based on anonymous telemetry reported by the Password Checkup extension, we found that users reused breached, unsafe credentials for some of their most sensitive financial, government, and email accounts. This risk was even more prevalent on shopping sites (where users may save credit card details), news, and entertainment sites.

In fact, outside the most popular web sites, users are 2.5x more likely to reuse vulnerable passwords, putting their account at risk of hijacking.</p>
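For a sense of how a client can check a password against a breach corpus without handing the password over: Google's extension actually uses a privacy-preserving private set intersection protocol, but the simpler k-anonymity idea used by services such as Have I Been Pwned illustrates the principle. The client sends only a short hash prefix; the server returns every suffix it knows for that prefix; the match happens locally. A minimal sketch (not Google's protocol):

```python
import hashlib

def sha1_prefix_split(password: str, prefix_len: int = 5):
    """Split a SHA-1 hash into a short prefix (sent to the server)
    and the remaining suffix (kept on the client)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:prefix_len], digest[prefix_len:]

def is_breached(password: str, suffixes_for_prefix: set) -> bool:
    """Compare the locally-kept suffix against the candidate suffixes
    the server returned for our prefix; the server never sees the
    full hash, let alone the password."""
    _, suffix = sha1_prefix_split(password)
    return suffix in suffixes_for_prefix
```

The server learns only that someone queried one of the (many) passwords sharing that five-character prefix.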

Users are the problem, I guess. 4 billion username/password combinations are unsafe? That's really a lot.
password  security  hacking 
5 days ago by charlesarthur
Major breach found in biometrics system used by banks, UK police and defence firms • The Guardian
Josh Taylor:
<p>The fingerprints of over 1 million people, as well as facial recognition information, unencrypted usernames and passwords, and personal information of employees, was discovered on a publicly accessible database for a company used by the likes of the UK Metropolitan police, defence contractors and banks.

Suprema is the security company responsible for the web-based Biostar 2 biometrics lock system that allows centralised control for access to secure facilities like warehouses or office buildings. Biostar 2 uses fingerprints and facial recognition as part of its means of identifying people attempting to gain access to buildings.

Last month, Suprema announced its Biostar 2 platform was integrated into another access control system – AEOS. AEOS is used by 5,700 organisations in 83 countries, including governments, banks and the UK Metropolitan police.

The Israeli security researchers Noam Rotem and Ran Locar, working with vpnmentor, a service that reviews virtual private network services, have been running a side project that scans ports looking for familiar IP blocks, then uses those blocks to find holes in companies’ systems that could potentially lead to data breaches.

In a search last week, the researchers found Biostar 2’s database was unprotected and mostly unencrypted. They were able to search the database by manipulating the URL search criteria in Elasticsearch to gain access to data.</p>
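"Manipulating the URL search criteria" is as simple as it sounds: an unprotected Elasticsearch cluster answers `_search` queries from anyone who can reach its port (9200 by default). A sketch of what such a query URL looks like, with the host, index and query invented for illustration:

```python
from urllib.parse import urlencode

def es_search_url(host: str, index: str, query: str, size: int = 10) -> str:
    """Build an Elasticsearch _search URL. On a cluster with no
    authentication, anyone who can reach port 9200 can run queries
    like this and page through the results."""
    params = urlencode({"q": query, "size": size})
    return f"http://{host}:9200/{index}/_search?{params}"
```

Change `q` and `size` in the URL and you walk the whole database, which is essentially what the researchers describe.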

Not clear how you could use the fingerprints, though.
security  biometrics  hacking 
6 days ago by charlesarthur
Inside the hidden world of elevator phone phreaking • WIRED
Andy Greenberg:
<p>"I can dial into an elevator phone, listen in on private conversations, reprogram the phone so that if someone hits it in an emergency it calls a number of my choosing," [security researcher Will] Caruana told me in our first conversation. Elevator phones typically emit audible beeps in the elevator when they connect. But if someone has dialed into the phone of the elevator you're riding before you enter it, Caruana warned me, the only indication might be a red light on the phone's panel. "It’s hard to notice if you're not looking for it," Caruana says.

Over the last year, Caruana has assembled what he believes is the largest public list of elevator phone numbers, which he plans to make available to a limited audience—although he declined to say where exactly he's publishing it. He says he's releasing the list of 80-plus numbers not just because he wants to foster more elevator phone phreaking as an opportunity for whimsy and chance encounters, but also to draw attention to the possibility that elevator phones could be abused for serious privacy invasion and even sabotage. Call up most elevator phones and press 2, and you'll be asked to enter a password to reprogram them. In far too many cases, Caruana says, phone installers and building managers don't change those passwords from easily guessable default codes, allowing anyone to tamper with their settings.</p>

Though who'd expect someone to create a list of all the phone numbers for lifts in the world?
security  hacking  lifts 
8 days ago by charlesarthur
Critical US election systems have been left exposed online despite official denials • VICE
Kim Zetter:
<p>For years, US election officials and voting machine vendors have insisted that critical election systems are never connected to the internet and therefore can’t be hacked.

But a group of election security experts have found what they believe to be nearly three dozen backend election systems in 10 states connected to the internet over the last year, including some in critical swing states. These include systems in nine Wisconsin counties, in four Michigan counties, and in seven Florida counties—all states that are perennial battlegrounds in presidential elections.

Some of the systems have been online for a year and possibly longer. Some of them disappeared from the internet after the researchers notified an information-sharing group for election officials last year. But at least 19 of the systems, including one in Florida’s Miami-Dade County, were still connected to the internet this week, the researchers told Motherboard…

…The systems the researchers found are made by Election Systems & Software, the top voting machine company in the country. They are used to receive encrypted vote totals transmitted via modem from ES&S voting machines on election night, in order to get rapid results that media use to call races, even though the results aren’t final.</p>
security  hacking  elections  voting 
12 days ago by charlesarthur
New York City to consider banning sale of cellphone location data • The New York Times
Jeffery Mays:
<p>Telecommunications firms and mobile-based apps make billions of dollars per year by selling customer location data to marketers and other businesses, offering a vast window into the whereabouts of cellphone and app users, often without their knowledge.

That practice, which has come under increasing scrutiny and criticism in recent years, is now the subject of proposed legislation in New York. If passed, it is believed that the city would become the first to ban the sale of geolocation data to third parties.

The bill, which will be introduced on Tuesday, would make it illegal for cellphone companies and mobile app developers to share location data gathered while a customer’s mobile device is within the five boroughs.

Cellphone companies and mobile apps collect detailed geolocation data of their users and then sell that information to legitimate companies such as digital marketers, roadside emergency assistance services, retail advertisers, hedge funds or — in the case of a class-action lawsuit filed against AT&T — bounty hunters.

“The average person has no idea they are vulnerable to this,” said Councilman Justin L. Brannan, a Brooklyn Democrat who is introducing the bill. “We are concerned by the fact that someone can sign up for cell service and their data can wind up in the hands of five different companies.”</p>

Just me, or is it madness that NYC is only the first, and that this is only "proposed" legislation which, the story says, will be strongly opposed by the ad tech industry "which has a strong presence in the city"? Make their execs' location data public; let's see how they feel about it then.
security  location  mobile  nyc 
28 days ago by charlesarthur
What the Slack security incident meant for me, the Keybase CEO • Keybase
Max Krohn was packing for a holiday in January when he got a Slack notification that he had logged in from the Netherlands:
<p>My immediate thoughts, in order:

• Thankfully we don't put sensitive communications (from financials to hiring to shit-talkin') into Slack. We basically just use a #breaking channel in there in case we have Keybase downtime. Phew. I didn't have to worry about being extorted or embarrassed. And Keybase as a company would almost certainly emerge unscathed.<br />• WAIT A SEC. How did this happen? I use strong, secure, distinct, random passwords for all services I log into. Either Slack itself was compromised, my password manager was compromised, or my computers were "rooted" by an attacker.<br/>• Our weekend was hosed.

At risk of getting the car towed, I dashed an email off to Slack's security team, and after a few back-and-forths, received the standard fare. They did not inform me of the directly related 2015 Security Incident but instead implied that I was messy with my security practices and was to blame.

Though I was more than 90% convinced that Slack had been compromised, as the CEO of a security-focused company, I couldn't take any risks. I had to assume the worst, that my computers were compromised.

In the subsequent days and weeks, I reset all of my passwords, threw away all my computers, bought new computers, factory-reset my phone, rotated all of my Keybase devices (i.e., rotated my "keys"), and reestablished everything from the ground up.</p>

Turned out he hadn't been keylogged, but Slack had really screwed up in 2015. Four years ago.
slack  password  security 
4 weeks ago by charlesarthur
Malicious apps infect 25 million Android devices with 'Agent Smith' malware •
Cat Ferguson:
<p>The apps, most of them games, were distributed through third-party app stores by a Chinese group with a legitimate business helping Chinese developers promote their apps on outside platforms. Check Point is not identifying the company, because they are working with local law enforcement. About 300,000 devices were infected in the US.

The malware was able to copy popular apps on the phone, including WhatsApp and the web browser Opera, inject its own malicious code and replace the original app with the weaponized version, using a vulnerability in the way Google apps are updated. The hijacked apps would still work just fine, which hid the malware from users.

Armed with all the permissions users had granted to the real apps, "Agent Smith" was able to hijack other apps on the phone to display unwanted ads to users. That might not seem like a significant problem, but the same security flaws could be used to hijack banking, shopping and other sensitive apps, according to Aviran Hazum, head of Check Point's analysis and response team for mobile devices.

"Hypothetically, nothing is stopping them from targeting bank apps, changing the functionality to send your bank credentials" to a third party, Hazum said. "The user wouldn't be able to see any difference, but the attacker could connect to your bank account remotely."</p>
security  android  hacking  counterfeit 
5 weeks ago by charlesarthur
Apple disables Walkie Talkie app due to vulnerability that could allow iPhone eavesdropping • TechCrunch
Matthew Panzarino:
<p>Apple has disabled the Apple Watch Walkie Talkie app due to an unspecified vulnerability that could allow a person to listen to another customer’s iPhone without consent, the company told TechCrunch this evening.

Apple has apologized for the bug and for the inconvenience of being unable to use the feature while a fix is made.

The Walkie Talkie app on Apple Watch allows two users who have accepted an invite from each other to receive audio chats via a “push to talk” interface reminiscent of the PTT buttons on older cell phones.</p>

People use the Walkie Talkie app? Amazing.
apple  watch  security  vulnerability  hacking 
5 weeks ago by charlesarthur
Superhuman’s superficial privacy fixes do not prevent it from spying on you • Mike Industries
Mike Davidson:
<p>[Rahul Vohra's response to last week's criticisms] also establishes that Superhuman is keeping the feature working almost exactly as-is, with the exception of not collecting or displaying actual locations. I’ve spoken with several people about how they interpreted Rahul’s post on this particular detail. Some believed the whole log of timestamped read events was going away and were happy about that. Others read it the way Walt, Josh, and I did: you can still see exactly when and how many times someone has opened your email, complete with multiple timestamps — you just can’t see the location anymore. That, to me, is not sufficient. “A little less creepy” is still creepy.

Also worth noting, “turning receipts off by default” does nothing to educate customers about the undisclosed surveillance they are enabling if they flip that switch. If they’ve used read receipts at all in the past, they will probably assume it works just like Outlook. At the very least, Superhuman should display a message when you flip that switch saying something like “by turning on Read Receipts, you are monitoring your recipients’ actions without their knowledge or permission. Are you sure you want to do this?”

Rahul’s fifth and final fix [building an option to disable remote image loading <em>in Superhuman users' emails</em>] is also good in that they now realize pixel spying is a threat that they need to protect their own users from. This introduces a moral paradox, however: if the technology you are using on others is something you need to protect your own users from, then why are you using it on others in the first place? These are all questions I’ve asked Rahul publicly in this series of tweets, which I’m still waiting for a response on, four days later:</p>
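To make "pixel spying" concrete: a read receipt of this kind is just a unique, invisible image URL embedded in the outgoing mail. Whenever the recipient's client loads remote images, the sender's server logs a timestamped hit for that URL. A hypothetical sketch (the names and URL scheme are invented for illustration, not Superhuman's implementation):

```python
import uuid

def tracking_pixel_html(base_url: str, message_id: str):
    """Embed a 1x1 image whose URL is unique to this message; every
    fetch of that URL tells the sender's server the mail was opened,
    and when. Returns the token (for the server's log) and the HTML
    snippet to splice into the message body."""
    token = uuid.uuid4().hex
    pixel_url = f"{base_url}/open/{message_id}/{token}.gif"
    html = f'<img src="{pixel_url}" width="1" height="1" alt="" />'
    return token, html
```

Which is why "disable remote image loading" is the only real defence on the recipient's side.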
security  business  superhuman  email  pixel 
6 weeks ago by charlesarthur
Zoom Zero Day: 4+ Million Webcams & maybe an RCE? Just get them to visit your website! • Medium
Jonathan Leitschuh:
<p>This vulnerability allows any website to forcibly join a user to a Zoom call, with their video camera activated, without the user's permission. On top of this, this vulnerability would have allowed any webpage to DOS (Denial of Service) a Mac by repeatedly joining a user to an invalid call.

Additionally, if you’ve ever installed the Zoom client and then uninstalled it, you still have a localhost web server on your machine that will happily re-install the Zoom client for you, without requiring any user interaction on your behalf besides visiting a webpage. This re-install ‘feature’ continues to work to this day.</p>
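Leitschuh's write-up reported the hidden web server listening on localhost, reportedly port 19421. Checking whether anything is answering on a local port is trivial; a generic sketch (not Zoom-specific tooling):

```python
import socket

def localhost_port_open(port: int, timeout: float = 0.25) -> bool:
    """Return True if something is listening on 127.0.0.1:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex(("127.0.0.1", port)) == 0
```

Any website you visit can make requests to that same localhost port from JavaScript, which is the heart of the vulnerability.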

Zoom puts a server with an open port on your machine, and doesn't wipe it if the app is deleted, all so you won't have to click "OK" to access your camera. It can re-download the app if you delete it; a host can force your video camera on when you join a meeting. It's an unbelievable hot mess of security vulnerabilities, to which it responded with a <a href="">mea not so much culpa</a> ("There is only one scenario where a Zoom user’s video is automatically enabled upon joining a meeting. Two conditions must be met: 1) The meeting creator (host) has set their participants’ video to be on AND 2) The user has not checked the box to turn their video off" 🙄). Zoom really doesn't understand it. But it's a publicly traded company whose mission is "make video communications frictionless"; notice that "frictionless" doesn't have to mean "secure", nor does it contain any concern about collateral damage in getting rid of friction.
security  vulnerability  hacking  zoom 
6 weeks ago by charlesarthur
Over 1,300 Android apps scrape personal data regardless of permissions • TechRadar
David Lumb:
<p>Researchers at the International Computer Science Institute (ICSI) created a controlled environment to test 88,000 apps downloaded from the US Google Play Store. They peeked at what data the apps were sending back, compared it to what users were permitting and - surprise - <a href="">1,325 apps were forking over specific user data they shouldn’t have</a>.

Among the test pool were “popular apps from all categories,” according to ICSI’s report. 

The researchers disclosed their findings to both the US Federal Trade Commission and Google (receiving a bug bounty for their efforts), though the latter stated a fix would only be coming in the full release of Android Q, according to CNET.

Before you get annoyed at yet another unforeseen loophole, those 1,325 apps didn’t exploit a lone security vulnerability - they used a variety of angles to circumvent permissions and get access to user data, including geolocation, emails, phone numbers, and device-identifying IMEI numbers.

One way apps determined user locations was to get the MAC addresses of connected WiFi base stations from the ARP cache, while another used picture metadata to discover specific location info even if a user didn’t grant the app location permissions. The latter is what the ICSI researchers described as a “side channel” - using a circuitous method to get data.

They also noticed apps using “covert channels” to snag info: third-party code libraries developed by a pair of Chinese companies secretly used the SD card as a storage point for the user’s IMEI number. If a user allowed a single app using either of those libraries access to the IMEI, it was automatically shared with other apps.</p>
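The "covert channel" amounts to nothing more than two apps agreeing on a shared, world-readable file: one app that holds the permission writes the identifier; another that was never granted it simply reads the file. A toy illustration in Python (a temp directory standing in for the SD card; not the actual library code):

```python
import os
import tempfile

# Stands in for world-readable external storage (the SD card).
SHARED = tempfile.mkdtemp()

def app_a_store_imei(imei: str) -> None:
    """App A holds READ_PHONE_STATE, so it can read the IMEI;
    it caches it in shared storage where anything can see it."""
    with open(os.path.join(SHARED, ".imei_cache"), "w") as f:
        f.write(imei)

def app_b_read_imei():
    """App B was never granted the permission; it just reads the
    file App A left behind."""
    path = os.path.join(SHARED, ".imei_cache")
    return open(path).read() if os.path.exists(path) else None
```

The permission model checks who asks the OS for the data, not who reads a file another app wrote, which is exactly the gap the Chinese SDKs exploited.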

Android Q isn't going to be universally adopted by any means. Data leaks are going to go on.
android  data  privacy  security 
6 weeks ago by charlesarthur
Read statuses • Superhuman
Rahul Vohra is CEO of Superhuman, the pricey email app which has been getting dinged this week:
<p>Over the last few days, we have seen four main criticisms of read statuses in Superhuman:

• Location data could be used in nefarious ways<br />• Read statuses are on by default<br />• Recipients of emails cannot opt out<br />• Superhuman users cannot disable remote image loading

On all these, we hear you loud and clear. We are making these changes:<br />• We have stopped logging location information for new email, effective immediately<br />• We are releasing new app versions today that no longer show location information<br />• We are deleting all historical location data from our apps<br />• We are keeping the read status feature, but turning it off by default. Users who want it will have to explicitly turn it on<br />• We are prioritizing building an option to disable remote image loading.</p>

That was satisfactorily quick. Vohra seems sincere in his apology (though he also points out that other "prosumer" email apps use "read status on by default").
security  email  superhuman 
6 weeks ago by charlesarthur
Before you use a password manager • Medium
Stuart Schechter:
<p>In this article, I’ll start by examining the benefits and risks of using a password manager. It’s hard to overstate the importance of protecting the data in your password manager, and having a recovery strategy for that data, so I’ll cover that next. I’ll then present a low-risk approach to experimenting with using a password manager, which will help you understand the tough choices you’ll need to make before using it for your most-important passwords. I’ll close with a handy list of the most important decisions you’ll need to make when using a password manager.

There are a lot of password managers to choose from. There’s a password manager built into every major web browser today, and many stand-alone password managers that work across browsers. In addition to remembering your passwords, most password managers will type your password into login forms. The better ones will create randomly-generated passwords for you, ensuring that you’re not using easily-guessed passwords or re-using passwords between sites. Some will even identify passwords you’ve re-used between sites and help you replace them.</p>
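The randomly-generated-password point is worth making concrete: a good manager draws each character from a cryptographically secure RNG, so the result is never guessable or reused. Roughly, in Python:

```python
import secrets
import string

# Letters, digits and punctuation: ~6.5 bits of entropy per character.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Generate a password the way the better managers do: every
    character drawn independently from a CSPRNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

A 20-character password from this alphabet has on the order of 130 bits of entropy, far beyond anything a human would memorise, which is why the manager has to do the remembering.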

The low-risk approach seems like a good plan. It's the idea of jumping in that many people find problematic.
security  software  privacy  password 
8 weeks ago by charlesarthur
Samsung accidentally makes the case for not owning a smart TV • The Verge
Jon Porter on Samsung's <a href="">bizarre tweet</a> suggesting owners of its smart TVs should do a virus scan every few weeks or so:
<p>There haven’t been any recent security vulnerabilities reported for Samsung’s smart TVs, but back in 2017 WikiLeaks revealed that the CIA had developed a piece of software called “Weeping Angel” that was capable of turning Samsung’s smart TVs into a listening device. Less than a month later a security researcher found 40 zero-day vulnerabilities in Samsung’s smart TV operating system, Tizen. At the time, Samsung released a blog post detailing the security features of its TVs, which includes its ability to detect malicious code on both its platform and application levels.

Virus scans are another reminder of how annoying modern smart TVs can be. Sure, they have pretty much every streaming app under the sun built in, and Samsung’s models can even be used to stream games from a local PC. But they also contain microphones that can be a privacy risk, and are entrusted with credit card details for buying on-demand video content. Even when everything’s working as the manufacturer intended, they can be yet another way of putting ads in front of you, either on your home screen or even in some cases directly into your own video content.

Samsung’s little PSA about scanning for “malware viruses” (eh hem) might be a sound security practice on a Samsung smart TV, but it’s also an excellent reminder for why you might not want to buy one in the first place.</p>

The microphones are obviously for voice commands. The world is full of microphones.
security  samsung  tv  virus 
9 weeks ago by charlesarthur
US escalates online attacks on Russia’s power grid • The New York Times
David Sanger and Nicole Perlroth:
<p>In interviews over the past three months, the officials described the previously unreported deployment of American computer code inside Russia’s grid and other targets as a classified companion to more publicly discussed action directed at Moscow’s disinformation and hacking units around the 2018 midterm elections.

Advocates of the more aggressive strategy said it was long overdue, after years of public warnings from the Department of Homeland Security and the FBI that Russia has inserted malware that could sabotage American power plants, oil and gas pipelines, or water supplies in any future conflict with the United States.

But it also carries significant risk of escalating the daily digital Cold War between Washington and Moscow.</p>

Quite a thing, right? And now look at this little extra, buried wayyyy down the story:
<p>Two administration officials said they believed Mr. Trump had not been briefed in any detail about the steps to place “implants” — software code that can be used for surveillance or attack — inside the Russian grid.

Pentagon and intelligence officials described broad hesitation to go into detail with Mr. Trump about operations against Russia for concern over his reaction — and the possibility that he might countermand it or discuss it with foreign officials, as he did in 2017 when he mentioned a sensitive operation in Syria to the Russian foreign minister.

Because the new law defines the actions in cyberspace as akin to traditional military activity on the ground, in the air or at sea, no such briefing would be necessary, they added.</p>

Shall we tell the president? Nah, better not.
infrastructure  russia  power  hacking  security 
9 weeks ago by charlesarthur
New security warning issued for Google's 1.5 billion Gmail and Calendar users • Forbes
Davey Winder:
<p>users of the Gmail service are being targeted primarily through the use of malicious and unsolicited Google Calendar notifications. Anyone can schedule a meeting with you, that's how the calendar application is designed to work. Gmail, which receives the notification of the invitation, is equally designed to tightly integrate with the calendaring functionality.

When a calendar invitation is sent to a user, a pop-up notification appears on their smartphone. The threat actors craft their invitations to include a malicious link, leveraging the trust that user familiarity with calendar notifications brings with it.

The researchers have noticed attackers throughout the last month using this technique to effectively spam users with phishing links to credential stealing sites. By populating the location and topic fields to announce a fake online poll or questionnaire with a financial incentive to participate, the threat actors encourage the victim to follow the malicious link where bank account or credit card details can be collected. By exploiting such a "non-traditional attack vector," the criminals can get around the fact that people are increasingly aware of common methods to encourage link-clicking.

"Beyond phishing, this attack opens up the doors for a whole host of social engineering attacks," says Javvad Malik, security awareness advocate at KnowBe4. Malik told me that in order to gain access to a building, for example, you could put in a calendar invite for an interview or similar face to face appointment such as building maintenance which, he warns "could allow physical access to secure areas."</p>

Google was told about this in 2017, and said that "making this change would cause major functionality drawbacks for legitimate API events with regards to Calendar." But don't worry! It scans for malicious links. Huh. Apple had a similar problem - spammy calendar invites being sent, mainly from China - <a href="">in November 2016</a>. Seems to have solved it.
security  hacking  google 
9 weeks ago by charlesarthur
For sale: Have I Been Pwned • Gizmodo
Jennings Brown:
<p>In a <a href="">blog post</a>, [security researcher Troy] Hunt explained the reasons for his decisions and hopes for the future of the platform.

“It’s time to go from that one guy doing what he can in his available time to a better-resourced and better-funded structure that’s able to do way more than what I ever could on my own,” Hunt wrote.

The blog states that HIBP now has almost 3 million subscribers for notifications, and the platform can now check about eight billion breached records. According to Hunt the site usually gets around 150,000 unique visits on a typical day, and 10 million unique visits on an “abnormal day.”

Troy wrote that traffic spiked in January when he broke the news of the behemoth “Collection #1” breach that exposed 773 million emails and 21 million passwords. Since then, the site has continued to grow and Hunt has come to the realization he “was getting very close to burn-out.”

Now he’s ready to hand much of the workload off. Hunt said he is laying the groundwork for acquisition and has had some early talks with organizations who may be interested in acquiring HIBP.</p>

One possible buyer is, apparently, Mozilla; wonder if they'll try to monetise it if they do purchase it. HIBP is good if you care about data breaches, but since Hunt started it in December 2013, they've gone from being a bit unusual to being completely quotidian. It's almost a surprise if you have an email address that <em>hasn't</em> been revealed in a breach at some point.
hacking  security  troyhunt 
10 weeks ago by charlesarthur
Personal details of 23m drivers given out by DVLA • The Times
Graeme Paton:
<p>The information watchdog is to hold an inquiry after the Driver and Vehicle Licensing Agency released the personal details of a record 23 million vehicle owners last year.

The Times has learnt that an unprecedented 63,600 records a day were handed to third parties including bailiffs and private investigators, often allowing motorists to be aggressively pursued for parking and toll road fines.

The DVLA charged organisations to obtain almost 7.8 million records, suggesting that it made £19.4m from the release of the data of almost two thirds of all vehicle owners in the UK.

Motoring groups called for an independent inquiry amid questions over how a data release on this scale could be properly policed, particularly in light of the rigorous new General Data Protection Regulation (GDPR) introduced across Europe last year.

There are fears that not all organisations that obtained the vehicle records did so legitimately, nor put them to a proper use.</p>
dvla  data  security  gdpr 
10 weeks ago by charlesarthur
China accused of 'rigging' 5G tests to favour Huawei • Daily Telegraph
Anna Isaac, Christopher Williams and Hannah Boland:
<p>More than 100 computer security experts are conducting a security test of 5G equipment, from makers including Huawei and Western rivals Nokia and Ericsson, in which hacking techniques are used to check for weak spots. The ostensibly legitimate exercise is part of planning for 5G and its leap forward in speed and data capacity in the world’s biggest mobile market.

However, British officials and industry sources tracking the tests allege they are being rigged to defend Huawei. It is believed that vulnerabilities discovered by China’s secret state hackers have been passed to the 5G testers to ensure Nokia and Ericsson’s equipment is found to be insecure.

Officials and Western telecoms executives held crisis meetings about the campaign last week.

Although knowledge of the effort is patchy, it is expected that testing will end around June 10, in time for Beijing to use the results to attempt to influence a crucial EU review of 5G security this summer. Two sources suggested China particularly intends to undermine cautionary advice on Huawei provided by British intelligence. Beijing’s hacking attack comes after a series of steps to turn China into what one corporate source has called a “hostile environment for non-Chinese telecoms firms”.</p>

The discomfort of western intelligence agencies at this is very clear. It would be astonishing if China's leaders didn't long ago decide that telecoms is a critical infrastructure for the future, and that if they happen to be the ones supplying to the rest of the world, all the better.
huawei  hacking  security 
11 weeks ago by charlesarthur
Google relents slightly on blocking ad-blockers – for paid-up enterprise Chrome users, everyone else not so much • The Register
Thomas Claburn:
<p>Google Chrome users will continue to have access to the full content blocking power of the webRequest API in their browser extensions, but only if they're paying enterprise customers.

Everyone else will have to settle for extensions that use the <a href="">neutered</a> declarativeNetRequest API, which is being developed as part of a pending change to the way Chrome Extensions work. And chances are Chrome users will have fewer extensions to choose from because some developers won't be able to rework their extensions so they function under the new regime, or won't want to do so…

…developer Raymond Hill, who created popular content control extension uBlock Origin, contends blocking capabilities matter more than observing. Losing the ability to block content with the webRequest API is his main concern.

"This breaks uBlock Origin and uMatrix, [which] are incompatible with the basic matching algorithm [Google] picked, ostensibly designed to enforce EasyList-like filter lists," he explained in an email to The Register. "A blocking webRequest API allows open-ended content blocker designs, not restricted to a specific design and limits dictated by the same company which states that content blockers are a threat to its business."

Google did not respond to a request for comment. The ad biz previously said its aim with Manifest v3 is "to create stronger security, privacy, and performance guarantees."

But Hill, in <a href="">a note</a> posted over the weekend to GitHub, observes that performance problems arise more from bloated web pages stuffed with tracking code than from extensions intercepting and processing content.</p>

So, basically, Google is making it harder to have adblocking extensions that actually block ads. (Thanks Stormyparis for the link.)
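The design difference Hill is pointing at — a fixed declarative rule engine versus an open-ended blocking callback — can be caricatured in Python. This is a toy sketch of the two models, not the Chrome API itself (all names here are invented):

```python
# Declarative model (declarativeNetRequest-style): the engine only
# evaluates a fixed rule vocabulary chosen by the platform vendor.
RULES = [
    {"action": "block", "url_substring": "/ads/"},
    {"action": "block", "url_substring": "tracker."},
]

def declarative_decide(url: str) -> bool:
    """True if any rule blocks the URL; logic is limited to what rules can express."""
    return any(r["url_substring"] in url for r in RULES if r["action"] == "block")

# Callback model (webRequest-style): arbitrary extension code runs per
# request, so a blocker can implement any matching algorithm it likes.
def on_before_request(url: str, referrer: str) -> bool:
    """Return True to block. Open-ended: can combine URL, referrer, state…"""
    if "/ads/" in url:
        return True
    # e.g. block requests to a tracking host unless we're on that host itself
    return url.startswith("https://tracker.") and not referrer.startswith("https://tracker.")
```

Hill's complaint is that uBlock Origin's matching lives in the second model; a fixed rule vocabulary can't express it.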
chrome  security  adblocking 
11 weeks ago by charlesarthur
What I learned trying to secure Congressional Campaigns • Idle Words
Maciej Cieglowski spent a lot of last year helping candidates lock down their accounts against hackers:
<p>There are two big areas of sensitive information around a political campaign. Let's call them 'Bucket A' and 'Bucket B'.

Bucket A is the stuff that is campaign-specific and needs to be kept confidential. This includes fundraising numbers and mailing lists, campaign memos on issue positions, research on opponents, strategy documents, media buys, correspondence with the national party, unflattering photos of the candidate and so on. The training materials the Democratic Party provides to campaigns are meant to keep this stuff safe.

Bucket B is what lives in people's personal accounts. This includes every email they've written, their social media history, complete access (via password reset) to all the online services they've signed up for, their chat history, creepy DMs, sexts to minors, plus all the stuff they've forwarded to their personal accounts from the campaign account, the Dropbox folder they keep their passwords in, and so on.

As an attacker, I would be drawn to bucket B. There is nothing interesting in a campaign's financials or strategy. The strategy is always ‘talk about health care’, and the financials have to be disclosed every quarter by law. Everything juicy lives in the personal accounts, and moving laterally between those accounts will eventually give you access to bucket A anyway, because people are terrible at keeping this stuff separate.

Targeting Bucket B means you can also target more people, like the candidate's spouse and family, who the people defending Bucket A consider out of scope.

In our training, we worked off the assumption that the Podesta hacks were a template for what might happen to campaigns, and that securing campaign-adjacent personal accounts was more important than worrying about campaign data.</p>

As ever, he's hilarious, wry, and laser-accurate.
security  hacking  podesta 
12 weeks ago by charlesarthur
Vodafone and EE just killed Huawei's 5G launch in the UK • Android Authority
Scott Scrivens:
<p>Things are going from bad to worse for Huawei. In the wake of the US Government executive order that restricts US companies from doing business with the Chinese tech company, the repercussions are mounting. Huawei and Honor phones could lose Google services and access to future Android updates and HiSilicon's Kirin chips are also under threat. Now, two major UK carriers have dropped Huawei from their 5G launch plans.

BT-owned network EE was the first to announce that it would be pulling Huawei phones from its 5G selection, with the service to be turned on in 16 UK cities this year, starting May 30. Google's enforced decision that could see Huawei devices lose access to the Play Store and Android version updates is the key factor, with an EE spokesperson releasing the following statement:

“We’ve put the Huawei devices on pause, until we have more information. Until we have the information and confidence that ensures our customers will get support for the lifetime of their devices with us then we’ve got the Huawei devices on pause.”

In a further blow, Vodafone has followed suit and will also not sell the Huawei Mate 20 X 5G when its new network goes online on July 3. The UK's third largest mobile operator has said only that the device “is yet to receive the necessary certifications,” but it's likely similar pressures faced by EE were also behind the decision.</p>

It never rains but it absolutely pours for days on end.
huawei  5g  security 
may 2019 by charlesarthur
SensorID: sensor calibration fingerprinting for smartphones • Cambridge Computing Lab
Jiexin Zhang, Alastair Beresford and Ian Sheret:
<p>We have developed a new type of fingerprinting attack, the calibration fingerprinting attack. Our attack uses data gathered from the accelerometer, gyroscope and magnetometer sensors found in smartphones to construct a globally unique fingerprint. Overall, our attack has the following advantages:

• The attack can be launched by any website you visit or any app you use on a vulnerable device without requiring any explicit confirmation or consent from you<br />• The attack takes less than one second to generate a fingerprint<br />• The attack can generate a globally unique fingerprint for iOS devices<br />• The calibration fingerprint never changes, even after a factory reset<br />• The attack provides an effective means to track you as you browse across the web and move between apps on your phone.

Following our disclosure, Apple has patched this vulnerability in iOS 12.2.

…Our approach works by carefully analysing the data from sensors which are accessible without any special permissions to both websites and apps. Our analysis infers the per-device factory calibration data which manufacturers embed into the firmware of the smartphone to compensate for systematic manufacturing errors. This calibration data can then be used as the fingerprint.

We found that the gyroscope and magnetometer on iOS devices are factory calibrated and the calibration data differs from device to device. In addition, we find that the accelerometer of Google Pixel 2 and Pixel 3 can also be fingerprinted by our approach.</p>
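The core idea — that per-device factory calibration constants leak through ordinary sensor readings — can be caricatured in Python. This toy model is my own simplification, not the paper's method: it pretends readings are integer raw counts multiplied by a device-unique gain, which a website can then recover as the greatest common divisor of the samples it observes:

```python
from math import gcd

def observed(raw_counts, gain):
    """Toy sensor: each reading is an integer raw count times a per-device gain."""
    return [c * gain for c in raw_counts]

def recover_gain(samples, resolution=1e-9):
    """Recover the device-specific quantum as the GCD of integerized samples.

    In this toy model the recovered gain IS the fingerprint: it never
    changes, survives a factory reset, and needs no permissions to read.
    """
    g = 0
    for s in samples:
        g = gcd(g, round(s / resolution))
    return g * resolution
```

The published attack is considerably more involved (it works on calibrated multi-axis data), but the privacy consequence is the same: a stable, globally unique constant falls out of data any page can read.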
security  iphone  ios  tracking  surveillance 
may 2019 by charlesarthur
Why WhatsApp will never be secure • Telegram blog
Pavel Durov is one of the authors of Telegram:
<p>Everything on your phone, including photos, emails and texts was accessible by attackers just because <a href="">you had WhatsApp installed</a>.  

This news didn’t surprise me though. Last year WhatsApp had to admit they had a very similar issue – a single video call via WhatsApp was all a hacker needed to <a href="">get access to your phone’s entire data</a>. 

Every time WhatsApp has to fix a critical vulnerability in their app, a new one seems to appear in its place. All of their security issues are conveniently suitable for surveillance, and look and work a lot like backdoors.  

Unlike Telegram, WhatsApp is not open source, so there’s no way for a security researcher to easily check whether there are backdoors in its code. Not only does WhatsApp not publish its code, they do the exact opposite: WhatsApp deliberately obfuscates their apps’ binaries to make sure no one is able to study them thoroughly. 

WhatsApp and its parent company Facebook may even be required to implement backdoors – via secret processes such as the FBI’s gag orders. It’s not easy to run a secure communication app from the US. A week our team spent in the US in 2016 prompted three <a href="">infiltration attempts</a> <a href="">by the FBI</a>. Imagine what 10 years in that environment can bring upon a US-based company. </p>

The open-source argument is probably good. The argument that its flaws are conveniently about surveillance isn't; the general purpose of hacking into apps or phones is always surveillance. And Telegram has its own problems - emanating from its users.
security  whatsapp  hacking  telegram 
may 2019 by charlesarthur
WhatsApp voice calls used to inject Israeli spyware on phones • Financial Times
Mehul Srivastava:
<p>WhatsApp, which is used by 1.5bn people worldwide, discovered in early May that attackers were able to install surveillance software on to both iPhones and Android phones by ringing up targets using the app’s phone call function. 

The malicious code, developed by the secretive Israeli company NSO Group, could be transmitted even if users did not answer their phones, and the calls often disappeared from call logs, said the spyware dealer, who was recently briefed on the WhatsApp hack.

WhatsApp, which is owned by Facebook, is too early into its own investigations of the vulnerability to estimate how many phones were targeted using this method, said a person familiar with the issue.

As late as Sunday, as WhatsApp engineers raced to close the loophole, a UK-based human rights lawyer’s phone was targeted using the same method. </p>

Further reading on this: <a href="">the CVE details about which platforms the WhatsApp vulnerability exists on</a> (all of them, including Tizen, because the weakness is in the WhatsApp VOIP stack).

Iyad El-Baghdadi's <a href="">press conference transcript about being targeted by the Saudis using this attack</a>.

A story from May 7, in the Guardian, about <a href="">how the CIA and others warned El-Baghdadi he was being targeted</a>.

<a href="">Amnesty International's support for legal action in Israel to suspend NSO Group's export licence</a>, which would stop it selling software to governments which target human rights defenders.
security  hacking  nso  israel  whatsapp 
may 2019 by charlesarthur
Israel’s NSO: the business of spying on your iPhone • Financial Times
Mehul Srivastava and Robert Smith:
<p>At an investor presentation in London in April, NSO bragged that the typical security patches from Apple did not address the “weaknesses exploited by Pegasus”, according to an unimpressed potential investor. Despite the annual software updates unveiled by companies such as Apple, NSO had a “proven record” of identifying new weaknesses, the company representative told attendees.

NSO’s pitch has been a runaway success — allowing governments to buy off the shelf the sort of software that was once thought to be restricted to only the most sophisticated spy agencies, such as GCHQ in the UK and the National Security Agency in America.

The sale of such powerful and controversial technologies also gives Israel an important diplomatic calling card. Through Pegasus, Israel has acquired a major presence — official or not — in the deeply classified war rooms of unlikely partners, including, researchers say, Gulf states such as Saudi Arabia and the United Arab Emirates. Although both countries officially reject the existence of the Jewish state, they now find themselves the subject of a charm offensive by Prime Minister Benjamin Netanyahu that mixes a shared hostility to Iran with intelligence knowhow.

The Israeli government has never talked publicly about its relationship with NSO. Shortly after he stepped down as defence minister in November, Avigdor Lieberman, who had responsibility for regulating NSO’s sales, said: “I am not sure now is the right time to discuss this . . . I think that I have a responsibility for the security of our state, for future relations.” But he added: “It is not a secret today that we have contact with all the moderate Arab world. I think it is good news.”</p>
security  hacking  nso  iphone 
may 2019 by charlesarthur
America’s favorite door-locking app has a data privacy problem • OneZero
Sage Lazzaro:
<p>Latch is on a mission to digitize the front door, offering apartment entry systems that forgo traditional keys in favor of being able to unlock entries with a smartphone. The company touts convenience — who wants to fiddle with a metal key? — and has a partnership with UPS, so you can get packages delivered inside your lobby without a doorman. But while it may keep homes private and secure, the same can’t be said about tenants’ personal data.

Latch — which has raised $96m in venture capital funding since launching in 2014, including $70m in its Series B last year — offers three products. Two are entry systems for specific units, and one is for lobbies and other common areas like elevators and garages. The company claims one in 10 new apartment buildings in the U.S. is being built with its products, with leading real estate developers like Brookfield and Alliance Residential now installing them across the country.

Experts say they’re concerned about the app’s privacy policy, which allows Latch to collect, store, and share sensitive personally identifiable information (PII) with its partners and, in some cases, landlords. And while Latch is far from the only tech company with questionable data practices, it’s harder for a tenant to decouple from their building’s door than, say, Instagram: If your landlord installs a product like the keyhole-free Latch R, you’re stuck. The issue of tenant consent is currently coming to a head in New York City, where residents of a Manhattan building are suing their landlord in part over privacy concerns related to the app.</p>

Latch wouldn't be interviewed but said that it offers smartphone app unlocking, Bluetooth proximity, or keycard. But the problem is still about controlling where the information goes.
Door  security  privacy 
may 2019 by charlesarthur
The facts about parental control apps • Apple
<p>We recently removed several parental control apps from the App Store, and we did it for a simple reason: they put users’ privacy and security at risk. It’s important to understand why and how this happened.

Over the last year, we became aware that several of these parental control apps were using a highly invasive technology called Mobile Device Management, or MDM. MDM gives a third party control and access over a device and its most sensitive information including user location, app use, email accounts, camera permissions, and browsing history. We started exploring this use of MDM by non-enterprise developers back in early 2017 and updated our guidelines based on that work in mid-2017.

MDM does have legitimate uses. Businesses will sometimes install MDM on enterprise devices to keep better control over proprietary data and hardware. But it is incredibly risky—and a clear violation of App Store policies—for a private, consumer-focused app business to install MDM control over a customer’s device. Beyond the control that the app itself can exert over the user's device, research has shown that MDM profiles could be used by hackers to gain access for malicious purposes.</p>

It's very unusual for Apple to make a public statement like this. It removed 11 of 17 of the most-downloaded screen time/parental control apps, which the <a href="">NY Times suggested</a> was anti-competitive. Apple's saying: not at all.
apple  apps  security  hacking  parental 
april 2019 by charlesarthur
How Nest, designed to keep intruders out of people’s homes, effectively allowed hackers to get in • Washington Post
Reed Albergotti:
<p>Nest, which is part of Google, has been featured on local news stations throughout the country for hacks similar to what the Thomases experienced [where hackers accessed a webcam in a child's room]. And Nest’s recognizable brand name may have made it a bigger target. While Nest’s thermostats are dominant in the market, its connected security cameras trail the market leader, Arlo, according to Jack Narcotta, an analyst at the market research firm Strategy Analytics. Arlo, which spun out of Netgear, has around 30% of the market, he said. Nest is in the top five, he said.

Nik Sathe, vice president of software engineering for Google Home and Nest, said Nest has tried to weigh protecting its less security-savvy customers while taking care not to unduly inconvenience legitimate users to keep out the bad ones. “It’s a balance,” he said. Whatever security Nest uses, Sathe said, needs to avoid “bad outcomes in terms of user experience.”

Google spokeswoman Nicol Addison said Thomas could have avoided being hacked by implementing two-factor authentication, where in addition to a password, the user must enter a six-digit code sent via text message. Thomas said she had activated two-factor authentication; Addison said it had never been activated on the account.</p>

That last bit is worth noting: Thomas probably thought her Nest was protected because it's a Google device and she has 2FA on her Gmail account. That's not the same as her Nest account - but understanding that requires a lot of compartmentalisation.

But 2FA v password isn't "a balance". It's an on-off switch, a Rubicon. 2FA is robust; a password isn't.
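Those six-digit codes are generated server-side when sent by SMS, but the authenticator-app flavour — TOTP, RFC 6238 — is worth seeing, because it shows why a code is so much stronger than a password: it's derived from a shared secret and the current time, so a stolen code expires in seconds. A minimal stdlib implementation:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time: float = None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = int((for_time if for_time is not None else time.time()) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte picks the truncation offset
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238's published test secret, at time 59 seconds after the epoch
print(totp(b"12345678901234567890", for_time=59))  # → "287082"
```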
Google  nest  hacking  security 
april 2019 by charlesarthur
A cartoon intro to DNS over HTTPS • Mozilla Hacks
Lin Clark:
<p>On-path routers can track and spoof DNS because they can see the contents of the DNS requests and responses. But the Internet already has technology for ensuring that on-path routers can’t eavesdrop like this. It’s the encryption that I talked about before.

By using HTTPS to exchange the DNS packets, we ensure that no one can spy on the DNS requests that our users are making.

In addition to providing a trusted resolver which communicates using the DoH protocol, Cloudflare is working with us to make this even more secure… Cloudflare will make the request from one of their own IP addresses near the user. This provides geolocation without tying it to a particular user. In addition to this, we’re looking into how we can enable even better, very fine-grained load balancing in a privacy-sensitive way.

Doing this — removing the irrelevant parts of the domain name and not including your IP address — means that DNS servers have much less data that they can collect about you.</p>
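The mechanism Clark describes — an ordinary DNS query carried inside an HTTPS request — can be sketched with Python's standard library. This builds the RFC 8484 GET form (base64url-encoded wire-format query, no padding); the resolver hostname is Cloudflare's real DoH endpoint, but the packet builder is a minimal sketch handling a single question only:

```python
import base64
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal RFC 1035 DNS query packet: header + one question."""
    # ID=0 (RFC 8484 recommends a fixed ID for HTTP cacheability), RD flag set
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def doh_get_url(resolver: str, name: str) -> str:
    """Encode the query for an RFC 8484 GET request: ?dns=<base64url, unpadded>."""
    q = base64.urlsafe_b64encode(build_dns_query(name)).rstrip(b"=").decode()
    return f"https://{resolver}/dns-query?dns={q}"

url = doh_get_url("cloudflare-dns.com", "example.com")
```

Fetching that URL over HTTPS returns the DNS answer as a normal encrypted web response — which is exactly why on-path routers can no longer read or spoof it.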

Thanks to <a href="">Seth Finkelstein</a>, we have the answer to the puzzle of what <a href="">yesterday's Sunday Times link</a> was about: DNS over HTTPS. It's not clear what Google's timetable is for making this the default in Chrome, but BT is worried enough about it to have highlighted it in a <a href="">discussion paper written earlier in April</a>, which explains it pretty well.

Would have been nice if the Times writeup had explained this. But the journalists didn't seem to understand it themselves.
internet  security  dns 
april 2019 by charlesarthur
A hotspot finder app exposed two million Wi-Fi network passwords • TechCrunch
<p>A popular hotspot finder app for Android exposed the Wi-Fi network passwords for more than two million networks.

The app, downloaded by thousands of users, allowed anyone to search for Wi-Fi networks in their nearby area. The app allows the user to upload Wi-Fi network passwords from their devices to its database for others to use.

That database of more than two million network passwords, however, was left exposed and unprotected, allowing anyone to access and download the contents in bulk.

Sanyam Jain, a security researcher and a member of the GDI Foundation, found the database and reported the findings to TechCrunch.

We spent more than two weeks trying to contact the developer, believed to be based in China, to no avail. Eventually we contacted the host, DigitalOcean, which took down the database within a day of reaching out.</p>

Crazy app: you can upload the SSID and password for any Wi-Fi network. And then it's sitting there on its database, which turns out to be not that secure (predictably enough). Why would you trust some random app from the Play Store, except that it says "free Wi-Fi!!!!"? It's greed blinding people to security.
wifi  security  app 
april 2019 by charlesarthur
Unmasked: an analysis of 10 million passwords • WP Engine
<p>We already knew a few fairly high-profile people were in the Gmail dump. For instance, Mashable noted a month after the list was released that one of its reporters was included (the password listed for him was his Gmail password, but several years old and no longer in use). But we didn’t think Full Contact would turn up so many more.

Within the 78,000 matches we found, there were hundreds of very high-profile people. We’ve selected about 40 of the most notable below. A few very important points:

1. We’ve deliberately not identified anyone by name.<br />2. The company logos represent those organizations the individuals work for now and not necessarily when they were using the password listed for them.<br />3. There’s no way of knowing where the passwords were originally used. They may have been personal Gmail passwords, but it’s more likely that they were used on other sites like File Dropper. It’s therefore possible that many of the weak passwords are not representative of the passwords the individuals currently use at work, or anywhere else for that matter.<br />4. Google confirmed that when the list was published, less than 2% (100,000) of the passwords might have worked with the Gmail addresses they were paired with. And all affected account holders were required to reset their passwords. In other words, the passwords below—while still educational—are no longer in use. Instead, they’ve been replaced by other, hopefully more secure, combinations.

If the passwords hadn’t been reset, however, the situation would be more of a concern. Several studies have shown that a number of us use the same passwords for multiple services. And given that the list below includes a few CEOs, many journalists, and someone very high up at the talent management company of Justin Bieber and Ariana Grande, this dump could have caused a lot of chaos. Thankfully it didn’t, and now can’t.</p>

It's really shocking how short the "crack time" is for some of these passwords: well under a second.
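For a sense of why those crack times are so short, here's a rough back-of-the-envelope estimator. The simplification is mine: it assumes pure brute force over the smallest covering character sets at an assumed 10 billion guesses per second, whereas real crackers lead with dictionaries and rules, so common passwords fall far faster than this bound suggests:

```python
import string

def naive_crack_seconds(password: str, guesses_per_second: float = 1e10) -> float:
    """Upper-bound time to exhaust the smallest character sets covering the password."""
    space = 0
    if any(c.islower() for c in password):
        space += 26
    if any(c.isupper() for c in password):
        space += 26
    if any(c.isdigit() for c in password):
        space += 10
    if any(c in string.punctuation for c in password):
        space += len(string.punctuation)
    return space ** len(password) / guesses_per_second

# A lowercase 6-letter password is gone in well under a second
print(naive_crack_seconds("monkey"))
```

Length dominates: each extra character multiplies the search space by the alphabet size, which is why a long passphrase beats a short "complex" password.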
security  password  computers 
april 2019 by charlesarthur
Facebook stored millions of passwords in plaintext—change yours now • WIRED
Lily Hay Newman:
<p>By now, it’s difficult to summarize all of Facebook’s privacy, misuse, and security missteps in one neat description. It just got even harder: On Thursday, following a <a href="">report by Krebs on Security</a>, Facebook acknowledged a bug in its password management systems that caused hundreds of millions of user passwords for Facebook, Facebook Lite, and Instagram to be stored as plaintext in an internal platform. This means that thousands of Facebook employees could have searched for and found them. Krebs reports that the passwords stretched back to those created in 2012.</p>

Brian Krebs's report was on 21 March. This acknowledgement has come nearly a month later, at the end of the day before Easter Friday, after the release of the Mueller report which of course sucked up huge amounts of media attention.

Did it really take four weeks to acknowledge this?
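For contrast, the standard way to store passwords — so that even employees with database access can't read them — is a salted, deliberately slow hash. A minimal stdlib sketch (the iteration count here is illustrative; production systems tune it, or use bcrypt/scrypt/Argon2):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative; chosen to make each guess expensive

def hash_password(password: str, salt: bytes = None):
    """Return (salt, digest); only these are stored, never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Storing plaintext, as Facebook's internal platform did, means a single over-curious employee or leaked log file exposes every credential directly.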
facebook  security 
april 2019 by charlesarthur
Cracking the code: a toddler, an iPad, and a tweet • The New Yorker
Evan Osnos:
<p>I’d left the iPad in its usual home––an overflowing basket, on a low table, of mail, stamps, power cords, and partially broken earphones. The low table, it turns out, was a mistake. Our son Ollie, age three, gets to use the iPad on airplanes, but rarely at home, a rule he regards as unspeakably cruel. Now and then, when he finds it in his grasp, he’ll enter random numbers into the passcode screen, until a parent lifts the device up and out of his tiny hands, at which point he rendeth his garments and lieth on the earth.

The iPad was not in the basket. Ollie, it turns out, had got hold of it and gone to town on the passcode, trying one idea after another, with the fury and focus of Alan Turing trying to beat the Nazis. It’s not clear how many codes Ollie tried, but, by the time he gave up, the screen said “iPad is disabled, try again in 25,536,442 minutes.” That works out to about 48 years. I took a picture of it with my phone, wrote a tweet asking if anyone knew how to fix it, and went downstairs to dinner.</p>

What happens is rather lovely, though also an indication of what modern media life is like.
ipad  security 
april 2019 by charlesarthur
Serious flaws leave WPA3 vulnerable to hacks that steal Wi-Fi passwords • Ars Technica
Dan Goodin:
<p>the current WPA2 version (in use since the mid 2000s) has suffered a crippling design flaw that has been known for more than a decade: the four-way handshake—a cryptographic process WPA2 uses to validate computers, phones, and tablets to an access point and vice versa—contains a hash of the network password. Anyone within range of a device connecting to the network can record this handshake. Short passwords or those that aren’t random are then trivial to crack in a matter of seconds…

…A research paper titled Dragonblood: A Security Analysis of WPA3’s SAE Handshake disclosed several vulnerabilities in WPA3 that open users to many of the same attacks that threatened WPA2 users. The researchers warned that some of the flaws are likely to persist for years, particularly in lower-cost devices. They also criticized the WPA3 specification as a whole and the process that led to its formalization by the Wi-Fi Alliance industry group.

“In light of our presented attacks, we believe that WPA3 does not meet the standards of a modern security protocol,” authors Mathy Vanhoef of New York University, Abu Dhabi, and Eyal Ronen of Tel Aviv University and KU Leuven wrote. “Moreover, we believe that our attacks could have been avoided if the Wi-Fi Alliance created the WPA3 certification in a more open manner.”</p>

Amazing: the Wi-Fi Alliance has screwed the pooch on the security of Wi-Fi since before Wi-Fi was a standard: as I wrote in my book, a security researcher pointed out that WEP (the first Wi-Fi security method) was trivial to crack before it was standardised. Some people just don't learn.
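The offline attack Goodin describes can be sketched in Python. WPA2-PSK derives its pairwise master key as PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes) — that part is the 802.11i standard. The simplification in this sketch is that a real attacker verifies each guess against the recorded handshake's MIC rather than comparing PMKs directly:

```python
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    """WPA2-PSK pairwise master key, per IEEE 802.11i."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

def dictionary_attack(target_pmk: bytes, ssid: str, candidates):
    """Recorded-handshake attack, simplified: try each candidate offline.

    No further contact with the access point is needed — which is why a
    short or non-random passphrase 'is trivial to crack in a matter of
    seconds' once the handshake has been captured.
    """
    for guess in candidates:
        if wpa2_pmk(guess, ssid) == target_pmk:
            return guess
    return None
```

Note the SSID is the PBKDF2 salt, so precomputed ("rainbow") tables only work per network name — another reason not to run a default SSID.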
security  wifi  encryption 
april 2019 by charlesarthur
A powerful spyware app now targets iPhone owners • TechCrunch
Zack Whittaker:
<p>Security researchers have discovered a powerful surveillance app first designed for Android devices can now target victims with iPhones.

The spy app, found by researchers at mobile security firm Lookout, said its developer abused their Apple-issued enterprise certificates to bypass the tech giant’s app store to infect unsuspecting victims.

The disguised carrier assistance app once installed can silently grab a victim’s contacts, audio recordings, photos, videos and other device information — including their real-time location data. It can be remotely triggered to listen in on people’s conversations, the researchers found. Although there was no data to show who might have been targeted, the researchers noted that the malicious app was served from fake sites purporting to be cell carriers in Italy and Turkmenistan.

Researchers linked the app to the makers of a previously discovered Android app, developed by the same Italian surveillance app maker Connexxa, known to be in use by the Italian authorities.</p>

What's not clear is whether the app could grab those contacts, photos etc without the user's permission, or whether iOS's permissions structure is robust against that threat. Of course the social engineering side - "this app needs to access…" - can still work.
iphone  malware  hacking  security 
april 2019 by charlesarthur
Researchers find Google Play store apps were actually government malware • Motherboard
Lorenzo Franceschi-Bicchierai and Riccardo Coluccini:
<p>Hackers working for a surveillance company infected hundreds of people with several malicious Android apps that were hosted on the official Google Play Store for months, Motherboard has learned.

In the past, both government hackers and those working for criminal organizations have uploaded malicious apps to the Play Store. This new case once again highlights the limits of Google’s filters that are intended to prevent malware from slipping onto the Play Store. In this case, more than 20 malicious apps went unnoticed by Google over the course of roughly two years.

Motherboard has also learned of a new kind of Android malware on the Google Play store that was sold to the Italian government by a company that sells surveillance cameras but was not known to produce malware until now. Experts told Motherboard the operation may have ensnared innocent victims as the spyware appears to have been faulty and poorly targeted. Legal and law enforcement experts told Motherboard the spyware could be illegal.</p>

Italy's government subsequently shut down the malware infrastructure and investigated the company behind the spyware.
security  google  malware 
april 2019 by charlesarthur
Microsoft, Facebook, trust and privacy • Benedict Evans
Evans finds strong parallels, 25-odd years apart:
<p>much like the [creators of the] Microsoft macro viruses, the ‘bad actors’ on Facebook did things that were in the manual. They didn’t prise open a locked window at the back of the building - they knocked on the front door and walked in. They did things that you were supposed to be able to do, but combined them in an order and with malign intent that hadn’t really been anticipated.

It’s also interesting to compare the public discussion of Microsoft and of Facebook before these events. In the 1990s, Microsoft was the ‘evil empire’, and a lot of the narrative within tech focused on how it should be more open, make it easier for people to develop software that worked with the Office monopoly, and make it easier to move information in and out of its products. Microsoft was ‘evil’ if it did anything to make life harder for developers. Unfortunately, whatever you thought of this narrative, it pointed in the wrong direction when it came to this use case. Here, Microsoft was too open, not too closed.

Equally, in the last 10 years many people have argued that Facebook is too much of a ‘walled garden’ - that it is too hard to get your information out and too hard for researchers to pull information from across the platform. People have argued that Facebook was too restrictive on how third party developers could use the platform. And people have objected to Facebook's attempts to enforce the single real identities of accounts. As for Microsoft, there may well have been justice in all of these arguments, but also as for Microsoft, they pointed in the wrong direction when it came to this particular scenario. For the Internet Research Agency, it was too easy to develop for Facebook, too easy to get data out, and too easy to change your identity. The walled garden wasn’t walled enough. </p>
security  facebook  microsoft  privacy 
april 2019 by charlesarthur
Laptops to stay in bags as TSA brings new technology to airports • Bloomberg Government
<p>Air passengers at a growing number of US airports will no longer need to remove electronics, liquids, and other items from their carry-on luggage at security checkpoints as the Transportation Security Administration rolls out new technology.

The TSA took a major step in a broader plan to revamp its overall screening process with faster, more advanced technology when it signed a contract Thursday for hundreds of new carry-on baggage screening machines, Administrator David Pekoske said on a press call Friday. The agency has tested the new technology at more than a dozen airports since 2017, along with the relaxed protocols that allow passengers to leave items such as laptops and toiletries inside their luggage.

The rollout of the computed tomography, or CT, machines will begin this summer, Pekoske said. The $97m contract will buy 300 machines, but the list of airports receiving them has yet to be made final, Pekoske said.

The technology creates 3-D images of bags’ contents and will eventually be able to detect items automatically that the TSA now asks passengers to remove, he said.

“It’s not a little bit better, it’s a lot better,” Pekoske said of the technology.</p>

This is going to be introduced over the next eight years - so it's going to be "do I need to..?" all over the place. By the time it's everywhere, we'll only notice the places where it's slow.
government  security  airports 
april 2019 by charlesarthur
Asus was warned of hacking risks months ago, thanks to leaky passwords • TechCrunch
Zack Whittaker:
<p>A security researcher warned Asus two months ago that employees were improperly publishing passwords in their GitHub repositories that could be used to access the company’s corporate network.

One password, found in an employee repo on the code-sharing site, allowed the researcher to access an email account used by internal developers and engineers to share nightly builds of apps, drivers and tools to computer owners. The repo in question was owned by an Asus engineer who left the email account’s passwords publicly exposed for at least a year. The repo has since been wiped clean, though the GitHub account still exists.

“It was a daily release mailbox where automated builds were sent,” said the researcher, who goes by the online handle SchizoDuckie, in a message to TechCrunch. Emails in the mailbox contained the exact internal network path where drivers and files were stored…

…The researcher’s findings would not have stopped the hackers who targeted Asus’ software update tool with a backdoor, revealed this week, but reveals a glaring security lapse that could have put the company at risk from similar or other attacks. Security firm Kaspersky warned Asus on January 31 — just a day before the researcher’s own disclosure on February 1 — that hackers had installed a backdoor in the company’s Asus Live Update app. </p>

That's two strikes against Asus; not looking good. Security is hard, especially when you do it badly.
asus  github  hacking  security 
march 2019 by charlesarthur
Damning Huawei security report: the top 10 key takeaways • Computer Business Review
Ed Targett:
<p>These are Computer Business Review’s Top 10 takeaways from the <a href="">Huawei security report</a> [pdf].

1: Huawei’s build processes are dangerously poor<br />Huawei’s underlying build process provides “no end-to-end integrity, no good configuration management, no lifecycle management of software components across versions, use of deprecated and out of support tool chains (some of which are non-deterministic) and poor hygiene in the build environments” HCSEC said.

2: Security officials don’t blame Beijing<br />The National Cyber Security Centre (NCSC) which oversees HCSEC, said it “does not believe that the defects identified are a result of Chinese state interference.”

3: Pledges of a $2bn overhaul mean nothing, yet…<br />Huawei promises to transform its software engineering process through the investment of $2bn over five years are “currently no more than a proposed initial budget for as yet unspecified activities.” Until there is “evidence of its impact on products being used in UK networks” HCSEC has no confidence it will drive change.

4: The vulnerabilities are bad…<br />Vulnerabilities identified in Huawei equipment include unprotected stack overflows in publicly accessible protocols, protocol robustness errors leading to denial of service, logic errors, cryptographic weaknesses, default credentials and many other basic vulnerability types, HCSEC reported.</p>

Also there: old issues aren't fixed, managing the risk will grow, UK operators may have to replace hardware because of the "significant risk", it's using outdated OSs, and the lack of progress is becoming critical. You wonder if this is new? Read on.
huawei  security  hacking 
march 2019 by charlesarthur
Huawei bungled router security, leaving kit open to botnets, despite alert from ISP years prior • The Register
Gareth Corfield:
<p>Huawei bungled its response to warnings from an ISP's code review team about a security vulnerability common across its home routers – patching only a subset of the devices rather than all of its products that used the flawed firmware.

Years later, those unpatched Huawei gateways, still vulnerable and still in use by broadband subscribers around the world, were caught up in a Mirai-variant botnet that exploited the very same hole flagged up earlier by the ISP's review team.

The Register has seen the ISP's vulnerability assessment given to Huawei in 2013 that explained how a programming blunder in the firmware of its HG523a and HG533 broadband gateways could be exploited by hackers to hijack the devices, and recommended the remote-command execution hole be closed.

Our sources have requested anonymity.

After receiving the security assessment, which was commissioned by a well-known ISP, Huawei told the broadband provider it had fixed the vulnerability, and had rolled out a patch to HG523a and HG533 devices in 2014, our sources said. However, other Huawei gateways in the HG series, used by other internet providers, suffered from the same flaw because they used the same internal software, and remained vulnerable and at risk of attack for years because Huawei did not patch them.

One source described the bug as a "trivially exploitable remote code execution issue in the router."</p>

And exploited it was. Repeatedly. But Huawei would only patch as it was told about exploits, model by model, despite them all using the same firmware.
huawei  security  hacking 
march 2019 by charlesarthur
Cummings demands docs on Kushner's alleged use of WhatsApp for official business • POLITICO
Andrew Desiderio and Kyle Cheney:
<p>House Democrats are raising new concerns about what they say is recently revealed information from Jared Kushner’s attorney indicating that the senior White House aide has been relying on encrypted messaging service WhatsApp and his personal email account to conduct official business.

The revelation came in a Dec. 19 meeting — made public by the House Oversight and Reform Committee for the first time on Thursday — between Rep. Elijah Cummings (D-Md.), Rep. Trey Gowdy, the former chairman of the oversight panel, and Kushner’s lawyer, Abbe Lowell.

Cummings, who now leads the Oversight Committee, says in a new letter to White House Counsel Pat Cipollone that Lowell confirmed to the two lawmakers that Kushner “continues to use” WhatsApp to conduct White House business. Cummings also indicated that Lowell told them he was unsure whether Kushner had ever used WhatsApp to transmit classified information.

"That's above my pay grade," Lowell told the lawmakers, per Cummings' letter.

Lowell added, according to Cummings, that Kushner is in compliance with recordkeeping law. Lowell told the lawmakers that Kushner takes screenshots of his messages and forwards them to his White House email in order to comply with records preservation laws, Cummings indicated.

Kushner, whom the president charged with overseeing the administration’s Middle East policies, reportedly has communicated with Saudi Crown Prince Mohammed bin Salman via WhatsApp.</p>

Hmm. Kushner's an utterly talentless ballsack, but I can't see using WhatsApp as bad - especially compared to using email. There's no evidence it has ever been cracked. It's as insecure as your phone login - and you can decide if that's high or medium or low. Governments all over the place get things done via WhatsApp. I'd always recommend it over email, which offers far more targets to break into.
whatsapp  kushner  security  hacking 
march 2019 by charlesarthur
Triton is the world’s most murderous malware, and it’s spreading • MIT Technology Review
Martin Giles:
<p>In a worst-case scenario, the rogue code could have led to the release of toxic hydrogen sulfide gas or caused explosions, putting lives at risk both at the facility and in the surrounding area.

[Julian] Gutmanis recalls that dealing with the malware at the petrochemical plant, which had been restarted after the second incident, was a nerve-racking experience. “We knew that we couldn’t rely on the integrity of the safety systems,” he says. “It was about as bad as it could get.”

In attacking the plant, the hackers crossed a terrifying Rubicon. This was the first time the cybersecurity world had seen code deliberately designed to put lives at risk. Safety instrumented systems aren’t just found in petrochemical plants; they’re also the last line of defence in everything from transportation systems to water treatment facilities to nuclear power stations.

Triton’s discovery raises questions about how the hackers were able to get into these critical systems. It also comes at a time when industrial facilities are embedding connectivity in all kinds of equipment—a phenomenon known as the industrial internet of things. This connectivity lets workers remotely monitor equipment and rapidly gather data so they can make operations more efficient, but it also gives hackers more potential targets.</p>

First spotted late in 2017; origin still unknown.
security  malware  triton 
march 2019 by charlesarthur
Samsung Galaxy S10 face unlock can be fooled by a photo, video, or even your sister • Android Police
Ryne Hager:
<p>Both The Verge and Lewis Hilsenteger (Unbox Therapy) were able to trick the S10's face recognition tech with a video played back on another phone. In the case of the latter, this is explicitly on a device smudged with fingerprints and dust, etc., only a couple of inches away. There should have been plenty of indirect cues there — focus distance, sufficient resolution to see pixel-level details, overlaid static features — to indicate that something might be off, but the S10 paid such details no mind.

Italian tech outlet SmartWorld was able to fool it with a static image, as well.

You may not even need a photo or video to trick the S10's facial recognition tech. Jane Wong, of great social app teardown fame, was able to fool her brother's recently purchased Galaxy S10 with her own face; a mere family resemblance was reportedly enough to confuse it.</p>

Come on. That is shamefully bad. It would be better not to ship something so woeful. Touch ID is more than five years old, Face ID is more than a year old, and Samsung offers this bag of insecurity?
samsung  security  facebook 
march 2019 by charlesarthur
Iranian-backed hackers stole data from major US government contractor • NBC News
Dan De Luce and Courtney Kube:
<p>Iranian-backed hackers have stolen vast amounts of data from a major software company that handles sensitive computer projects for the White House communications agency, the U.S. military, the FBI and many American corporations, a cybersecurity firm told NBC News.

Citrix Systems Inc. came under attack twice, once in December and again Monday, according to Resecurity, which notified the firm and law enforcement authorities.

Employing brute force attacks that guess passwords, the assault was carried out by the Iranian-linked hacking group known as Iridium, which was also behind recent cyberattacks against numerous government agencies, oil and gas companies and other targets, Charles Yoo, Resecurity's president, said.

The hackers extracted at least six terabytes of data and possibly up to 10 terabytes in the assault on Citrix, Yoo said. The attackers gained access to Citrix through several compromised employee accounts, he said.</p>

Successful brute-force attacks? Citrix really needs to rethink its approach to security: password lockouts and/or two-factor authentication would be a start.
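Neither defence is exotic; a lockout is a few lines of state. Here is a minimal sketch of the idea - class name and thresholds are illustrative, not anything Citrix uses:

```python
import time

class LoginThrottle:
    """Lock an account after repeated failed logins - illustrative thresholds."""

    def __init__(self, max_failures: int = 5, lockout_seconds: int = 900):
        self.max_failures = max_failures
        self.lockout_seconds = lockout_seconds
        self.failures = {}  # username -> (failure count, time of last failure)

    def allowed(self, user: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        count, last = self.failures.get(user, (0, 0.0))
        if count >= self.max_failures and now - last < self.lockout_seconds:
            return False  # locked out: refuse even a correct password
        return True

    def record_failure(self, user: str, now: float = None) -> None:
        now = time.time() if now is None else now
        count, _ = self.failures.get(user, (0, 0.0))
        self.failures[user] = (count + 1, now)

    def record_success(self, user: str) -> None:
        self.failures.pop(user, None)  # reset the counter on a good login

throttle = LoginThrottle()
print(throttle.allowed("user@example.com"))  # → True until failures accumulate
```

Against this, guessing passwords at network speed simply stops working: after a handful of misses the attacker waits out the lockout for every further attempt.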
citrix  iran  hacking  security 
march 2019 by charlesarthur
Robert Ou @ BSidesSF on Twitter: "Fun thing I learned today…"
<p>Fun thing I learned today regarding secure passwords: the password "ji32k7au4a83" looks like it'd be decently secure, right? But if you check e.g. HIBP [Have I Been Pwned, which collects hashes of passwords], it's been seen over a hundred times. Challenge: explain why and how this happened and how this password might be guessed</p>

I hardly ever link just to single tweets, but the answer to this one (it's <a href="">here</a>; but it's better to read the thread on Twitter) is just mindblowing - and, once you've seen it, so obvious.
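Without spoiling the answer: you can check any password the same way the thread's participants did, via the Pwned Passwords range API, which is designed so the service never sees the password or even its full hash. A sketch of the k-anonymity scheme (the actual HTTP call to `https://api.pwnedpasswords.com/range/<prefix>` is left to the caller):

```python
import hashlib

def hibp_range_query(password: str) -> tuple:
    """Split a password's SHA-1 hash into the 5-hex-char prefix that is
    sent to the Pwned Passwords range API and the 35-char suffix that
    is matched locally - the server never learns the full hash."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_pwned(password: str, range_response: str) -> bool:
    """range_response is the body returned for the prefix:
    one 'SUFFIX:COUNT' pair per line. Match our suffix locally."""
    _, suffix = hibp_range_query(password)
    return any(line.split(":")[0] == suffix
               for line in range_response.splitlines())

prefix, suffix = hibp_range_query("ji32k7au4a83")
print(prefix, suffix)  # send only the prefix; compare the suffix at home
```

The Password Checkup extension mentioned above uses a more elaborate private-set-intersection protocol, but the privacy goal is the same: breach checking without disclosing the credential.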
security  twitter  language  password 
march 2019 by charlesarthur
Facebook won’t let you opt-out of its phone number ‘look up’ setting • TechCrunch
Zack Whittaker:
<p>Users are complaining that the phone number Facebook hassled them to use to secure their account with two-factor authentication has also been associated with their user profile — which anyone can use to “look up” their profile.

Worse, Facebook doesn’t give you an option to opt out.

Last year, Facebook was forced to admit that after months of pestering its users to switch on two-factor by signing up their phone number, it was also using those phone numbers to target users with ads. But some users are finding out just now that Facebook’s default setting allows everyone — with or without an account — to look up a user profile based off the same phone number previously added to their account.

The recent hubbub began today after a <a href="">tweet</a> by Jeremy Burge blew up, criticizing Facebook’s collection and use of phone numbers, which he likened to “a unique ID that is used to link your identity across every platform on the internet.”</p>

Facebook has handled this badly because it handles anything that gets it more data - especially data tied to you individually - badly: such data is a thing it wants above all others, and it will not relent in its use. Last year, the complaint was that if you use your phone number for 2FA, it pings you - even if you have all "notify me" settings turned off - to say that things are happening on your account.

You can however use a code generator program such as Authy or Google Authenticator for the 2FA part.
facebook  privacy  phonenumber  ethics  security  hacking 
march 2019 by charlesarthur
You do not need blockchain: eight popular use cases and why they do not work • Smartdec
Ivan Ivanitskiy:
<p>1. Supply chain management<br />Let’s say you ordered some goods, and a carrier guarantees to maintain certain transportation conditions, such as keeping your goods cold. A proposed solution is to install a sensor in a truck that will monitor fridge temperature and regularly transmit the data to the blockchain. This way, you can make sure that the promised conditions are met along the entire route.

The problem here is not blockchain, but rather sensor, related. Being part of the physical world, the sensor is easy to fool. For example, a malicious carrier might only cool down a small fridge inside the truck in which they put the sensor, while leaving the goods in the non-refrigerated section of the truck to save costs.

I would describe this problem as: Blockchain is not Internet of Things (IOT).

We will return to this statement a few more times.</p>

Quite a few.
blockchain  security 
february 2019 by charlesarthur
Nest Secure had a secret microphone, can now be a Google Assistant • CSO Online
Ms. Smith:
<p>When announcing that a software update will make Google Assistant available on Nest Guard, Google added, “The Google Assistant on Nest Guard is an opt-in feature, and as the feature becomes available to our users, they’ll receive an email with instructions on how to enable the feature and turn on the microphone in the Nest app. Nest Guard does have one on-device microphone that is not enabled by default.”

Nest Secure owners have been able to use Google Assistant and voice commands, but it previously required a separate Google Assistant device to hear your commands. I suppose it depends upon your outlook whether you are happy or creeped out that your security system secretly had an undocumented microphone capable of doing the listening all along.

Google didn’t really focus on the “surprise there was a microphone hidden in the Nest Guard brain of your Nest Secure” angle, preferring a take on how Google Assistant and Nest Guard can help you out.</p>

This is not something you accidentally include. It's not something you accidentally forget to tell people about either, because your engineers know that it's there, because they're going to enable it in the future: it's on the schedule.

Surprising that a teardown by iFixit et al didn't find this. But it's bad for Google not to tell people, because that's how you undermine trust.
Google  nest  iot  security  microphone 
february 2019 by charlesarthur
Facebook's security team tracks posts, location for 'BOLO' threat list • CNBC
Salvador Rodriguez:
<p>In early 2018, a Facebook user made a public threat on the social network against one of the company's offices in Europe.

Facebook picked up the threat, pulled the user's data and determined he was in the same country as the office he was targeting. The company informed the authorities about the threat and directed its security officers to be on the lookout for the user.

"He made a veiled threat that 'Tomorrow everyone is going to pay' or something to that effect," a former Facebook security employee told CNBC.

The incident is representative of the steps Facebook takes to keep its offices, executives and employees protected, according to more than a dozen former Facebook employees who spoke with CNBC. The company mines its social network for threatening comments, and in some cases uses its products to track the location of people it believes present a credible threat.</p>

"BOLO" is "be on the lookout". There seemed to be a fair amount of pearl-clutching about this online, but it seems reasonable to me: recall that <a href="">a woman critically injured three people in a shooting at YouTube</a> in April 2018, and she had made lots of noise about her anger on social media ahead of time. I'd say it's sensible to protect your employees.
facebook  security 
february 2019 by charlesarthur
Another demonstration of CRS/GDS insecurity • The Practical Nomad blog
Edward Hasbrouck:
<p>Zack Whittaker had a report yesterday for Techcrunch on the latest rediscovery of a continuing vulnerability affecting sensitive personal data in airline reservations that I first reported, both publicly and to the responsible companies, more than 15 years ago: computerized reservations systems and systems that rely on them for data storage and retrieval, including airline check-in Web sites, use a short, insecure, unchangeable, system-assigned, and fundamentally insecure "record locator" as though it were a secure password to control access to passenger name record (PNR) data.

I wrote about these vulnerabilities and reported them to each of the major CRS/GDS companies in 2001, 2002, and 2003, specifically noting their applicability to airline check-in Web sites (among many other Web services). I pointed these vulnerabilities out in a submission to the US Federal Trade Commission in 2009 which was co-signed by several consumer and privacy organizations, in my 2013 testimony as an invited expert witness before the Advisory Committee on Aviation Consumer Protection of the U.S. Department of Transportation, in a complaint which was which finally accepted and docketed by the European Commission in 2017, and in my comments to the European Commission in December 2018 with respect to its current review of the European Union's regulations governing protection of personal data by CRSs.</p>

Ah, so it's not a new thing by any means. That makes it a lot worse. (Thanks, Wendy Grossman.)
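For a sense of scale: assuming a 36-symbol alphabet (real CRS alphabets vary, and often exclude look-alike characters, so the true keyspace is smaller), a six-character record locator carries only about 31 bits of entropy - and the passenger can neither choose it nor change it:

```python
import math

ALPHABET = 36  # A-Z plus 0-9; an optimistic assumption for a real CRS
LENGTH = 6     # standard PNR record locator length

keyspace = ALPHABET ** LENGTH
bits = math.log2(keyspace)
print(f"{keyspace:,} possible locators  (~{bits:.1f} bits)")
# → 2,176,782,336 possible locators  (~31.0 bits)

# A system-assigned value this small, printed on every boarding pass
# and baggage tag, is an identifier - not a secret.
```

Compare a random 12-character password drawn from 62 symbols: roughly 71 bits, about a trillion times harder to enumerate.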
airlines  hacking  security 
february 2019 by charlesarthur
EU orders recall of children's smartwatch over severe privacy concerns • ZDNet
Catalin Cimpanu:
<p>For the first time, EU authorities have announced plans to recall a product from the European market because of a data privacy issue.

The product is Safe-KID-One, a children's smartwatch produced by German electronics vendor ENOX.

According to the company's website, the watch comes with a trove of features, such as a built-in GPS tracker, built-in microphone and speaker, a calling and SMS text function, and a companion Android mobile app that parents can use to keep track and contact their children.

The product is what most parents regularly look for in a modern smartwatch, but in a RAPEX (Rapid Alert System for Non-Food Products) alert published last week and spotted by Dutch news site Tweakers, European authorities ordered a mass recall of all smartwatches from end users citing severe privacy lapses.

"The mobile application accompanying the watch has unencrypted communications with its backend server and the server enables unauthenticated access to data," said authorities in the RAPEX alert. "As a consequence, the data such as location history, phone numbers, serial number can easily be retrieved and changed."

On top of this, authorities also said that "a malicious user can send commands to any watch making it call another number of his choosing, can communicate with the child wearing the device or locate the child through GPS."</p>

But it gets worse: <a href="">the Android app is owned not by Enox, but by a Chinese developer</a>, so the data loops through Chinese servers.
smartwatch  children  privacy  security 
february 2019 by charlesarthur
How machine learning could keep dangerous DNA out of terrorists' hands • Nature
Sara Reardon:
<p>Biologists the world over routinely pay companies to synthesize snippets of DNA for use in the laboratory or clinic. But intelligence experts and scientists alike have worried for years that bioterrorists could hijack such services to build dangerous viruses and toxins — perhaps by making small changes in a genetic sequence to evade security screening without changing the DNA’s function.

Now, the US government is backing efforts that use machine learning to detect whether a DNA sequence encodes part of a dangerous pathogen. Researchers are beginning to make progress towards designing artificial-intelligence-based screening tools, and several groups are presenting early results at the American Society for Microbiology (ASM) Biothreats meeting in Arlington, Virginia, on 31 January. Their findings could lead to a better understanding of how pathogens harm the body, as well as new ways for scientists to link DNA sequences to specific biological functions.</p>

At LAST someone has put together terrorism, DNA and machine learning.
security  genomics  machinelearning 
february 2019 by charlesarthur
The problem with throwing away a smart device • Hackster Blog
Alasdair Allan:
<p>Last week a <a href="">teardown of the LiFX Mini white </a>was published on the Limited Results site, and it shows that this smart lightbulb is anything but smart.

In a very short space of time the teardown established that if you’ve connected the bulb to your Wi-Fi network then your network password will be stored in plain text on the bulb, and can be easily recovered just by downloading the firmware and inspecting it using a hex editor.

In other words, throwing this lightbulb in the trash is effectively the same as taping a note to your front door with your wireless SSID and password written on it. This probably isn’t something you should be comfortable doing.

Worse yet, both the root certificate and RSA private key for the bulb are also present in the firmware in plain text, and the device is completely open—no secure boot, no flash encryption, and with the debug interface fully enabled.

It turns out that this particular LiFX bulb is built around an Espressif ESP32 which, as we know, has a sprawling and fairly mature open source ecosystem. But that also means that the security implemented by LiFX for the bulb was inexplicably poor. Because while the recovery of the password and keys was aided by the mature state of the development environment, the ESP32 also supports both secure boot and flash encryption, and the latter would have provided “at-rest” data encryption and stopped this sort of attack dead in its tracks.</p>
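The "download the firmware and inspect it with a hex editor" step amounts to scanning the image for printable strings. A minimal Python equivalent of the Unix `strings` tool, run here against a mock firmware blob (the field names are invented for illustration - real firmware layouts differ):

```python
import re

def printable_strings(blob: bytes, min_len: int = 6) -> list:
    """Return runs of printable ASCII at least min_len bytes long -
    roughly what the Unix `strings` tool does."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, blob)]

# Mock firmware image: binary padding around unencrypted credentials.
firmware = (b"\x00\x01\x02\x03"
            + b"wifi_ssid=HomeNet\x00"
            + b"\xff" * 8
            + b"wifi_psk=hunter2secret\x00\x01\x02")

for s in printable_strings(firmware):
    print(s)
```

Flash encryption defeats exactly this: the dumped image is ciphertext, so a scan like the one above yields nothing.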
smarthome  hacking  security 
february 2019 by charlesarthur
Teenager and his mom tried to warn Apple of FaceTime bug • WSJ
<p>An Arizona teenager and his mother spent more than a week trying to warn Apple of a bug in its FaceTime video-chat software before news of the glitch—which allows one FaceTime user calling another in a group chat to listen in while the recipient’s Apple device is still ringing—blew up on social media Monday.

In the days following their discovery, the pair posted on Twitter and Facebook , called and faxed Apple, and learned they needed a developer account to report the bug. They eventually traded a few emails, viewed by The Wall Street Journal, with Apple’s security team.

But it wasn’t until word of the bug started spreading more widely on social media that Apple disabled the software feature at the heart of the issue.

Michele Thompson said her 14-year-old son, Grant, discovered the issue Jan. 20. She said it was frustrating trying to get the attention of one of the world’s largest technology companies.

“Short of smoke signals, I was trying every method that someone could use to get a hold of someone at Apple,” said Ms. Thompson, 43, who lives with her son in Tucson…

…Grant, a high-school freshman, was setting up a FaceTime chat with friends ahead of a “Fortnite” videogame-playing session when he stumbled on the bug. Using FaceTime, Mr. Thompson found that as he added new members to his group chat, he could hear audio from other participants, even if they hadn’t answered his request to join the chat.</p>

Apple turned off Group FaceTime once this blew up; that seems to be the core of the fault. Surprising it wasn't found during testing; surprising it wasn't found a great deal earlier after release. Which implies… not that many people have used Group FaceTime.
apple  facetime  security 
january 2019 by charlesarthur
The messy truth about infiltrating computer supply chains • The Intercept
Micah Lee and Henrik Moltke:
<p>while Bloomberg’s story [about a tiny chip on motherboards compromising Apple and Amazon systems] may well be completely (or partly) wrong, the danger of China compromising hardware supply chains is very real, judging from classified intelligence documents. US spy agencies were warned about the threat in stark terms nearly a decade ago and even assessed that China was adept at corrupting the software bundled closest to a computer’s hardware at the factory, threatening some of the US government’s most sensitive machines, according to documents provided by National Security Agency whistleblower Edward Snowden. The documents also detail how the US and its allies have themselves systematically targeted and subverted tech supply chains, with the NSA conducting its own such operations, including in China, in partnership with the CIA and other intelligence agencies. The documents also disclose supply chain operations by German and French intelligence.

What’s clear is that supply chain attacks are a well-established, if underappreciated, method of surveillance — and much work remains to be done to secure computing devices from this type of compromise.

“An increasing number of actors are seeking the capability to target … supply chains and other components of the US information infrastructure,” the intelligence community stated in a secret 2009 report. “Intelligence reporting provides only limited information on efforts to compromise supply chains, in large part because we do not have the access or technology in place necessary for reliable detection of such operations.”</p>

The NSA compromised Cisco routers; that's pretty well known.
security  hacking 
january 2019 by charlesarthur
February 2017: VIZIO to pay $2.2m to FTC, state of New Jersey to settle charges it collected viewing histories on 11 million smart televisions without users’ consent • Federal Trade Commission
<p>VIZIO, Inc., one of the world’s largest manufacturers and sellers of internet-connected “smart” televisions, has agreed to pay $2.2m to settle charges by the Federal Trade Commission and the Office of the New Jersey Attorney General that it installed software on its TVs to collect viewing data on 11 million consumer TVs without consumers’ knowledge or consent.

The stipulated federal court order requires VIZIO to prominently disclose and obtain affirmative express consent for its data collection and sharing practices, and prohibits misrepresentations about the privacy, security, or confidentiality of consumer information they collect. It also requires the company to delete data collected before March 1, 2016, and to implement a comprehensive data privacy program and biennial assessments of that program.</p>
security  vizio 
january 2019 by charlesarthur
Iranian phishers bypass 2FA protections offered by Yahoo Mail and Gmail • Ars Technica
Dan Goodin:
<p>A recent phishing campaign targeting US government officials, activists, and journalists is notable for using a technique that allowed the attackers to bypass two-factor authentication (2FA) protections offered by services such as Gmail and Yahoo Mail, researchers said Thursday. The event underscores the risks of 2FA that relies on one-tap logins or one-time passwords, particularly if the latter are sent in SMS messages to phones.

Attackers working on behalf of the Iranian government collected detailed information on targets and used that knowledge to write spear-phishing emails that were tailored to the targets’ level of operational security, researchers with security firm Certfa Lab <a href="">said in a blog post</a>. The emails contained a hidden image that alerted the attackers in real time when targets viewed the messages. When targets entered passwords into a fake Gmail or Yahoo security page, the attackers would almost simultaneously enter the credentials into a real login page. In the event targets’ accounts were protected by 2FA, the attackers redirected targets to a new page that requested a one-time password [OTP].

“In other words, they check victims’ usernames and passwords in realtime on their own servers, and even if two-factor authentication such as text message, authenticator app or one-tap login are enabled they can trick targets and steal that information too,” Certfa Lab researchers wrote.

In an email, a Certfa representative said company researchers confirmed that the technique successfully breached accounts protected by SMS-based 2fa.</p>

It isn't that hard, when you think about it: if you can get someone to believe they're at a login page (feasible given how easy it is to get a security certificate for a page), you can use the time - about 30 seconds - to replay the OTP. What isn't widely known is that OTPs last longer than the 30 seconds they claim. (Yes, I wrote about this in Cyber Wars.)
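That window falls straight out of how TOTP is defined: the code depends only on the floor of Unix time divided by the step, so every second within a 30-second slot yields the same code, and verifiers typically accept a step of clock drift either side. A minimal RFC 6238 sketch, using the RFC's published test key:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time slot."""
    return hotp(secret, unix_time // step)

SECRET = b"12345678901234567890"  # RFC 6238 test-vector key
# Every second from t=30 to t=59 produces the identical code...
assert totp(SECRET, 31) == totp(SECRET, 59) == "287082"
# ...and a verifier tolerating +/-1 step of drift will accept that code
# anywhere from t=0 to t=89 - ample time for a live phishing relay.
```

Which is why a proxied OTP entered "almost simultaneously" by the attacker, as Certfa describes, sails through: the code is still inside its slot, or the adjacent one.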
security  2fa  email  hacking  iran 
december 2018 by charlesarthur
Lenovo tells Asia-Pacific staff: Work lappy with your unencrypted data on it has been nicked • The Register
Paul Kunert:
<p>A corporate-issued laptop lifted from a Lenovo employee in Singapore contained a cornucopia of unencrypted payroll data on staff based in the Asia Pacific region, The Register can exclusively reveal.

Details of the massive screw-up reached us from Lenovo staffers, who are simply bewildered at the monumental mistake. Lenovo has sent letters of shame to its employees confessing the security snafu.

"We are writing to notify you that Lenovo has learned that one of our Singapore employees recently had the work laptop stolen on 10 September 2018," the letter from Lenovo HR and IT Security, dated 21 November, stated.

"Unfortunately, this laptop contained payroll information, including employee name, monthly salary amounts and bank account numbers for Asia Pacific employees and was not encrypted."

Lenovo employs more than 54,000 staff worldwide, the bulk of whom are in China.

The letter stated there is currently "no indication" that the sensitive employee data has been "used or compromised", and Lenovo said it is working with local police to "recover the stolen device".</p>
lenovo  hacking  security 
december 2018 by charlesarthur
Google+ to shut down in April after new security flaw found • Financial Times
Richard Waters:
<p>Google said it has discovered a new vulnerability in its Google+ social network that could have revealed private data on 52.5m users, just a month after it disclosed an earlier security flaw and announced plans to close down the service.

The new problem was disclosed on Monday, prompting the internet giant to say it will bring forward the date for ending the consumer Google+ service by four months, to April next year.

The company said it had identified the new flaw less than a week after it was introduced, and that it been fixed. There was “no evidence” that any third-party app developers had misused user data as a result of the flaw, it said.

The latest disclosure marks an embarrassing stumble by Google as it tried to plug previous gaps in its privacy protections. It could also hamper its attempts to give Google+ a second life as a collaboration and communication service for workers, after closing down the free consumer version.</p>
google  security 
december 2018 by charlesarthur
Major sites running unauthenticated JavaScript on their payment pages • Terence Eden's Blog
Terence Eden:
<p>A few months ago, British Airways' customers had their credit card details stolen. How was this possible? The best guess goes something like this:
• BA had 3rd party JS on its payment page<br /> • The 3rd party's site was hacked, and the JS was changed.<br />• BA's customers ran the script, which then harvested their credit card details as they were typed in.

This should have been a wake-up call to the industry. Don't load unauthenticated code on your website - and especially not on your payments page.</p>

Deliveroo, Spotify, The Guardian, Fanduel, EasyJet (sorta), and British Airways. Argh.
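The standard mitigation here is Subresource Integrity (SRI): pin each third-party script to a hash, so a tampered copy simply won't execute. A minimal sketch of computing the `integrity` attribute value in Python (the script content is illustrative; in practice you'd hash the vetted copy of the file you audited):

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Compute a Subresource Integrity value (sha384) for a script.

    The result goes in <script src="..." integrity="..."> - if the third
    party later changes the file, browsers refuse to run it.
    """
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Illustrative only: hash the audited copy of the third-party script.
script = b'console.log("payment page");'
print(sri_hash(script))
```

Had BA pinned the third-party script this way, the modified, card-harvesting version would have failed the hash check and been blocked by the browser.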
security  hacking  javascript 
november 2018 by charlesarthur
A leaky database of SMS text messages exposed password resets and two-factor codes • TechCrunch
Zack Whittaker:
<p>A security lapse has exposed a massive database containing tens of millions of text messages, including password reset links, two-factor codes, shipping notifications and more.

The exposed server belongs to Voxox (formerly Telcentris), a San Diego, Calif.-based communications company. The server wasn’t protected with a password, allowing anyone who knew where to look to peek in and snoop on a near-real-time stream of text messages.

For Sébastien Kaul, a Berlin-based security researcher, it didn’t take long to find.

Although Kaul found the exposed server on Shodan, a search engine for publicly available devices and databases, it was also attached to one of Voxox’s own subdomains. Worse, the database — running on Amazon’s Elasticsearch — was configured with a Kibana front-end, making the data within easily readable, browsable and searchable for names, cell numbers and the contents of the text messages themselves.</p>

Everyone gets hacked. Sometimes, they just do it to themselves.
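The reason an exposure like this takes "not long to find" is that an unauthenticated Elasticsearch node answers its REST API to anyone who can reach port 9200. A hedged sketch of what that looks like - the hostname and data are invented, and the network call is replaced with a canned response in Elasticsearch's response shape:

```python
import json

def es_search_url(host: str, query: str, size: int = 10) -> str:
    """Build a plain full-text search URL against Elasticsearch's REST API.
    On an unauthenticated node, an ordinary GET to /_search returns data."""
    return f"http://{host}:9200/_search?q={query}&size={size}"

def extract_hits(body: str) -> list:
    """Pull the stored documents out of an Elasticsearch search response."""
    return [h["_source"] for h in json.loads(body)["hits"]["hits"]]

# Canned response mimicking the API's shape (contents invented).
canned = json.dumps({"hits": {"hits": [
    {"_source": {"to": "+15555550123", "body": "Your reset code is 000000"}},
]}})

print(es_search_url("exposed.example.com", "reset"))
print(extract_hits(canned))
```

A Kibana front-end just wraps this same API in a point-and-click UI, which is why the Voxox data was "browsable and searchable" with no skill required.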
security  hack  sms 
november 2018 by charlesarthur
People who live in smart houses shouldn’t throw parties • Terence Eden's Blog
<p>I have friends. More than one! I also have a home full of smart-gadgets which are controlled by apps.

The two don't mix.

This is yet another complaint about solipsistic app design.

Let's take my Lifx bulbs. I have a friend staying for a few days, and he needs to be able to turn lights on and off. Lifx make this functionally impossible. The available options are...<br />• Give my full email address & password to him. This feels suboptimal.<br />• Allow him on to my main WiFi. Again, suboptimal.

This is why my ISP-provided router has a guest mode.

Bleugh. Neither is a good solution. Luckily I have an Amazon Alexa hooked up to the lights. But because Alexa's "AI" is barely above the level of a speak-n-spell, that's also unsatisfactory.

My guest tried to turn off the hall lights. Only he used the wrong invocation. "Alexa, turn off the landing light" just doesn't cut it. Such AI, much recognition, big data mood.</p>

As he points out, the answer is obvious: guest accounts. "I know it is a cliche - but Silicon Valley geeks who are too anti-social to have friends and family is a right pain in the arse for everyone else." See also his advice to commenters.
smarthome  security 
november 2018 by charlesarthur
Here's why [insert thing here] is not a password killer • Troy Hunt
<p>Despite their respective merits, every one of these [proposed] solutions [to "replace the password"] has a massive shortcoming that severely limits their viability and it's something they simply can't compete with:

Despite its many flaws, the one thing that the humble password has going for it over technically superior alternatives is that everyone understands how to use it. Everyone.

This is where we need to recognise that decisions around things like auth schemes go well beyond technology merits alone. Arguably, the same could be said about any security control and I've made the point many times before that these things need to be looked at from a very balanced viewpoint. There are merits and there are deficiencies and unless you can recognise both (regardless of how much you agree with them), it's going to be hard to arrive at the best outcome…

…Almost a year ago, I travelled to Washington DC and sat in front of a room full of congressmen and congresswomen and <a href="">explained why knowledge-based authentication (KBA) was such a problem in the age of the data breach</a>. I was asked to testify because of my experience in dealing with data breaches, many of which exposed personal data attributes such as people's date of birth. You know, the thing companies ask you for in order to verify that you are who you say you are! We all recognise the flaws in using static KBA (knowledge of something that can't be changed), but just in case the penny hasn't yet dropped, do a find for "dates of birth" on <a href="">the list of pwned websites in Have I Been Pwned</a>. So why do we still use such a clearly fallible means of identity verification? For precisely the same reason we still use the humble password and that's simply because every single person knows how to use it.

This is why passwords aren't going anywhere in the foreseeable future and why [insert thing here] isn't going to kill them. No amount of focusing on how bad passwords are or how many accounts have been breached or what it costs when people can't access their accounts is going to change that.</p>

Essentially, we're stuck with what we started with, because it's so widely used. Though biometrics on phones do offer even less friction, and are increasingly hard to fool.
security  password  usability 
november 2018 by charlesarthur
The CIA's communications suffered a catastrophic compromise • Yahoo News
Zach Dorfman and Jenna McLaughlin:
<p>Though the Iranians didn’t say precisely how they infiltrated the network, two former U.S. intelligence officials said that the Iranians cultivated a double agent who led them to the secret CIA communications system. This online system allowed CIA officers and their sources to communicate remotely in difficult operational environments like China and Iran, where in-person meetings are often dangerous.

A lack of proper vetting of sources may have led to the CIA inadvertently running a double agent, said one former senior official — a consequence of the CIA’s pressing need at the time to develop highly placed agents inside the Islamic Republic. After this betrayal, Israeli intelligence tipped off the CIA that Iran had likely identified some of its assets, said the same former official.

The losses could have stopped there. But U.S. officials believe Iranian intelligence was then able to compromise the covert communications system. At the CIA, there was “shock and awe” about the simplicity of the technique the Iranians used to successfully compromise the system, said one former official.

In fact, the Iranians used Google to identify the website the CIA was using to communicate with agents. Because Google is continuously scraping the internet for information about all the world’s websites, it can function as a tremendous investigative tool — even for counter-espionage purposes. And Google’s search functions allow users to employ advanced operators — like “AND,” “OR,” and other, much more sophisticated ones — that weed out and isolate websites and online data with extreme specificity.

According to the former intelligence official, once the Iranian double agent showed Iranian intelligence the website used to communicate with his or her CIA handlers, they began to scour the internet for websites with similar digital signifiers or components — eventually hitting on the right string of advanced search terms to locate other secret CIA websites. From there, Iranian intelligence tracked who was visiting these sites, and from where, and began to unravel the wider CIA network.</p>

Iran then cooperated with China to identify US agents there, and the compromise later widened to US agents worldwide. A stunning piece of reporting - a long read, but worth it. Because of this, a number of US agents in China were caught and executed; that detail was reported separately a while back.
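The technique the piece describes - scouring the web for sites sharing "similar digital signifiers or components" - is essentially fingerprint clustering. A hypothetical sketch of the idea: extract reusable components from each page and flag pairs whose fingerprints overlap. All the site data below is invented, and real tooling would parse the DOM rather than use a crude regex:

```python
import re

def fingerprint(html: str) -> set:
    """Crude fingerprint: tokens that look like reusable components
    (script and stylesheet filenames, named form fields)."""
    return set(re.findall(r'[\w-]+\.(?:js|css)|name="[\w-]+"', html))

def similarity(a: set, b: set) -> float:
    """Jaccard similarity between two fingerprints."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Invented example pages; two share a telltale component.
site1 = '<script src="chat-widget.js"></script><link href="news.css">'
site2 = '<script src="chat-widget.js"></script><link href="sports.css">'
site3 = '<script src="analytics.js"></script>'

f1, f2, f3 = fingerprint(site1), fingerprint(site2), fingerprint(site3)
print(similarity(f1, f2) > similarity(f1, f3))  # True: the related pair stands out
```

Google's advanced search operators let the Iranians run this kind of clustering without writing any code at all: once one covert site was known, queries for its distinctive components surfaced the siblings.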
google  security  china  cia 
november 2018 by charlesarthur
Signing into Google now requires JavaScript • PCMag UK
Matthew Humphries:
<p>Attempting to sign in with JavaScript disabled in your browser will result in a "Couldn't sign you in" message appearing, suggesting JavaScript either isn't supported by your browser or turned off. The only solution is to turn it back on or use a more modern browser. The good news is, there's plenty of choice with Chrome, Edge, Firefox, Safari, Opera, Vivaldi, and even Internet Explorer offering support and JavaScript turned on by default.

Google doesn't see this demand for JavaScript as being a big problem because according to the search giant only 0.1% of Google Account users turn it off. The internet is becoming increasingly JavaScript-reliant anyway, so it's unlikely that tiny percentage will grow in the future.

There's no details on what Google's risk assessment actually entails, and I don't expect any to be forthcoming. Why would Google publicly share how it's checking the security of a sign-in process? That would only make it a weaker process as the more information an attacker has about how it works, the better the chances of them finding a way to circumvent it.</p>

Not really. Most automated sign-in attempts - especially credential stuffing with stolen username/password lists - are scripted HTTP requests that never execute JavaScript, so requiring it raises the bar considerably. This - plus, I suspect, unrevealed monitoring of keystroke patterns to figure out if there's a human behind the login - would ensure you have to have a person behind the keyboard.

Flip it over. Why would Google enforce the use of something if it doesn't improve security?
google  security  hacking 
november 2018 by charlesarthur
When Trump phones friends, the Chinese and the Russians listen and learn • The New York Times
Matthew Rosenberg and Maggie Haberman:
<p>Mr. Trump’s use of his iPhones was detailed by several current and former officials, who spoke on the condition of anonymity so they could discuss classified intelligence and sensitive security arrangements. The officials said they were doing so not to undermine Mr. Trump, but out of frustration with what they considered the president’s casual approach to electronic security.

American spy agencies, the officials said, had learned that China and Russia were eavesdropping on the president’s cellphone calls from human sources inside foreign governments and intercepting communications between foreign officials…

…The current and former officials said they have also determined that China is seeking to use what it is learning from the calls — how Mr. Trump thinks, what arguments tend to sway him and to whom he is inclined to listen — to keep a trade war with the United States from escalating further. In what amounts to a marriage of lobbying and espionage, the Chinese have pieced together a list of the people with whom Mr. Trump regularly speaks in hopes of using them to influence the president, the officials said.

Among those on the list are Stephen A. Schwarzman, the Blackstone Group chief executive who has endowed a master’s program at Tsinghua University in Beijing, and Steve Wynn, the former Las Vegas casino magnate who used to own a lucrative property in Macau…

…Officials said the president has two official iPhones that have been altered by the National Security Agency to limit their abilities — and vulnerabilities — and a third personal phone that is no different from hundreds of millions of iPhones in use around the world. Mr. Trump keeps the personal phone, White House officials said, because unlike his other two phones, he can store his contacts in it…the calls made from the phones are intercepted as they travel through the cell towers, cables and switches that make up national and international cellphone networks. Calls made from any cellphone — iPhone, Android, an old-school Samsung flip phone — are vulnerable.</p>

So he basically doesn't care. He doesn't think it's important to protect the US's interests, or to weaken its position. Truly, historians will look back on this period with amazement.
security  trump  politics  phone 
october 2018 by charlesarthur
Trivial authentication bypass in libssh leaves servers wide open • Ars Technica
Dan Goodin:
<p>There’s a four-year-old bug in the Secure Shell implementation known as libssh that makes it trivial for just about anyone to gain unfettered administrative control of a vulnerable server. While the authentication-bypass flaw represents a major security hole that should be patched immediately, it wasn’t immediately clear what sites or devices were vulnerable since neither the widely used OpenSSH nor Github’s implementation of libssh was affected…

…A search on Shodan showed 6,351 sites using libssh, but knowing how meaningful the results are is challenging. For one thing, the search probably isn’t exhaustive. And for another, as is the case with GitHub, the use of libssh doesn’t automatically make a site vulnerable.

Rob Graham, who is CEO of the Errata Security firm, said the vulnerability “is a big deal to us but not necessarily a big deal to the readers. It’s fascinating that such a trusted component as SSH now becomes your downfall.”

[A researcher at the security firm NCC, Peter] Winter-Smith agreed. “I suspect this will end up being a nomination for most overhyped bug, since half the people on Twitter seem to worry that it affects OpenSSH and the other half (quite correctly!) worry that GitHub uses libssh, when in fact GitHub isn’t vulnerable.”</p>

The bypass is: when the server asks you to authenticate, you simply tell it you're authenticated. Like that. A four-year-old bug in open source code used all over the place.
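The flaw (CVE-2018-10933) boils down to a message dispatcher that trusted a client-supplied state transition: sending `SSH2_MSG_USERAUTH_SUCCESS`, a message only the server should ever emit, flipped the session to authenticated. A toy model of the broken logic (not libssh's actual code; the message numbers are from RFC 4252):

```python
SSH2_MSG_USERAUTH_REQUEST = 50
SSH2_MSG_USERAUTH_SUCCESS = 52  # should only ever be sent BY the server

class Session:
    def __init__(self):
        self.authenticated = False

    def handle_message_vulnerable(self, msg_type: int, check_password=None):
        # The bug: the dispatcher accepted USERAUTH_SUCCESS from the
        # *client* and marked the session authenticated, skipping auth.
        if msg_type == SSH2_MSG_USERAUTH_SUCCESS:
            self.authenticated = True
        elif msg_type == SSH2_MSG_USERAUTH_REQUEST and check_password:
            self.authenticated = check_password()

    def handle_message_fixed(self, msg_type: int, check_password=None):
        # The fix: only a genuine auth request that passes verification
        # may authenticate; client-sent SUCCESS messages are ignored.
        if msg_type == SSH2_MSG_USERAUTH_REQUEST and check_password:
            self.authenticated = check_password()

s = Session()
s.handle_message_vulnerable(SSH2_MSG_USERAUTH_SUCCESS)
print(s.authenticated)  # True: authenticated without any credentials
```

The fix was simply to stop treating server-only message types as valid input from clients during the authentication phase.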
security  ssh  libssh 
october 2018 by charlesarthur
Hackers are using stolen Apple IDs to swipe cash in China • Bloomberg
<p>Alipay, whose parent also operates the world’s largest money market fund, said on its Weibo blog that it contacted Apple and is working to get to the bottom of the breach. It warned users that’ve linked their Apple identities to any payment services, including Tencent’s WePay, to lower transaction limits to prevent further losses. Tencent said in a separate statement it too had noticed the cyber-heist and reached out to the iPhone maker.

China’s two largest companies both recommended that users of their digital wallets take steps to safeguard their Apple accounts, including by changing passwords. It’s unclear how the attackers may have gotten their hands on the Apple IDs, which are required for iPhone users that buy content such as music from iTunes or the app store. Apple representatives haven’t responded to requests and phone calls seeking comment.

“Since Apple hasn’t resolved this issue, users who’ve linked their Apple ID to any payments method, including Alipay, WePay or credit cards, may be vulnerable to theft,” Alipay said in its blogpost.

Digital payments services have become a tempting target for cyber-thieves as their popularity surges around the world. Ant Financial, which is controlled by billionaire Alibaba co-founder Jack Ma, is estimated to handle more than half of China’s $17 trillion in annual online payments. Formally known as Zhejiang Ant Small & Micro Financial Services Group, it leveraged Alipay’s popularity to expand into everything from asset management to insurance, credit scoring and lending. It serves more than 800 million customers. Tencent’s rival payments offering is a key component of the social media service WeChat, which has a billion-plus users.</p>

Wonder how many of the hacked accounts used two-factor authentication? By the way, do you use it on (check) Facebook, Twitter, Dropbox, Gmail/Hotmail/Yahoo Mail, Amazon?
security  hacking  apple  china 
october 2018 by charlesarthur
The big hack: how China used a tiny chip to infiltrate US companies • Bloomberg
Jordan Robertson and Michael Riley:
<p>To help with due diligence, AWS, which was overseeing the prospective acquisition, hired a third-party company to scrutinize Elemental’s security, according to one person familiar with the process. The first pass uncovered troubling issues, prompting AWS to take a closer look at Elemental’s main product: the expensive servers that customers installed in their networks to handle the video compression. These servers were assembled for Elemental by Super Micro Computer Inc., a San Jose-based company (commonly known as Supermicro) that’s also one of the world’s biggest suppliers of server motherboards, the fiberglass-mounted clusters of chips and capacitors that act as the neurons of data centers large and small. In late spring of 2015, Elemental’s staff boxed up several servers and sent them to Ontario, Canada, for the third-party security company to test, the person says.

Nested on the servers’ motherboards, the testers found a tiny microchip, not much bigger than a grain of rice, that wasn’t part of the boards’ original design. Amazon reported the discovery to US authorities, sending a shudder through the intelligence community. </p>

(The chips, they say, were put there by agents of the Chinese People's Liberation Army to spy on Amazon, Apple and others.)

This story has of course been cannoning around the internet, eliciting various gasps of amazement. Amazon and Apple have <a href="">vehemently denied pretty much every element of the story</a>, but the US government has been silent.

A few possibilities. 1) Apple and Amazon aren't allowed to acknowledge it; it's super-high security.<br />2) didn't happen; it's a ploy by US security to get manufacture brought back to the US because they're worried about security of Chinese manufacture. (It's not just a Trump-era ploy, because the reporters have been talking to their sources for years.)<br />3) everyone's getting overheated - the chips weren't what they're being made out to be, which means it's a version of No.2. Read the denials, though. Wow. Apple put out <a href="">an even more aggressive denial</a>, saying it's not under any confidentiality demands.

One notable opinion is that this torpedoes China's ambitions to supply chips: that nobody will trust them. I'd agree.
amazon  apple  security  china  technology 
october 2018 by charlesarthur
Google tested this security app with activists in Venezuela. Now you can use it too • CNET
Alfred Ng:
<p>When connections aren't secure, attackers can intercept DNS traffic, directing people to pages infected with malware instead, or completely block out online resources. Venezuela's government has been known block access to social media applications and news websites through DNS manipulation, according to <a href="">a study from the Open Observatory of Network Interference</a>.

The practice is widespread, as researchers have found governments in more than 60 countries, including Iran, China and Turkey, using DNS manipulation to censor parts of the internet.

Intra was <a href="">released on the Play Store</a> on Wednesday morning for free, and Jigsaw had been testing its security features among a small group of activists in Venezuela since the beginning of the summer, Henck said.

They wanted to keep its public beta limited, but the app spread through word of mouth in Venezuela, to the point where activists from around the world started using it.

"People found it useful as a tool they could use to get the access that they needed," Henck said.

Intra automatically points your device to Google's public DNS server, but you can change it to other servers, such as Cloudflare's, through the settings. There's not much you need to do with it for your encrypted connection -- the app really has only one button that you tap to turn on.

This encrypted connection to DNS servers comes by default on the upcoming version of Android Pie, but Jigsaw's developers realized that millions of people that don't have the latest updates wouldn't have that same protection. It's important to consider when about 80% of Android's users aren't on the latest version of the mobile operating system.</p>

As long as you're confident the Google Play link is safe… But this is definitely a good thing.
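What Intra does - carrying DNS queries over HTTPS so an on-path censor can't see or rewrite which hostname is being resolved - can be sketched against Google's public JSON resolver API (https://dns.google/resolve). Here the network call is replaced with a canned response in that API's shape, so only the URL construction and parsing are shown:

```python
import json
from urllib.parse import urlencode

def doh_query_url(name: str, record_type: str = "A") -> str:
    """Build a DNS-over-HTTPS query URL for Google's JSON resolver API.
    The lookup travels inside TLS, so it can't be manipulated in transit."""
    return "https://dns.google/resolve?" + urlencode({"name": name, "type": record_type})

def parse_answers(body: str) -> list:
    """Pull the resolved addresses out of the JSON response."""
    return [a["data"] for a in json.loads(body).get("Answer", [])]

# Canned response in the API's shape (address is example.com's well-known one).
canned = '{"Status": 0, "Answer": [{"name": "example.com.", "type": 1, "data": "93.184.216.34"}]}'
print(doh_query_url("example.com"))
print(parse_answers(canned))  # ['93.184.216.34']
```

Classic DNS manipulation works precisely because plain port-53 queries are unencrypted and unauthenticated; moving them inside HTTPS removes that interception point.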
google  intra  dns  security 
october 2018 by charlesarthur
Android and Google Play Security Rewards Programs surpass $3M in payouts • Google Online Security Blog
Jason Woloz and Mayank Jain are on the Android Security & Privacy team:
<p>In the ASR program's third year, we received over 470 qualifying vulnerability reports from researchers and the average pay per researcher jumped by 23%. To date, the ASR program has rewarded researchers with over $3M, paying out roughly $1M per year.

Here are some of the highlights from the Android Security Rewards program's third year:<br /> • There were no payouts for our highest possible reward: a complete remote exploit chain leading to TrustZone or Verified Boot compromise.<br />• 99 individuals contributed one or more fixes.<br />• The ASR program's reward averages were $2,600 per reward and $12,500 per researcher.<br />• Guang Gong received our highest reward amount to date: $105,000 for his submission of a <a href="">remote exploit chain</a>.</p>

That's quite a healthy average payout; some way short of earning a living, but if you were to do this across multiple platforms (Google, Facebook, Twitter, Uber, Apple, Microsoft all have bug bounty programs, as do others) then you could.

The question is, is the value of these exploits as paid by Google greater than their market value?
google  security  bugbounty 
september 2018 by charlesarthur
For Second Time in Three Years, Mobile Spyware Maker mSpy Leaks Millions of Sensitive Records • Krebs on Security
Brian Krebs:
<p>mSpy, the makers of a software-as-a-service product that claims to help more than a million paying customers spy on the mobile devices of their kids and partners, has leaked millions of sensitive records online, including passwords, call logs, text messages, contacts, notes and location data secretly collected from phones running the stealthy spyware.

Less than a week ago, security researcher Nitish Shah directed KrebsOnSecurity to an open database on the Web that allowed anyone to query up-to-the-minute mSpy records for both customer transactions at mSpy’s site and for mobile phone data collected by mSpy’s software. The database required no authentication.

Before it was taken offline sometime in the past 12 hours, the database contained millions of records, including the username, password and private encryption key of each mSpy customer who logged in to the mSpy site or purchased an mSpy license over the past six months. The private key would allow anyone to track and view details of a mobile device running the software, Shah said.</p>

It's like rain on your wedding day, isn't it.
privacy  spyware  hacking  security 
september 2018 by charlesarthur
1,464 Western Australian government officials used ‘Password123’ as their password. Cool, cool • The Washington Post
<p>Somewhere in Western Australia, a government IT employee is probably laughing or crying or pulling their hair out (or maybe all of the above). A <a href="">security audit</a> of the Western Australian government released by the state’s auditor general this week found that 26 percent of its officials had weak, common passwords -- more than 5,000 of the 234,000 passwords across 17 government agencies included the word “password”.


The legions of lazy passwords were exactly what you -- or a thrilled hacker -- would expect: 1,464 people went for “Password123” and 813 used “password1." Nearly 200 individuals used “password” -- maybe they never changed it to begin with?

Almost 13,000 used variations of the date and season, and almost 7,000 included versions of “123.”</p>

The old favourites are the best.
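The sort of check the auditors ran is straightforward to reproduce against a (responsibly obtained) password set: match each credential against a handful of weak patterns and tally the hits. A sketch with invented sample data:

```python
import re
from collections import Counter

WEAK_PATTERNS = {
    "contains 'password'": re.compile(r"password", re.IGNORECASE),
    "ends in '123'":       re.compile(r"123$"),
    "season + year":       re.compile(r"(spring|summer|autumn|winter)\d{2,4}$", re.IGNORECASE),
}

def audit(passwords):
    """Count how many passwords match each weak pattern."""
    counts = Counter()
    for pw in passwords:
        for label, pattern in WEAK_PATTERNS.items():
            if pattern.search(pw):
                counts[label] += 1
    return counts

# Invented sample; the real audit ran over 234,000 agency passwords.
sample = ["Password123", "password1", "Summer2018", "correct horse battery staple"]
print(audit(sample))
```

The real fix, of course, isn't a better audit script but banning known-weak passwords at the point where they're set.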
password  hacking  security 
august 2018 by charlesarthur
Q: Why do keynote speakers keep suggesting that improving security is possible? A: because keynote speakers make bad life decisions and are poor role models • USENIX
James Mickens is a hilarious speaker:
<p>Some people enter the technology industry to build newer, more exciting kinds of technology as quickly as possible. My keynote will savage these people and will burn important professional bridges, likely forcing me to join a monastery or another penance-focused organization. In my keynote, I will explain why the proliferation of ubiquitous technology is good in the same sense that ubiquitous Venus weather would be good, i.e., not good at all. Using case studies involving machine learning and other hastily-executed figments of Silicon Valley’s imagination, I will explain why computer security (and larger notions of ethical computing) are difficult to achieve if developers insist on literally not questioning anything that they do since even brief introspection would reduce the frequency of git commits. At some point, my microphone will be cut off, possibly by hotel management, but possibly by myself, because microphones are technology and we need to reclaim the stark purity that emerges from amplifying our voices using rams’ horns and sheets of papyrus rolled into cone shapes.</p>
security  ethics  comedy 
august 2018 by charlesarthur
Hacker finds hidden 'God mode' on old x86 CPUs • Tom's Hardware
Paul Wagenseil:
<p>The backdoor completely breaks the protection-ring model of operating-system security, in which the OS kernel runs in ring 0, device drivers run in rings 1 and 2, and user applications and interfaces ("userland") run in ring 3, furthest from the kernel and with the least privileges. To put it simply, Domas' God Mode takes you from the outermost to the innermost ring in four bytes.

"We have direct ring 3 to ring 0 hardware privilege escalation," Domas said. "This has never been done."

That's because of the hidden RISC chip, which lives so far down on the bare metal that Domas half-joked that it ought to be thought of as a new, deeper ring of privilege, following the theory that hypervisors and chip-management systems can be considered ring -1 or ring -2.

"This is really ring -4," he said. "It's a secret, co-located core buried alongside the x86 chip. It has unrestricted access to the x86."

The good news is that, as far as Domas knows, this backdoor exists only on VIA C3 Nehemiah chips made in 2003 and used in embedded systems and thin clients. The bad news is that it's entirely possible that such hidden backdoors exist on many other chipsets.

"These black boxes that we're trusting are things that we have no way to look into," he said. "These backdoors probably exist elsewhere."</p>

It's almost certain, isn't it? If it's not the software or the firmware or the hardware, it's the software/firmware/hardware that <em>controls</em> the hardware.
security  hacking  intel  cpu  backdoor  hardware 
august 2018 by charlesarthur
FBI warns of ‘unlimited’ ATM cashout blitz • Krebs on Security
Brian Krebs:
<p>Organized cybercrime gangs that coordinate unlimited attacks typically do so by hacking or phishing their way into a bank or payment card processor. Just prior to executing on ATM cashouts, the intruders will remove many fraud controls at the financial institution, such as maximum ATM withdrawal amounts and any limits on the number of customer ATM transactions daily.

The perpetrators also alter account balances and security measures to make an unlimited amount of money available at the time of the transactions, allowing for large amounts of cash to be quickly removed from the ATM.

“The cyber criminals typically create fraudulent copies of legitimate cards by sending stolen card data to co-conspirators who imprint the data on reusable magnetic strip cards, such as gift cards purchased at retail stores,” the FBI warned. “At a pre-determined time, the co-conspirators withdraw account funds from ATMs using these cards.”

Virtually all ATM cashout operations are launched on weekends, often just after financial institutions begin closing for business on Saturday. Last month, KrebsOnSecurity <a href="">broke a story</a> about an apparent unlimited operation used to extract a total of $2.4m from accounts at the National Bank of Blacksburg in two separate ATM cashouts between May 2016 and January 2017.

In both cases, the attackers managed to phish someone working at the Blacksburg, Virginia-based small bank. From there, the intruders compromised systems the bank used to manage credits and debits to customer accounts.</p>
security  hacking  atm  banks 
august 2018 by charlesarthur
Exclusive: Google tracks your movements, like it or not • Associated Press
Ryan Nakashima:
<p>For the most part, Google is upfront about asking permission to use your location information. An app like Google Maps will remind you to allow access to location if you use it for navigating. If you agree to let it record your location over time, Google Maps will display that history for you in a “timeline” that maps out your daily movements.

Storing your minute-by-minute travels carries privacy risks and has been used by police to determine the location of suspects — such as a warrant that police in Raleigh, North Carolina, served on Google last year to find devices near a murder scene. So the company will let you “pause” a setting called Location History.

Google says that will prevent the company from remembering where you’ve been. Google’s support page on the subject states: “You can turn off Location History at any time. With Location History off, the places you go are no longer stored.”

That isn’t true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking.

For example, Google stores a snapshot of where you are when you merely open its Maps app. Automatic daily weather updates on Android phones pinpoint roughly where you are. And some searches that have nothing to do with location, like “chocolate chip cookies,” or “kids science kits,” pinpoint your precise latitude and longitude — accurate to the square foot — and save it to your Google account.

The privacy issue affects some two billion users of devices that run Google’s Android operating software and hundreds of millions of worldwide iPhone users who rely on Google for maps or search.

Storing location data in violation of a user’s preferences is wrong, said Jonathan Mayer, a Princeton computer scientist and former chief technologist for the Federal Communications Commission’s enforcement bureau. A researcher from Mayer’s lab confirmed the AP’s findings on multiple Android devices; the AP conducted its own tests on several iPhones that found the same behavior.</p>

It's amazing. Location tracking comes up as a topic every two years or so, and it's always Google (and sometimes Facebook); Apple has managed to stay out of it since 2010. And then it fizzles away. Jonathan Mayer's involvement is a recurring theme too: he noted Google hacking Safari's cookies for ad tracking a few years back.
google  location  security  privacy 
august 2018 by charlesarthur
Google: security keys neutralized employee phishing • Krebs on Security
Brian Krebs on Google's requirement for its 85,000 staff:
<p>“We have had no reported or confirmed account takeovers since implementing security keys at Google,” the spokesperson said. “Users might be asked to authenticate using their security key for many different apps/reasons. It all depends on the sensitivity of the app and the risk of the user at that point in time.”

The basic idea behind two-factor authentication (2FA) is that even if thieves manage to phish or steal your password, they still cannot log in to your account unless they also hack or possess that second factor.

The most common forms of 2FA require the user to supplement a password with a one-time code sent to their mobile device via text message or an app. Indeed, prior to 2017 Google employees also relied on one-time codes generated by a mobile app — Google Authenticator.

In contrast, a Security Key implements a form of multi-factor authentication known as Universal 2nd Factor (U2F), which allows the user to complete the login process simply by inserting the USB device and pressing a button on the device. The key works without the need for any special software drivers.

Once a device is enrolled for a specific Web site that supports Security Keys, the user no longer needs to enter their password at that site (unless they try to access the same account from a different device, in which case it will ask the user to insert their key).

U2F is an emerging open source authentication standard, and as such only a handful of high-profile sites currently support it, including Dropbox, Facebook, Github (and of course Google’s various services). Most major password managers also now support U2F, including Dashlane, Keepass and LastPass. Duo Security [full disclosure: an advertiser on this site] also can be set up to work with U2F.</p>
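At its core, U2F is origin-bound challenge-response signing: the site sends a random challenge, the key signs the challenge together with the requesting origin using a private key that never leaves the device, and the server verifies the signature. A deliberately simplified sketch - real U2F uses per-site ECDSA keypairs, not the shared HMAC secret used here for brevity - showing why origin binding defeats phishing:

```python
import hashlib
import hmac
import os

# Simplified model: a shared HMAC secret stands in for the device's
# per-site keypair, but the challenge-response flow is the same shape.
device_secret = os.urandom(32)  # never leaves the security key

def sign(origin: str, challenge: bytes) -> bytes:
    """The key signs the challenge bound to the requesting origin, so a
    phishing site on a different origin obtains an unusable signature."""
    return hmac.new(device_secret, origin.encode() + challenge, hashlib.sha256).digest()

def verify(origin: str, challenge: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(origin, challenge), signature)

challenge = os.urandom(16)
sig = sign("https://accounts.google.com", challenge)
print(verify("https://accounts.google.com", challenge, sig))  # True
print(verify("https://evil.example", challenge, sig))         # False
```

This is the property one-time codes lack: a code typed into a convincing fake site works perfectly for the attacker, whereas a U2F signature minted for the wrong origin is worthless - which is presumably why Google's takeover count dropped to zero.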
google  security  2fa  authentication 
july 2018 by charlesarthur