We’re in the Uncanny Valley of Targeted Advertising

Mark Zuckerberg, Facebook’s founder and CEO, thinks people want targeted advertising. The “overwhelming feedback,” he said multiple times during his congressional testimony, was that people want to see “good and relevant” ads. Why, then, are so many Facebook users, including members of the U.S. Senate and House, so fed up and creeped out by the uncannily on-the-nose ads? Targeted advertising on Facebook has gotten to the point that it’s so “good,” it’s bad—for users, who feel surveilled by the platform, and for Facebook, which is rapidly losing its users’ trust. But there’s a solution, and Facebook must prioritize it: stop collecting data from users without their knowledge or explicit, affirmative consent.

It should never be the user’s responsibility to have to guess what’s happening behind the curtain.

Right now, most users don’t have a clear understanding of all the types of data Facebook collects or how that data is analyzed and used for targeting (or for anything else). While the company has heaps of information about its users to comb through, if you as a user want to know why you’re being targeted for an ad, you’re mostly out of luck. Sure, there’s a “why was I shown this” option on an individual ad, but it generally reveals only bland categories like “Over 18 and living in California”—and to get an even semi-accurate picture of all the ways you can be targeted, you’d have to click through various sections, one at a time, on your “Ad Preferences” page.

Text from Facebook explaining why an ad has been shown to the user.

Even more opaque are categories of targeting called “Lookalike audiences.” Because Facebook has so many users—over 2 billion per month—it can automatically take a list of people supplied by advertisers—such as current customers or people who like a Facebook page—and then do behind-the-scenes magic to create a new audience of similar users to beam ads at.

Facebook does this by identifying “the common qualities” of the people in the uploaded list, such as their related demographic information or interests, and finding people who are similar to (or “look like”) them, to create an all-new list. But those comparisons are made behind the curtain, so it’s impossible to know what data, specifically, Facebook is using to decide you look like another group of users. And to top it off: much of what’s being used for targeting generally isn’t information that users have explicitly shared—it’s information that’s been actively—and silently—taken from them.
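Facebook’s actual lookalike algorithm is not public, but the general idea of “finding common qualities and matching similar people” can be illustrated with a toy sketch. Everything below—the interest vectors, the cosine similarity measure, the threshold—is an assumption for illustration, not Facebook’s method:

```python
from math import sqrt

# Illustrative sketch only: Facebook's actual lookalike-audience
# algorithm is not public. Interest vectors, the similarity measure,
# and the threshold are all assumptions made for this example.

def cosine(a, b):
    """Cosine similarity between two equal-length interest vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def lookalike_audience(seed_vectors, candidates, threshold=0.8):
    """Return candidate user ids whose interests resemble the seed list.

    seed_vectors: list of interest vectors for the advertiser's uploaded list.
    candidates: mapping of user id -> interest vector.
    """
    # Average the seed users into a single "common qualities" profile.
    n = len(seed_vectors)
    dims = len(seed_vectors[0])
    profile = [sum(v[i] for v in seed_vectors) / n for i in range(dims)]
    # Keep every candidate who "looks like" that averaged profile.
    return [uid for uid, vec in candidates.items()
            if cosine(profile, vec) >= threshold]
```

The point of the sketch is the opacity the article describes: the user never sees the profile, the candidate vectors, or the threshold that decided they “look like” someone else.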

Telling the user that targeting data is provided by a third party like Acxiom doesn’t give any useful information about the data itself, instead bringing up more unanswerable questions about how data is collected.

Just as vague is targeting that uses data provided by third-party “data brokers.” In March, Facebook discontinued one aspect of this data sharing, called partner categories, in which data brokers like Acxiom and Experian combine their own massive datasets with Facebook’s to target users. This is the kind of change Facebook has touted to “help improve people’s privacy”—but it won’t have a meaningful impact on our knowledge of how data is collected and used.

As a result, the ads we see on Facebook—and in other places online where behaviors are tracked to target users—creep us out. Whether they’re for shoes that we’ve been considering buying to replace ours, for restaurants we happened to visit once, or even for toys that our children have mentioned, the ads can indicate a knowledge of our private lives that the company has consistently failed to admit to having. Much of that knowledge is supplied by Facebook’s AI, which makes inferences about people—such as their political affiliation and race—that are clearly outside many users’ comfort zones. This AI-based ad targeting is so obscured in its functioning that even Zuckerberg thinks it’s a problem. “Right now, a lot of our AI systems make decisions in ways that people don't really understand,” he told Congress during his testimony. “And I don't think that in 10 or 20 years, in the future that we all want to build, we want to end up with systems that people don't understand how they're making decisions.”

But we don’t have 10 or 20 years. We’ve entered an uncanny valley of opaque algorithms spinning up targeted ads that feel so personal and invasive that members of both the House and the Senate repeated the spreading myth that the company wiretaps its users’ phones. Given the creeped-out feelings they rightfully experience, it’s understandable that users reach conclusions like this. The concern that you’re being surveilled persists, essentially, because you are being surveilled—just not via your microphone. Facebook seems to possess an almost human understanding of us. Like the unease and discomfort people sometimes experience interacting with a not-quite-human-like robot, being targeted with high accuracy by machines, based on private behavioral information that we never actively gave out, feels creepy, uncomfortable, and unsettling.

The trouble isn’t that personalization is itself creepy. When AI is effective, it can produce amazing results that feel personalized in a delightful way—but only when we actively participate in teaching the system what we like and don’t like. AI-generated playlists, movie recommendations, and other algorithm-powered suggestions work to benefit users because the inputs are transparent and based on information we knowingly give those platforms, like songs and television shows we like. AI that feels accurate, transparent, and friendly can bring users out of the uncanny valley to a place where they no longer feel unsettled, but instead, assisted.

But apply a similar level of technological prowess to other parts of our heavily surveilled, AI-infused lives, and we arrive in a world where platforms like Facebook creepily, uncannily show us advertisements for products we only vaguely remember considering, or for people we met just once or merely thought about recently—all because the amount of data being hoovered up and churned through obscure algorithms is completely unknown to us.

Unlike the feeling that a friend put together a music playlist just for us, Facebook’s hyper-personalized advertising—and other AI that presents us with surprising, frighteningly accurate information specifically relevant to us—leaves us feeling surveilled, but not known. Instead of feeling wonder at how accurate the content is, we feel like we’ve been tricked.

To keep us out of the uncanny valley, advertisers and platforms like Facebook must stop compiling data about users without their knowledge or explicit consent. Zuckerberg told Congress multiple times that “an ad-supported service is the most aligned with [Facebook’s] mission of trying to help connect everyone in the world.” As long as Facebook’s business model is built around surveillance and offering advertisers access to users’ private data for targeting purposes, it’s unlikely we’ll escape the discomfort we get when we’re targeted on the site. Steps such as being more transparent about what is collected, though helpful, aren’t enough. Even if users know what Facebook collects and how it’s used, having no way of controlling data collection—and more importantly, no say in the collection in the first place—will still leave us stuck in the uncanny valley.

Even Facebook’s “helpful” features, such as reminding us of birthdays we had forgotten, showing pictures of relatives we’d just been thinking of (as one senator mentioned), or displaying upcoming event information we might be interested in, will continue to occasionally make us feel like someone is watching. We'll only be amazed (and not repulsed) by targeted advertising—and by features like this—if we feel we have a hand in shaping what is targeted at us. But it should never be the user’s responsibility to have to guess what’s happening behind the curtain.

While advertisers must be ethical in how they use tracking and targeting, a more structural change needs to occur. For the sake of the products, platforms, and applications of the present and future, developers must not only be more transparent about what they’re tracking, how they’re using those inputs, and how AI is making inferences about private data. They must also stop collecting data from users without their explicit consent. With transparency, users might be able to make their way out of the uncanny valley—but only to reach an uncanny plateau. Only through explicit affirmative consent—where users not only know but have a hand in deciding the inputs and the algorithms that are used to personalize content and ads—can we enjoy the “future that we all want to build,” as Zuckerberg put it.

Arthur C. Clarke famously said that “any sufficiently advanced technology is indistinguishable from magic”—and we should insist that the magic makes us feel wonder, not revulsion. Otherwise, we may end up stuck on the uncanny plateau, growing increasingly distrustful of AI in general and, instead of enjoying its benefits, fearing its unsettling, not-quite-human understanding.

Congressmembers Raise Doubts About the “Going Dark” Problem

In the wake of a damning report by the DOJ Office of Inspector General (OIG), Congress is asking questions about the FBI’s handling of the locked iPhone in the San Bernardino case and its repeated claims that widespread encryption is leading to a “Going Dark” problem. For years, DOJ and FBI officials have claimed that encryption is thwarting law enforcement and intelligence operations, pointing to large numbers of encrypted phones that the government allegedly cannot access as part of its investigations. In the San Bernardino case specifically, the FBI maintained that only Apple could assist with unlocking the shooter’s phone.

But the OIG report revealed that the Bureau had other resources at its disposal, and on Friday members of the House Judiciary Committee sent a letter to FBI Director Christopher Wray that included several questions to put the FBI’s talking points to the test. Not mincing words, committee members write that they have “concerns that the FBI has not been forthcoming about the extent of the ‘Going Dark’ problem.”

In court filings, testimony to Congress, and in public comments by then-FBI Director James Comey and others, the agency claimed that it had no possible way of accessing the San Bernardino shooter’s iPhone. But the letter, signed by 10 representatives from both parties, notes that the OIG report “undermines statements that the FBI made during the litigation and consistently since then, that only the device manufacturer could provide a solution.” The letter also echoes EFF’s concerns that the FBI saw the litigation as a test case: “Perhaps most disturbingly, statements made by the Chief of the Cryptographic and Electronic Analysis Unit appear to indicate that the FBI was more interested in forcing Apple to comply than getting into the device.”

Now, more than two years after the Apple case, the FBI continues to make similar arguments. Wray recently claimed that the FBI confronted 7,800 phones it could not unlock in 2017 alone. But as the committee letter points out, in light of recent reports about “the availability of unlocking tools developed by third-parties and the OIG report’s findings that the Bureau was uninterested in seeking available third-party options, these statistics appear highly questionable.” For example, a recent Motherboard investigation revealed that law enforcement agencies across the United States have purchased—or have at least shown interest in purchasing—devices developed by a company called Grayshift. The Atlanta-based company sells a device called GrayKey, a roughly 4x4 inch box that has allegedly been used to successfully crack several iPhone models, including the most recent iPhone X.

The letter ends by posing several questions to Wray designed to probe the FBI’s Going Dark talking points—in particular whether it has actually consulted with outside vendors to unlock encrypted phones it says are thwarting its investigations and whether third-party solutions are in any way insufficient for the task.

EFF welcomes this line of questioning from House Judiciary Committee members and we hope members will continue to put pressure on the FBI to back up its rhetoric about encryption with actual facts.

Related Cases: Apple Challenges FBI: All Writs Act Order (CA)

To #DeleteFacebook or Not to #DeleteFacebook? That Is Not the Question

Since the Cambridge Analytica news hit headlines, calls for users to ditch the platform have picked up speed. Whether or not it has a critical impact on the company’s user base or bottom line, the message from #DeleteFacebook is clear: users are fed up.

EFF is not here to tell you whether or not to delete Facebook or any other platform. We are here to hold Facebook accountable no matter who’s using it, and to push it and other tech companies to do better for users.

Users should have better options when they decide where to spend their time and attention online.

The problems that Facebook’s Cambridge Analytica scandal highlight—sweeping data collection, indiscriminate sharing of that data, and manipulative advertising—are also problems with much of the surveillance-based, advertising-powered popular web. And there are no shortcuts to solving those problems.

Users should have better options when they decide where to spend their time and attention online. So rather than asking if people should delete Facebook, we are asking: What privacy protections should users have a right to expect, whether they decide to leave or use a platform like Facebook?

If it makes sense for you to delete Facebook or any other account, then you should have full control over deleting your data from the platform and bringing it with you to another. If you stay on Facebook, then you should be able to expect it to respect your privacy rights.

To Leave

As a social media user, you should have the right to leave a platform that you are not satisfied with. That means you should have the right to delete your information and your entire account. And we mean really delete: not just disabling access, but permanently eliminating your information and account from the service’s servers.

Furthermore, if users decide to leave a platform, they should be able to easily, efficiently, and freely take their uploaded information away and move it to a different one in a usable format. This concept, known as "data portability" or "data liberation," is fundamental to promote competition and ensure that users maintain control over their information even if they sever their relationship with a particular service.

Of course, for this right to be effective, it must be coupled with informed consent and user control, so unscrupulous companies can’t exploit data portability to mislead you and then grab your data for unsavory purposes.

Not To Leave

Deleting Facebook is not a choice that most of its 2 billion users can feasibly make. It’s also not a choice that everyone wants to make, and that’s okay too. Everyone deserves privacy, whether they delete Facebook or stay on it (or never used it in the first place!).

Deleting Facebook is not a choice that most of its users can feasibly make.

For many, the platform is the only way to stay in touch with friends, family, and businesses. It’s sometimes the only way to do business, reach out to customers, and practice a profession that requires access to a large online audience. Facebook also hosts countless communities and interest groups that are simply not available in many users’ cities and areas. Without viable general alternatives, Facebook’s massive user base and associated network effects mean that the costs of leaving it may not outweigh the benefits.

In addition to the right to leave described above, any responsible social media company should ensure users’ privacy rights: the right to informed decision-making, the right to control one’s information, the right to notice, and the right of redress.

Facebook and other companies must respect user privacy by default and by design. If you want to use a platform or service that you enjoy and that adds value to your life, you shouldn't have to leave your privacy rights at the door.

Ethiopia Backslides: the Continuing Harassment of Eskinder Nega

On March 25, bloggers, journalists and activists gathered at a private party in Addis Ababa—the capital of Ethiopia—to celebrate the new freedom of their colleagues. Imprisoned Ethiopian writers and reporters had been released in February under a broad amnesty: some attended the private event, including Eskinder Nega, a blogger and publisher whose detention EFF has been tracking in our Offline series.

But the celebration was interrupted when the authorities raided the event and seized Eskinder, together with Zone 9 bloggers Mahlet Fantahun and Fekadu Mehatemework, online writers Zelalem Workagegnhu and Befiqadu Hailu, and six others, detaining them without charge.

The eleven have now finally been released, after 12 days in custody. Their detention remains a disturbing example of just how far Ethiopian police are willing to go to intimidate critical voices, even in a time of supposed tolerance.

During their detention, the prisoners could be seen through narrow windows in Addis Ababa's Gotera police station, held in tiny stalls crowded with other detainees, in conditions Eskinder described as "inhuman".

...Better to call it jam-packed than imprisoned. About 200 of us are packed in a 5 by 8 meter room divided in three sections. Unable to sit or lay down comfortably, and with limited access to a toilet. Not a single human being deserves this regardless of the crime, let alone us who were captured unjustly. The global community should be aware of such case and use every possible means to bring an end to our suffering immediately.

After a brief Spring of prisoner releases and officially-sanctioned tolerance of anti-government protests and criticism, Ethiopia's autocratic regime appears to be returning to its old ways. A new state of emergency was declared shortly after the resignation of the country's Prime Minister in mid-February. While the government tells the world that it is continuing its policy of re-engagement with its critics, the state of emergency grants unchecked powers to quash dissent, including a wide prohibition on public meetings.

Reporters say that the bloggers were questioned at the party about the display of a traditional Ethiopian flag. The Addis Standard quoted an unnamed politician who attended the event as saying “This has nothing to do with the flag, but everything to do with the idea of these individuals... coming together.”

The authorities cannot continue their hair-trigger surveillance and harassment of those documenting Ethiopia’s chaotic present online. The country’s stability depends on reasonable treatment of its online voices.

Data Privacy Policy Must Empower Users and Innovation

As the details continue to emerge regarding Facebook's failure to protect its users' data from third-party misuse, a growing chorus is calling for new regulations. Mark Zuckerberg will appear in Washington to answer to Congress next week, and we expect lawmakers and others will be asking not only what happened, but what needs to be done to make sure it doesn't happen again.

As recent revelations from Grindr and Under Armour remind us, Facebook is hardly alone in its failure to protect user privacy, and we're glad to see the issue high on the national agenda. At the same time, it’s crucial that we ensure that privacy protections for social media users reinforce, rather than undermine, equally important values like free speech and innovation. We must also be careful not to unintentionally enshrine the current tech powerhouses by making it harder for others to enter those markets. Moreover, we shouldn’t lose sight of the tools we already have for protecting user privacy.

With all of this in mind, here are some guideposts for U.S. users and policymakers looking to figure out what should happen next.

Users Have Rights

Any responsible social media company must ensure users’ privacy rights on its platform and make those rights enforceable. These five principles are a place to start:

Right to Informed Decision-Making

Users have the right to a clear user interface that allows them to make informed choices about who sees their data and how it is used. Any company that gathers data on a user should be prepared to disclose what they’ve collected and with whom they have shared it. Users should never be surprised by a platform’s practices, because the user interface showed them exactly how it would work.

A free and open Internet must be built on respect for the rights of all users. 

Right to Control

Social media platforms must ensure that users retain control over the use and disclosure of their own data, particularly data that can be used to target or identify them. When a service wants to make a secondary use of the data, it must obtain explicit permission from the user. Platforms should also ask their users' permission before making any change that could share new data about users, share users' data with new categories of people, or use that data in a new way.

Above all, data usage should be "opt-in" by default, not "opt-out," meaning that users' data is not collected or shared unless a user has explicitly authorized it. If a social network needs user data to offer a functionality that its users actually want, then it should not have to resort to deception to get them to provide it.

Right to Leave

One of the most basic ways that users can protect their privacy is by leaving a social media platform that fails to protect it. Therefore, a user should have the right to delete data or her entire account. And we mean really delete: not just disabling access but permanently eliminating it from the service's servers.

Furthermore, if users decide to leave a platform, they should be able to easily, efficiently, and freely take their uploaded information away and move it to a different one in a usable format. This concept, known as "data portability" or "data liberation," is fundamental to promote competition and ensure that users maintain control over their information even if they sever their relationship with a particular service. Of course, for this right to be effective, it must be coupled with informed consent and user control, so unscrupulous companies can’t exploit data portability to mislead you and then grab all of your data for unsavory purposes.

Right to Notice

If users’ data has been mishandled or a company has suffered a data breach, users should be notified as soon as possible. While brief delays are sometimes necessary in order to help remedy the harm before it is made public, any such delay should be no longer than strictly necessary.

Right of Redress

Rights are not meaningful if there’s no way to enforce them. Avenues for meaningful legal redress start with (1) clear, public, and unambiguous commitments that, if breached, would subject social media platforms to unfair advertising, competition, or other legal claims with real remedies; and (2) elimination of tricky terms-of-service provisions that make it impossible for a user to ever hold the service accountable in court.

Many companies will say they support some version of all of these rights, but we have little reason to trust them to live up to their promises. So how do we give these rights teeth?

Start with the Tools We Have

We already have some ways to enforce these user rights, which can point the way for us—including the courts and current regulators at the state and federal level—to go further. False advertising laws, consumer protection regulations, and (to a lesser extent) unfair competition rules have all been deployed by private citizens, and ongoing efforts in that area may find a more welcome response from the courts now that the scope of these problems is more widely understood.

The Federal Trade Commission (FTC) has challenged companies with sloppy or fraudulent data practices. The FTC is currently investigating whether Facebook violated a 2011 consent decree on handling of user data. If it has, Facebook is looking at a fine that could genuinely hurt its bottom line. We should all expect the FTC to fulfill its duty to stand in for the rest of us.

But there is even more we could do.

Focus on Empowering Users and Toolmakers

First, policymakers should consider making it easier for users to have their day in court. As we explained in connection with the Equifax breach, too often courts dismiss data breach lawsuits based on a cramped view of what constitutes "harm." These courts mistakenly require actual or imminent loss of money due to the misuse of information that is directly traceable to a single security breach. If the fear caused by an assault can be actionable (which it can), so should the fear caused by the loss of enough personal data for a criminal to take out a mortgage in your name.

There are also worthwhile ideas about future and contingent harms in other consumer protection areas as well as medical malpractice and pollution cases, just to name a few. If the political will is there, both federal and state legislatures can step up and create greater incentives for security and steeper downsides for companies that fail to take the necessary steps to protect our data. These incentives should include a prohibition on waivers in the fine print of terms of service, so that companies can’t trick or force users into giving up their legal rights in advance.

Second, let’s empower the toolmakers. If we want companies to reconsider their surveillance-based business model, we should put mechanisms in place to discourage that model. When a programmer at Facebook makes a tool that allows the company to harvest the personal information of everyone who visits a page with a "Like" button on it, another programmer should be able to write a browser plugin that blocks this button on the pages you visit. But too many platforms impose technical and legal barriers to writing such a program, effectively inhibiting third parties’ ability to give users more control over how they interact with those services. EFF has long raised concerns about the barriers created by overbroad readings of the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and contractual prohibitions on interoperability. Removing those barriers would be good for both privacy and innovation.

Third, the platforms themselves should come up with clear and meaningful standards for portability—that is, the user’s ability to meaningfully leave a platform and take her data with her. This is an area where investors and the broader funding and startup community should have a voice, since so many future innovators depend on an Internet with strong interconnectivity where the right to innovate doesn’t require getting permission from the current tech giants. It’s also an area where standards may be difficult to legislate. Even well-meaning legislators are unlikely to have the technical competence or foresight to craft rules that will be flexible enough to adapt over time but tough enough to provide users with real protection. But fostering competition in this space could be one of the most powerful incentives for the current set of companies to do the right thing, spurring a race to the top for social networks.

Finally, transparency, transparency, transparency. Facebook, Google, and others should allow truly independent researchers to work with, black-box test, and audit their systems. Users should not have to take companies’ word on how data is being collected, stored, and used.

Watch Out for Unintended Effects on Speech and Innovation

As new proposals bubble up, we all need to watch for ways they could backfire.

First, heavy-handed requirements, particularly requirements tied to specific kinds of technology (i.e., tech mandates), could stifle competition and innovation. Used without care, they could actually give even more power to today’s tech giants by ensuring that no new competitor could ever get started.

Second, we need to make sure that transparency and control provisions don’t undermine online speech. For example, any disclosure rules must take care to protect user anonymity. And the right to control your data should not turn into an unfettered right to control what others say about you—as so-called "right to be forgotten" approaches can often become. If true facts, especially facts that could have public importance, have been published by a third party, requiring their removal may mean impinging on others’ rights to free speech and access to information. A free and open Internet must be built on respect for the rights of all users. 

Asking the Right Questions

Above all, the guiding question should not be, "What legislation do we need to make sure there is never another Cambridge Analytica?" Rather, we should be asking, "What privacy protections are missing, and how can we fill that gap while respecting other important values?" Once we ask the right question, we can look for answers in existing laws, pressure from users and investors, and focused legislative steps where necessary. We need to be both creative and judicious—and take care that today’s solutions don’t become tomorrow’s unexpected problems.

HTTPS Everywhere Introduces New Feature: Continual Ruleset Updates

Today we're proud to announce the launch of a new version of HTTPS Everywhere, 2018.4.3, which brings with it exciting new features. With this newest update, you'll receive our list of HTTPS-supporting sites more regularly, bundled as a package that is delivered to the extension on a continual basis. This means that your HTTPS-Everywhere-protected browser will have more up-to-date coverage for sites that offer HTTPS, and you'll encounter fewer sites that break due to bugs in our list of supported sites. It also means that in the future, third parties can create their own list of URL redirects for use in the extension. This could be useful, for instance, in the Tor Browser to improve the user experience for .onion URLs. This new version is the same old extension you know and love, now with a cleaner behind-the-scenes process to ensure that it's protecting you better than ever before.

How does it work?

You may be familiar with our popular browser extension, available for Firefox, Chrome, Opera, and the Tor Browser. The idea is simple: whenever a user visits a site that we know offers HTTPS, we ensure that their browser connects to that site with the security of HTTPS rather than insecure HTTP. This means that users will have the best security available, avoiding subtle attacks that can downgrade their connections and compromise their data. But knowing is half the battle. Keeping the list of sites that offer HTTPS updated is an enormous effort, comprising a collaboration between hundreds of contributors to the extension and a handful of active maintainers to craft what are known as HTTPS Everywhere's "rulesets." At the time of writing, there are over 23,000 ruleset files - each containing at least one domain name (or fully qualified domain name, FQDN) that the extension knows supports HTTPS.
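In miniature, each ruleset pairs target hosts with rewrite rules, and the extension applies a rule only when the visited host is covered. The Python sketch below is a simplified model of that matching; the real rulesets are XML files with features (exclusions, capture groups) beyond what's shown, and the entries here are illustrative:

```python
import re

# Toy model of HTTPS Everywhere's rewriting (the real rulesets are XML
# files shipped with the extension; these entries are illustrative).
RULESETS = [
    {
        "targets": ["eff.org", "*.eff.org"],
        "rules": [(r"^http:", "https:")],
    },
]

def host_matches(host, pattern):
    """Match a hostname against a ruleset target, allowing a leading wildcard."""
    if pattern.startswith("*."):
        return host == pattern[2:] or host.endswith(pattern[1:])
    return host == pattern

def rewrite(url):
    """Upgrade a URL to HTTPS if its host is covered by a known ruleset."""
    match = re.match(r"^https?://([^/]+)", url)
    if not match:
        return url
    host = match.group(1)
    for ruleset in RULESETS:
        if any(host_matches(host, t) for t in ruleset["targets"]):
            for pattern, replacement in ruleset["rules"]:
                if re.match(pattern, url):
                    return re.sub(pattern, replacement, url, count=1)
    # No ruleset covers this host: leave the URL alone rather than
    # guessing, which is exactly why the curated list matters.
    return url
```

A URL on a listed host gets upgraded; anything else passes through untouched, which mirrors why the extension never has to attempt a risky HTTPS-then-downgrade dance.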

We've modified the extension to periodically check in with EFF to see if a new list is available.

Why go through all this trouble to maintain a list of sites supporting HTTPS, instead of just defaulting to HTTPS? Because a lot of sites still only offer HTTP. Without knowing that a site supports HTTPS, we'd have to try HTTPS first, and then downgrade your connection if it's not available. And for a network attacker, it's easy to trick the browser into thinking that a site does not offer HTTPS. That's why downgrading connections can be dangerous - you can fall right into an attacker's trap. HTTPS Everywhere forces your browser to use the secure endpoint if it's on our list, thus ensuring that you'll have the highest level of security available for these sites.

Ordinarily, we'll deliver this ruleset list bundled with the extension when you install or update it. But it's a lot of work to release a new version just to deliver a new list of rulesets to you! So we've modified the extension to periodically check in with EFF to see if a new list is available. That way you'll get the newest ruleset list in a timely manner, without having to wait for a new version to be released. In order to verify that these are the authentic EFF rulesets, we've signed them so that your browser can check that they're legitimate, using the Web Crypto API. We've also made it easy for developers and third parties to publish their own rulesets, signed with their own key, and build that into a custom-made edition of HTTPS Everywhere. We've called these "update channels," and the extension is capable of digesting multiple update channels at the same time.

This is just the start

In the future, we plan to build on this feature, making it easy for users to modify the set of update channels they digest in their own HTTPS Everywhere instance. This will entail building out a nicer user experience for adding, editing, and deleting update channels.

The fact is that only a small subset of the ruleset files change in any given period. So we'll also be researching how to safely deliver to your browser only the changes between one edition of the rulesets and the next. This will save you a lot of bandwidth, which is especially important in contexts where your ISP provides a slow or throttled connection.
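
One way to ship only the changes is a simple keyed diff over the ruleset files. A hypothetical sketch of the idea (not EFF's actual delivery format; a real delta would be signed just like a full release):

```python
def ruleset_delta(old, new):
    """Compute a minimal update between two releases.

    Keys are ruleset filenames, values are file contents.
    """
    changed = {name: body for name, body in new.items()
               if old.get(name) != body}
    removed = [name for name in old if name not in new]
    return {"changed": changed, "removed": removed}

def apply_delta(old, delta):
    """Reconstruct the new release from the old one plus the delta."""
    updated = dict(old)
    updated.update(delta["changed"])
    for name in delta["removed"]:
        del updated[name]
    return updated
```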

Today, as always, we aim to better your browsing experience by protecting your data with this latest release. We're excited to bring you these new features, just as we've been glad to keep your browsing safe ever since we launched HTTPS Everywhere in 2010.

We'd like to thank Fastly for providing the bandwidth necessary to deliver our ruleset updates.

If you haven't already, please install and contribute to HTTPS Everywhere, and consider donating to EFF to support our work!

The FBI Could Have Gotten Into the San Bernardino Shooter’s iPhone, But Leadership Didn’t Say That

The Department of Justice’s Office of the Inspector General (OIG) last week released a new report that supports what EFF has long suspected: that the FBI’s legal fight with Apple in 2016 to create backdoor access to a San Bernardino shooter’s iPhone was more focused on creating legal precedent than it was on accessing the one specific device.

The report, called a “special inquiry,” details the FBI’s failure to be completely forthright with Congress, the courts, and the American public. While the OIG report concludes that neither former FBI Director James Comey, nor the FBI officials who submitted sworn statements in court had “testified inaccurately or made false statements” during the roughly month-long saga, it illustrates just how close they came to lying under oath. 

From the outset, we suspected that the FBI’s primary goal in its effort to access an iPhone found in the wake of the December 2015 mass shootings in San Bernardino wasn’t simply to unlock the device at issue. Rather, we believed that the FBI’s intention with the litigation was to obtain legal precedent establishing that it could compel Apple to sabotage its own security mechanisms. Among other disturbing revelations, the new OIG report confirms our suspicion: senior leaders within the FBI were “definitely not happy” when the agency realized that another way to access the contents of the phone had been found through an outside vendor and the legal proceeding against Apple couldn’t continue.

By way of digging into the OIG report, let’s take a look at the timeline of events:

  • December 2, 2015: a shooting in San Bernardino results in the deaths of 14 people; the two shooters are later killed in a shootout with police. The shooters destroy their personal phones but leave a third phone—owned by their employer—untouched.
  • February 9, 2016: Comey testifies that the FBI cannot access the contents of the shooters’ remaining phone.
  • February 16, 2016: the FBI applies for an order compelling Apple to develop a new method to unlock the phone, which Magistrate Judge Pym grants the same day.

    As part of that application, the FBI Supervisory Special Agent in charge of the investigation of the phone swears under oath that the FBI had “explored other means of obtaining [access] . . . and we have been unable to identify any other methods feasible for gaining access” other than compelling Apple to create a custom, cryptographically signed version of iOS to bypass a key security feature and allow the FBI to access the device.

    At the same time, according to the OIG report, the chief of the FBI’s Remote Operations Unit (the FBI’s elite hacking team, called ROU) knows “that one of the vendors that he worked closely with was almost 90 percent of the way toward a solution that the vendor had been working on for many months.”

Let’s briefly step out of the timeline to note the discrepancies between what the FBI was saying in early 2016 and what it actually knew. How is it that senior FBI officials testified that the agency had no capability to access the contents of the locked device when the agency’s own premier hacking team knew that capability was within reach? Because, according to the OIG report, FBI leadership didn’t ask the ROU for help until after testifying that the FBI’s technicians knew of no way in.

The OIG report concluded that Director Comey didn’t know that his testimony was false at the time he gave it. But it was false, and technical staff in FBI’s own ROU knew it was false.

Now, back to the timeline:

  • March 1, 2016: Director Comey again testifies that the FBI has been unable to access the contents of the phone without Apple’s help. Comey notes that before the government applied for the All Writs Act order on February 11, there were “a whole lot of conversations going on in that interim with companies, with other parts of the government, with other resources to figure out if there was a way to do it short of having to go to court.”

    In response to a question from Rep. Darrell Issa whether Comey was “testifying today that you and/or contractors that you employ could not achieve this without demanding an unwilling partner do it,” Comey replies “Correct.”

  • March 16, 2016: An outside vendor for the FBI completes its work on an exploit for the model in question, building on the work that, as of February 16, the ROU knew to be 90% complete.

    The head of the FBI’s Cryptologic and Electronics Analysis Unit (CEAU)—the unit whose initial inability to access the phone led to the FBI’s sworn statements that the Bureau knew of no method to do so—is pissed that others within the FBI are even trying to get into the phone without Apple’s help. In the words of the OIG report, “he expressed disappointment that the ROU Chief had engaged an outside vendor to assist with the Farook iPhone, asking the ROU Chief, ‘Why did you do that for?’”

    Why is the CEAU Chief angry? Because it means that the legal battle is over and the FBI won’t be able to get the legal precedent against Apple that it was looking for. Again, the OIG report confirms our suspicions: “the CEAU Chief ‘was definitely not happy’ that the legal proceeding against Apple could no longer go forward” after the ROU’s vendor succeeded.

  • March 20, 2016: The FBI’s outside vendor demonstrates the exploit for senior FBI leadership.
  • March 21, 2016: On the eve of the scheduled hearing, the Department of Justice notifies the court in California that, despite previous statements under oath that there were no “other methods feasible for gaining access,” it has now somehow found a way.

    In response to the FBI’s eleventh-hour revelation, the court cancels the hearing and the legal battle between the FBI and Apple is over for now.

The OIG report comes on the heels of a report by the New York Times that the Department of Justice is renewing its decades-long fight for anti-encryption legislation. According to the Times, DOJ officials are “convinced that mechanisms allowing access to [encrypted] data can be engineered without intolerably weakening the devices’ security against hacking.”

That’s a bold claim, given that for years the consensus in the technical community has been exactly the opposite. In the 1990s, experts exposed serious flaws in proposed systems—including the Clipper Chip—designed to give law enforcement access to encrypted data without compromising security. And, as the authors of the 2015 “Keys Under Doormats” paper put it, “today’s more complex, global information infrastructure” presents “far more grave security risks” for these approaches.

The Department’s blind faith in technologists’ ability to build a secure backdoor on encrypted phones is inspired by presentations by several security researchers as part of the recent National Academy of Sciences (NAS) report on encryption. But the NAS wrote that these proposals were not presented in “sufficient detail for a technical evaluation,” so they have yet to be rigorously tested by other security experts, let alone pass muster. Scientific and technical consensus is always open to challenge, but we—and the DOJ—should not abandon the longstanding view, backed by evidence, that deploying widespread special access mechanisms presents insurmountable technical and practical challenges.

The Times article also suggests that even as DOJ officials tout the possibility of secure backdoors, they’re simultaneously lowering the bar, arguing that a solution need not be “foolproof” if it allows the government to catch “ordinary, less-savvy criminals.” The problem with that statement is at least two-fold:

First, according to the FBI, it is the savvy criminals (and terrorists) who present the biggest risk of using encryption to evade detection. By definition, less-savvy criminals will be easier for law enforcement to catch without guaranteed access to encrypted devices. Why is it acceptable to the FBI that the solutions they demand are necessarily incapable of stopping the very harms they claim they most need backdoors in order to stop?

Second, the history in this area demonstrates that “not foolproof” often actually means “completely insecure.” That’s because any system that is designed to allow law enforcement agencies all across the country to expeditiously decrypt devices pursuant to court order will be enormously complex, raising the likelihood of serious flaws in implementation. And, regardless of who holds them, the keys used to decrypt these devices will need to be used frequently, making it even harder to defend them from bad actors. These and other technical challenges mean that the risks of actually deploying an imperfect exceptional access mechanism to millions of phones are unacceptably high. And of course, any system implemented in the US will be demanded by repressive governments around the world.

The DOJ’s myopic focus on backdooring phones at the expense of the devices’ security is especially vexing in light of reports that law enforcement agencies are increasingly able to use commercial unlocking tools to break into essentially any device on the market. And if this is the status quo without mandated backdoor access and as vendors like Apple take steps to harden their devices against hacking, imagine how vulnerable devices could be with a legal mandate. The FBI likes to paint encryption in an apocalyptic light, suggesting that the technology drastically undermines the Bureau’s ability to do its job, but the evidence from the Apple fight and elsewhere is far less stark.

Related Cases: Apple Challenges FBI: All Writs Act Order (CA)

Beyond Implementation: Policy Considerations for Secure Messengers

One of EFF’s strengths is that we bring together technologists, lawyers, activists, and policy wonks. And we’ve been around long enough to know that while good technology is necessary for success, it is rarely sufficient. Good policy and people who will adhere to it are also crucial. People write and maintain code, people run the servers that messaging platforms depend on, and people interface with governments and respond to pressure from them.

We could never get on board with a tool—even one that made solid technical choices—unless it were developed and had its infrastructure maintained by a trustworthy group with a history of responsible stewardship of the tool. Trusting the underlying technology isn’t enough; we have to be able to trust the people and organizations behind it. Even open source tools that function in a distributed manner, rather than using a central server, have to be backed up by trustworthy developers who address technical problems in a timely manner.

Here are a few of the factors beyond technical implementation that we consider for any messenger:

  • Developers should have a solid history of responding to technical problems with the platform. This one is critical. Developers must not only patch known issues in a timely manner, they must also respond to particularly sensitive users’ issues particularly quickly. For instance, it was reported that in 2016, Telegram failed to protect its Iranian users in a timely manner in response to state-sponsored attacks. That history gives us more than a little pause.
  • Developers should have a solid history of responding to legal threats to their platform. This is also critical. Developers must not only protect their users from technical threats, but from legal threats as well. Two positive examples come readily to mind: Apple and Open Whisper Systems, the developers of iMessage and Signal respectively. Apple famously stood up for the security of their users in 2016 in response to an FBI call for a backdoor in their iPhone device encryption, and Open Whisper Systems successfully fought back against a grand jury subpoena gag order.
  • Developers should have a realistic and transparent attitude toward government and law enforcement. This is part of the criteria by which we evaluate companies in our annual Who Has Your Back? report. We’re strongly of the opinion that developers can’t just stick their heads in the sand and hope that the cops never show up. They have to have a plan, law enforcement guidelines, and a transparency report. Any tool lacking those is asking for trouble.

We discuss these concerns here to highlight the undeniable fact that developing and maintaining secure tools is a team sport. It’s not enough that an encrypted messaging app use reliable and trusted encryption primitives. It’s not enough that the tool implement those primitives well, wrap them in a good UX, and keep the product maintained. Beyond all that, the team responsible for the app must be versed in law and technology policy, be available and responsive to their users’ real-world threats, and make a real effort to address the security trade-offs their products present.

This post is part of a series on secure messaging.
Find the full series here.

Building A Secure Messenger

Given different people’s and communities’ security needs, it’s hard to arrive at a consensus on what a “secure” messenger must provide. In this post, we discuss various options for developers to consider when working towards the goal of improving a messenger’s security. A messenger that’s perfectly secure for every single person is unlikely to exist, but there are still steps that developers can take to work towards that goal.

Messengers in the real world reflect a series of compromises by their creators. Technologists often think of those compromises in terms of what encryption algorithms or protocols are chosen. But the choices that undermine security in practice often lie far away from the encryption engine.

Encryption is the Easy Part

The most basic building block towards a secure messenger is end-to-end encryption. End-to-end encryption means that a messenger must encrypt messages in a way that nobody besides the intended recipient(s)—not messaging service providers, government authorities, or third-party hackers—can read them.

The actual encryption is not the hard part. Most tools use very similar crypto primitives (e.g. AES, SHA2/3, and ECDH). The differences in algorithm choice rarely matter. Apps have evolved to use very similar encryption protocols (e.g. Signal's Double Ratchet). We expect that any application making a good-faith effort to provide this functionality will have published a documented security design.
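
The key agreement underlying end-to-end encryption can be illustrated with a toy Diffie-Hellman exchange: each side combines its own private key with the other's public key, and a server relaying only the public values cannot compute the result. The parameters below are deliberately simple and insecure; real messengers use vetted constructions such as X25519:

```python
import hashlib
import secrets

# Toy finite-field Diffie-Hellman -- for illustration only, NOT secure.
P = 2**521 - 1   # a Mersenne prime; real protocols use vetted groups/curves
G = 3

def keypair():
    private = secrets.randbelow(P - 2) + 2
    public = pow(G, private, P)
    return private, public

a_priv, a_pub = keypair()   # Alice
b_priv, b_pub = keypair()   # Bob

# Each endpoint computes the same shared secret from the other's public
# value; the relaying server sees only a_pub and b_pub.
a_shared = pow(b_pub, a_priv, P)
b_shared = pow(a_pub, b_priv, P)

# Hash the shared secret down to a symmetric message key.
key = hashlib.sha256(a_shared.to_bytes(66, "big")).digest()
```

This is why the provider in the middle cannot read the messages: the message key never leaves the two endpoints.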

Beyond encryption, all that’s left are the remaining details of product trade-offs and implementation difficulties, which is where the hottest debate over secure messaging lies.

Next: Important Details That Are Hard to Perfect

Every secure messaging app has to worry about code quality, user experience, and service availability. These features are hard to perfect, but putting no effort into them will render an application unusable.

When it comes to encrypted messaging apps, there’s a big difference between the theoretical security they provide and the practical security they provide. That’s because while the math behind an encryption algorithm may be flawless, a programmer may still make mistakes when translating that math into actual code. As a result, messengers vary in their implementation quality—i.e. how well the code was written, and how likely the code is to have bugs that could introduce security vulnerabilities.

Code quality is particularly hard to assess, even by professional engineers. Whether the server and client codebases are open or closed source, they should be regularly audited by security specialists. When a codebase is open source, it can be reviewed by anyone, but it doesn’t mean that it has been, so being open source is not necessarily a guarantee of security. Nor is it a guarantee of insecurity: modern cryptography doesn’t require the encryption algorithm to be kept secret to function.

User experience quality can also vary, specifically with regard to the user’s ability to successfully send and receive encrypted messages. The goal here is that the user always sends messages encrypted such that only the intended recipients can read them. The user experience should be designed around reaching that goal.

While this goal seems straightforward, it’s difficult to achieve in practice. For example, say Alice sends a message to Bob. Before Bob can read it, he gets a new phone and has to update his encryption keys. Should Alice’s phone re-send the message to Bob’s new phone automatically, or wait for Alice to approve Bob’s new encryption keys before resending the message? What if Bob’s sister shoulder-surfed his password and signed into his account on her phone? If Alice’s phone re-sends the message on her behalf, she risks accidentally sending it to Bob’s sister instead of Bob.

Different applications might have different priorities about message delivery versus sticking to encryption guarantees, but we expect applications to make thoughtful choices and give advanced users the option to choose for themselves.

As with seatbelts and two-factor authentication, the biggest failure mode of secure messengers is not using them at all. If a tool fails to reliably deliver messages in congested or hostile network conditions, users may be forced to fall back to less secure channels. Building a reliable service takes consistent, applied engineering effort that smaller services might not have the resources for, but it’s essential to ensuring the user’s security.

Adding Security Features

Past these basic tenets of implementation and user experience, the discussion becomes thornier. Security benefits can get left behind when they’re deemed to not be worth the cost to implement or when they’re judged as detrimental to ease of use. We recommend considering some features in particular to create a more robust secure messenger.

Modern messengers store conversation histories in the cloud. If a secure messenger stores the conversation history unencrypted in the cloud (or encrypted under information that the service provider can access), then the messenger might as well not have been end-to-end encrypted. Messengers can choose to encrypt the backups under a key kept on the user’s device or a password that only the user knows, or they can choose not to encrypt the backups at all. If backups aren’t encrypted, they should be off by default and the user should have an opportunity to understand the implications of turning them on.
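
Encrypting backups under "a password that only the user knows" typically means deriving the encryption key from that password, so the provider stores only ciphertext it cannot open. A sketch of the derivation step (hypothetical; a production design might prefer a memory-hard KDF such as scrypt or Argon2):

```python
import hashlib
import secrets

def backup_key(password, salt):
    """Derive a backup-encryption key from a password the provider never sees.

    PBKDF2-HMAC-SHA256 from the standard library; the salt is not secret
    and is stored alongside the ciphertext.
    """
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

salt = secrets.token_bytes(16)
key = backup_key("correct horse battery staple", salt)
```

The trade-off: if the user forgets the password, the backup is unrecoverable, which is exactly the property that keeps the provider out.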

A secure application should have secure auto-updating mechanisms to quickly mitigate security problems. An out-of-date messenger with known security flaws is potentially vulnerable to more potent attacks than an up-to-date unencrypted messenger.

Perhaps surprisingly, a messenger being marketed as secure can undermine the goals of security. If having the application on a user’s phone is a marker that the user is trying to stay secure, that could make the person’s situation more dangerous. Say an outside party discovers that a person has a “secure” app on their phone. That app could be used as evidence that the person is engaging in an activity that the outside party doesn’t approve of, and invite retribution. The ideal secure messenger is primarily a messenger of sufficiently high popularity that its use is not suspicious.

An app may also choose to provide reliable indicators of compromise that are recognizable to an end-user, including in the event of machine-in-the-middle attacks or a compromised service provider. The application should also allow users to verify all their communications are encrypted to the correct person (i.e. fingerprint verification).
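
Fingerprint verification usually boils down to both parties computing the same short digest over the conversation's public keys and comparing it out-of-band (in person, or over a call). A sketch loosely inspired by that idea (hypothetical format, not any specific app's):

```python
import hashlib

def fingerprint(pub_a, pub_b):
    """Short human-comparable fingerprint over both parties' public keys.

    Sorting the keys makes the value identical no matter which side
    computes it; users compare the digit groups out-of-band.
    """
    digest = hashlib.sha256(b"".join(sorted([pub_a, pub_b]))).digest()
    number = int.from_bytes(digest[:12], "big")
    digits = f"{number:030d}"
    return " ".join(digits[i:i + 6] for i in range(0, 30, 6))
```

If the displayed numbers match on both phones, no machine-in-the-middle has substituted its own keys into the conversation.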

Finally, we recommend allowing users to choose an alias, instead of requiring that their only identifier be a phone number. For vulnerable users, a major identifier like a phone number could be private information; they shouldn’t have to give it out to get the benefits that a secure messenger provides.

Features to Look Forward To

In this section, we discuss the stretch goals that no major app has managed to implement yet, but that less popular apps or academic prototypes might have.

We’d love to see academics and experimental platforms begin working on delivering these in a scalable manner, but we don’t expect a major application to worry about these just yet.

While protecting the contents of messages is important, so too is protecting the metadata of who is talking to whom and when. When messages go through a central server, this is hard to mask. Hiding the network metadata is a feature we’d like to see grow past the experimental phase. Until then, we expect to see services retain only the metadata absolutely necessary to make the service function, and for the minimum possible time.

Most messengers let users discover which of their existing contacts are already using the service, but they do so in a way that reveals the entire contents of a contact list to the service. This means that to find out which of your friends are using a service, you have to tell the service provider every person whose contact info you’ve saved to your phone, and you have no guarantee that they’re not saving this data to figure out who’s friends with whom—even for people who don’t use the service. Some messengers are already taking steps to make this information leakage a thing of the past.
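
One step toward less leaky contact discovery is to reveal only a short hash prefix to the server and finish the match on the device. A sketch of that idea (illustrative; phone numbers are low-entropy, so hashing alone does not stop a determined server from brute-forcing them, and deployed designs add stronger protections such as private set intersection or secure enclaves):

```python
import hashlib

def h(contact):
    return hashlib.sha256(contact.encode()).hexdigest()

# Client: reveal only a short hash prefix, not the contact itself.
def discovery_query(contact, prefix_len=4):
    return h(contact)[:prefix_len]

# Server: return every registered hash sharing that prefix.
def discovery_response(prefix, registered):
    return [u for u in registered if u.startswith(prefix)]

# Client: finish the match locally; the server never learns which
# bucket entry (if any) was actually in the address book.
def is_registered(contact, bucket):
    return h(contact) in bucket
```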

Pushing reliable security updates is of prime importance to security. But automatically accepting new versions of applications means that users might inadvertently download a backdoored update onto their device. Using reproducible builds and binary transparency, users can at least ensure that the same update gets pushed to every user, making targeted attacks infeasible and giving any backdoored update a better chance of being noticed.
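
Binary transparency, at its simplest, means a client refuses any update whose digest is absent from a public, append-only log that anyone can mirror and audit. A minimal sketch of that check (hypothetical log structure):

```python
import hashlib

# Hypothetical transparency log: an append-only list of update digests.
LOG = []

def publish(update):
    """Vendor side: record the digest of a release in the public log."""
    digest = hashlib.sha256(update).hexdigest()
    LOG.append(digest)
    return digest

def verify_before_install(update):
    """Client side: install only updates whose digest appears in the log.

    A targeted, user-specific backdoored build would not match any
    publicly logged digest.
    """
    return hashlib.sha256(update).hexdigest() in LOG
```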

When a messenger allows group messaging, advanced security properties like future secrecy are lost. New protocols aim to fix these holes and give group messaging the security properties that users deserve.

In the secure messaging community, there's no consensus on what the best combination of features is, and there may never be. So while there will never be one perfectly secure messenger to rule them all, technical questions and conversations like the ones described above can move us towards better messengers providing more types of security.

This post is part of a series on secure messaging.
Find the full series here.

Thinking About What You Need In A Secure Messenger

All the features that determine the security of a messaging app can be confusing and hard to keep track of. Beyond the technical jargon, the most important question is: What do you need out of a messenger? Why are you looking for more security in your communications in the first place?

The goal of this post is not to assess which messenger provides the best “security” features by certain technical standards, but to help you think about precisely the kind of security you need.

Here are some examples of questions to guide you through potential concerns and line them up with certain secure messaging features. These questions are by no means comprehensive, but they can help get you into the mindset of evaluating messengers in terms of your specific needs.

Are you worried about your messages being intercepted by governments or service providers?

Are you worried about people in your physical environment reading your messages?

Do you want to avoid giving out your phone number?

How risky would a mistake be? Do you need a “foolproof” encrypted messenger?

Are you more concerned about the “Puddle Test” or the “Hammer Test”?

Do you need features to help you verify the identity of the person you’re talking to?

We can’t capture every person’s concerns or every secure messaging feature with a handful of questions. Other important issues might include corporate ownership, country-specific considerations, or background information on a company’s security decisions.

The more clearly you understand what you want and need out of a messenger, the easier it will be to navigate the wealth of extensive, conflicting, and sometimes outdated information out there. When recommendations conflict, you can use these kinds of questions to decide what direction is right for you. And when conditions change, they can help you decide whether it’s time to change your strategy and find new secure apps or tools.

This post is part of a series on secure messaging.
Find the full series here.

Are you worried about your messages being intercepted by governments or service providers?

End-to-end encryption ensures that a message is turned into a secret message by its original sender (the first “end”), and decoded only by its final recipient (the second “end”). This means that no one can “listen in” and eavesdrop on your messages in the middle, including the messaging service provider itself. Somewhat counter-intuitively, just because you have messages in an app on your phone does not mean that the app company itself can read them. This is a core characteristic of good encryption: even the people who design and deploy it cannot themselves break it.

Do not confuse end-to-end encryption with transport-layer encryption (also known as “network encryption”). While end-to-end encryption protects your messages all the way from your device to your recipient’s device, transport-layer encryption only protects them as they travel from your device to the app’s servers and from the app’s servers to your recipient’s device. In the middle, your messaging service provider can see unencrypted copies of your messages—and, in the case of legal requests, has them available to hand over to law enforcement.

One way to think about the difference between end-to-end and transport-layer encryption is the concept of trust. Transport-layer encryption requires you to trust a lot of different parties with the contents of your messages: the app or service you are using, the government of the country where the service is incorporated, and the government of the country where its servers sit. However, you shouldn’t have to trust corporations or governments with your messages in order to communicate. With end-to-end encryption, you don’t have to. As a matter of privacy hygiene, it is generally better to go with services that support end-to-end encryption whenever possible.

Are you worried about people in your physical environment reading your messages?

If you are concerned that someone in your physical environment—maybe a spouse, teacher, parent, or employer—might try to take your device and read your messages off the screen directly, ephemeral or “disappearing” messages might be an important feature for you. This generally means you are able to set messages to automatically disappear after a certain amount of time, leaving less content on your device for others to see.

It’s important to remember, though, that just because messages disappear on your device doesn’t mean they disappear everywhere. Your recipient could always take a screenshot of the message before it disappears. And if the app doesn’t use end-to-end encryption (see above), the app provider might also have a copy of your message.

(Outside of messenger choice, you can also make your device more physically secure by enabling full-disk encryption with a password.)

Do you want to avoid giving out your phone number?

Using your phone number as your messenger “username” can be convenient. It’s simple to remember, and makes it easy to find friends using the same service. However, a phone number is often a personally identifying piece of information, and you might not want to give it out to professional contacts, new acquaintances, or other people you don’t necessarily trust.

This can be a concern for women worried about harassment in particular. Activists and others involved in subversive work can also have a problem with this, as it can be dangerous to link the same phone number to both the messenger one uses for activism and the messenger one uses for communicating with friends and family.

Messengers that allow aliases can help. This usually means letting you choose a “username” or identifier that is not your phone number. Some apps also let you create multiple aliases. Even if a messenger requires your phone number to sign up, it may still allow you to use a non-phone number alias as your public-facing username.

How risky would a mistake be? Do you need a “foolproof” encrypted messenger?

Depending on your situation, it’s likely that the last thing you want is to send information unencrypted that you meant to send encrypted. If this is important to you, messengers that encrypt by default or only support encrypted communication are worth looking into.

When a messenger does not encrypt by default and instead offers a special “secret” encrypted mode, users may make mistakes and send unencrypted messages without realizing it. This can also happen because of service issues; when connectivity poses a problem, some apps may provide an unencrypted “fallback” option for messages rather than wait until an encrypted message can be sent.

Are you more concerned about the “Puddle Test” or the “Hammer Test”?

Are you more worried about the possibility of losing your messages forever, or about someone else being able to read them? The “Puddle Test” reflects the first concern, and the “Hammer Test” reflects the second.

Messaging developers sometimes talk about the “Puddle Test”: If you accidentally dropped your phone in a puddle and ruined it, would your messages be lost forever? Would you be able to recover them? Conversely, there’s the “Hammer Test”: If you and a contact intentionally took a hammer to your phones or otherwise tried to delete all your messages, would they really be deleted? Would someone else be able to recover them?

There is a tension between these two potential situations: accidentally losing your messages, and intentionally deleting them. Is it more important to you that your messages be easy to recover if you accidentally lose them, or difficult to recover if you intentionally delete them?

If the hypothetical “Hammer Test” reflects your concerns, you may want to learn about a security property called forward secrecy. If an app is forward-secret, then you could delete all your messages and hand someone else your phone and they would not be able to recover them. Even if they had been surveilling you externally and managed to compromise the encryption keys protecting your messages, they still would not be able to read your past messages.
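A toy “hash ratchet” shows the idea behind forward secrecy. This is a simplified sketch, not how any particular app implements it (and the XOR “encryption” here is deliberately not secure): each message key is derived from the previous one with a one-way function and the old key is discarded, so stealing the current key does not let an attacker work backward to earlier messages.

```python
import hashlib

def ratchet(key: bytes) -> bytes:
    # One-way step: easy to compute forward, infeasible to reverse.
    return hashlib.sha256(key).digest()

def xor_encrypt(key: bytes, message: bytes) -> bytes:
    # Toy cipher just to illustrate key usage -- NOT secure.
    stream = hashlib.sha256(key + b"stream").digest()
    return bytes(m ^ s for m, s in zip(message, stream))

key = hashlib.sha256(b"initial shared secret").digest()
ciphertexts = []
for msg in [b"first", b"second", b"third"]:
    ciphertexts.append(xor_encrypt(key, msg))
    key = ratchet(key)  # old key is deleted; only the new key remains

# An attacker who steals `key` at this point cannot recompute the
# earlier keys, so the ciphertexts above stay unreadable: that is
# the essence of forward secrecy.
```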

Cloud backups of your messages can throw a wrench in the “Hammer Test” described above. Backups help you pass the “Puddle Test,” but make it much harder to intentionally “hammer” your old messages out of existence. Apps that back up your messages unencrypted store a plaintext copy of your messages outside your device. An unencrypted copy like this can defeat the purpose of forward secrecy, and can stop your deleted messages from really being deleted. For people who are more worried about the “Puddle Test,” this can be a desirable feature. For others, it can be a serious danger.

Do you need features to help you verify the identity of the person you’re talking to?

Most people can be reasonably sure that the contact they are messaging with is who they think it is. For targeted people in high-risk situations, however, it can be critical to be absolutely certain that no one else is viewing or intercepting your conversation. Therefore, this question is mainly for those high-risk users.

Apps with contact verification can help you be certain that no one outside the intended recipient(s) is viewing your conversation. This feature lets you confirm your recipient’s unique cryptographic “fingerprint” and thus their identity. Usually this takes the form of an in-real-life check; you might scan QR codes on each other’s phones, or you might call or talk to your friend to make sure that the fingerprint code you have for them matches the one they have for you.
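A fingerprint is typically a short hash of a contact’s public key. As a rough sketch (real apps differ in hash choice, length, and encoding), both phones compute the same code from the same key, so a mismatch signals that someone’s key is not what it should be:

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    # Hash the public key and render a short, human-comparable code.
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Group into 4-character chunks so two people can read them aloud
    # or compare them on screen (truncated for readability).
    return " ".join(digest[i:i + 4] for i in range(0, 40, 4))

# If both sides derive the same code from the same key, the check passes.
alice_view_of_bob = fingerprint(b"bob-public-key")
bob_view_of_self = fingerprint(b"bob-public-key")
assert alice_view_of_bob == bob_view_of_self
```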

When one of your contacts’ fingerprints changes, that is an indicator that something about their cryptographic identity has changed. Someone else might have tricked your app into accepting their cryptographic keys instead—or it might also just mean that they got a new phone. Apps can deal with this in two ways: key change notifications, which alert you to the change while not interfering with messages, or key change confirmations, which require you to acknowledge the change before any messages are sent. The latter generally offers a higher level of protection for vulnerable users who cannot risk misfired messages.
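The two policies can be sketched as a simple “trust on first use” check. All names here are illustrative, not any app’s actual code: the app remembers each contact’s last-seen fingerprint and either warns about a change or blocks sending until the user confirms it.

```python
# Hypothetical sketch of trust-on-first-use key tracking and the
# two policies for handling a changed key (illustrative names).

known_keys = {}  # contact -> last-seen key fingerprint

def check_key(contact: str, fp: str, require_confirmation: bool) -> str:
    stored = known_keys.get(contact)
    if stored is None:
        known_keys[contact] = fp  # first contact: trust on first use
        return "ok"
    if stored == fp:
        return "ok"
    if require_confirmation:
        # Key change confirmation: block until the user approves.
        return "blocked: confirm new key before messaging"
    # Key change notification: warn, but let messages through.
    known_keys[contact] = fp
    return "warning: contact's key changed"
```

The confirmation policy is stricter because nothing is sent under the new, unverified key; the notification policy keeps conversations flowing but relies on the user noticing the warning.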

This post is part of a series on secure messaging.
Find the full series here.

