Congressmembers Raise Doubts About the “Going Dark” Problem

In the wake of a damning report by the DOJ Office of Inspector General (OIG), Congress is asking questions about the FBI’s handling of the locked iPhone in the San Bernardino case and its repeated claims that widespread encryption is leading to a “Going Dark” problem. For years, DOJ and FBI officials have claimed that encryption is thwarting law enforcement and intelligence operations, pointing to large numbers of encrypted phones that the government allegedly cannot access as part of its investigations. In the San Bernardino case specifically, the FBI maintained that only Apple could assist with unlocking the shooter’s phone.

But the OIG report revealed that the Bureau had other resources at its disposal, and on Friday members of the House Judiciary Committee sent a letter to FBI Director Christopher Wray that included several questions to put the FBI’s talking points to the test. Not mincing words, committee members write that they have “concerns that the FBI has not been forthcoming about the extent of the ‘Going Dark’ problem.”

In court filings, testimony to Congress, and in public comments by then-FBI Director James Comey and others, the agency claimed that it had no possible way of accessing the San Bernardino shooter’s iPhone. But the letter, signed by 10 representatives from both parties, notes that the OIG report “undermines statements that the FBI made during the litigation and consistently since then, that only the device manufacturer could provide a solution.” The letter also echoes EFF’s concerns that the FBI saw the litigation as a test case: “Perhaps most disturbingly, statements made by the Chief of the Cryptographic and Electronic Analysis Unit appear to indicate that the FBI was more interested in forcing Apple to comply than getting into the device.”

Now, more than two years after the Apple case, the FBI continues to make similar arguments. Wray recently claimed that the FBI confronted 7,800 phones it could not unlock in 2017 alone. But as the committee letter points out, in light of recent reports about “the availability of unlocking tools developed by third-parties and the OIG report’s findings that the Bureau was uninterested in seeking available third-party options, these statistics appear highly questionable.” For example, a recent Motherboard investigation revealed that law enforcement agencies across the United States have purchased—or have at least shown interest in purchasing—devices developed by a company called Grayshift. The Atlanta-based company sells a device called GrayKey, a roughly 4x4 inch box that has allegedly been used to successfully crack several iPhone models, including the most recent iPhone X.

The letter ends by posing several questions to Wray designed to probe the FBI’s Going Dark talking points—in particular whether it has actually consulted with outside vendors to unlock encrypted phones it says are thwarting its investigations and whether third-party solutions are in any way insufficient for the task.

EFF welcomes this line of questioning from House Judiciary Committee members and we hope members will continue to put pressure on the FBI to back up its rhetoric about encryption with actual facts.

Related Cases: Apple Challenges FBI: All Writs Act Order (CA)

To #DeleteFacebook or Not to #DeleteFacebook? That Is Not the Question

Since the Cambridge Analytica news hit headlines, calls for users to ditch the platform have picked up speed. Whether or not it has a critical impact on the company’s user base or bottom line, the message from #DeleteFacebook is clear: users are fed up.

EFF is not here to tell you whether or not to delete Facebook or any other platform. We are here to hold Facebook accountable no matter who’s using it, and to push it and other tech companies to do better for users.

Users should have better options when they decide where to spend their time and attention online.

The problems that Facebook’s Cambridge Analytica scandal highlight—sweeping data collection, indiscriminate sharing of that data, and manipulative advertising—are also problems with much of the surveillance-based, advertising-powered popular web. And there are no shortcuts to solving those problems.

Users should have better options when they decide where to spend their time and attention online. So rather than asking if people should delete Facebook, we are asking: What privacy protections should users have a right to expect, whether they decide to leave or use a platform like Facebook?

If it makes sense for you to delete Facebook or any other account, then you should have full control over deleting your data from the platform and bringing it with you to another. If you stay on Facebook, then you should be able to expect it to respect your privacy rights.

To Leave

As a social media user, you should have the right to leave a platform that you are not satisfied with. That means you should have the right to delete your information and your entire account. And we mean really delete: not just disabling access, but permanently eliminating your information and account from the service’s servers.

Furthermore, if users decide to leave a platform, they should be able to easily, efficiently, and freely take their uploaded information away and move it to a different one in a usable format. This concept, known as "data portability" or "data liberation," is fundamental to promote competition and ensure that users maintain control over their information even if they sever their relationship with a particular service.

Of course, for this right to be effective, it must be coupled with informed consent and user control, so unscrupulous companies can’t exploit data portability to mislead you and then grab your data for unsavory purposes.

Not To Leave

Deleting Facebook is not a choice that most of its 2 billion users can feasibly make. It’s also not a choice that everyone wants to make, and that’s okay too. Everyone deserves privacy, whether they delete Facebook or stay on it (or never used it in the first place!).

Deleting Facebook is not a choice that most of its users can feasibly make.

For many, the platform is the only way to stay in touch with friends, family, and businesses. It’s sometimes the only way to do business, reach out to customers, and practice a profession that requires access to a large online audience. Facebook also hosts countless communities and interest groups that are simply not available in many users’ cities and areas. Without viable general alternatives, Facebook’s massive user base and associated network effects mean that the costs of leaving it may outweigh the benefits.

In addition to the right to leave described above, any responsible social media company should ensure users’ privacy rights: the right to informed decision-making, the right to control one’s information, the right to notice, and the right of redress.

Facebook and other companies must respect user privacy by default and by design. If you want to use a platform or service that you enjoy and that adds value to your life, you shouldn't have to leave your privacy rights at the door.

Ethiopia Backslides: the Continuing Harassment of Eskinder Nega

On March 25, bloggers, journalists and activists gathered at a private party in Addis Ababa—the capital of Ethiopia—to celebrate the new freedom of their colleagues. Imprisoned Ethiopian writers and reporters had been released in February under a broad amnesty: some attended the private event, including Eskinder Nega, a blogger and publisher whose detention EFF has been tracking in our Offline series.

But the celebration was interrupted, with the event raided by the authorities. Eskinder, together with Zone 9 bloggers Mahlet Fantahun and Fekadu Mehatemework, online writers Zelalem Workagegnhu and Befiqadu Hailu, and six others were seized and detained without charge.

The eleven have now finally been released, after 12 days in custody. But the raid remains a disturbing example of just how far Ethiopian police are willing to go to intimidate critical voices, even in a time of supposed tolerance.

During their detention, the prisoners could be seen through narrow windows in Addis Ababa's Gotera police station, held in tiny stalls crowded with other detainees, in conditions Eskinder described as "inhuman".

...Better to call it jam-packed than imprisoned. About 200 of us are packed in a 5 by 8 meter room divided in three sections. Unable to sit or lay down comfortably, and with limited access to a toilet. Not a single human being deserves this regardless of the crime, let alone us who were captured unjustly. The global community should be aware of such case and use every possible means to bring an end to our suffering immediately.

After a brief Spring of prisoner releases and officially-sanctioned tolerance of anti-government protests and criticism, Ethiopia's autocratic regime appears to be returning to its old ways. A new state of emergency was declared shortly after the resignation of the country's Prime Minister in mid-February. While the government tells the world that it is continuing its policy of re-engagement with its critics, the state of emergency grants unchecked powers to quash dissent, including a wide prohibition on public meetings.

Reporters say that the bloggers were questioned at the party about the display of a traditional Ethiopian flag. The Addis Standard quoted an unnamed politician who attended the event as saying “This has nothing to do with the flag, but everything to do with the idea of these individuals... coming together.”

The authorities cannot continue their hair-trigger surveillance and harassment of those documenting the country’s chaotic present online. Ethiopia’s stability depends on reasonable treatment of its online voices.

Data Privacy Policy Must Empower Users and Innovation

As the details continue to emerge regarding Facebook's failure to protect its users' data from third-party misuse, a growing chorus is calling for new regulations. Mark Zuckerberg will appear in Washington to answer to Congress next week, and we expect lawmakers and others will be asking not only what happened, but what needs to be done to make sure it doesn't happen again.

As recent revelations from Grindr and Under Armour remind us, Facebook is hardly alone in its failure to protect user privacy, and we're glad to see the issue high on the national agenda. At the same time, it’s crucial that we ensure that privacy protections for social media users reinforce, rather than undermine, equally important values like free speech and innovation. We must also be careful not to unintentionally enshrine the current tech powerhouses by making it harder for others to enter those markets. Moreover, we shouldn’t lose sight of the tools we already have for protecting user privacy.

With all of this in mind, here are some guideposts for U.S. users and policymakers looking to figure out what should happen next.

Users Have Rights

Any responsible social media company must ensure users’ privacy rights on its platform and make those rights enforceable. These five principles are a place to start:

Right to Informed Decision-Making

Users have the right to a clear user interface that allows them to make informed choices about who sees their data and how it is used. Any company that gathers data on a user should be prepared to disclose what they’ve collected and with whom they have shared it. Users should never be surprised by a platform’s practices, because the user interface showed them exactly how it would work.

A free and open Internet must be built on respect for the rights of all users. 

Right to Control

Social media platforms must ensure that users retain control over the use and disclosure of their own data, particularly data that can be used to target or identify them. When a service wants to make a secondary use of the data, it must obtain explicit permission from the user. Platforms should also ask their users' permission before making any change that could share new data about users, share users' data with new categories of people, or use that data in a new way.

Above all, data usage should be "opt-in" by default, not "opt-out," meaning that users' data is not collected or shared unless a user has explicitly authorized it. If a social network needs user data to offer a functionality that its users actually want, then it should not have to resort to deception to get them to provide it.

Right to Leave

One of the most basic ways that users can protect their privacy is by leaving a social media platform that fails to protect it. Therefore, a user should have the right to delete data or her entire account. And we mean really delete: not just disabling access but permanently eliminating it from the service's servers.

Furthermore, if users decide to leave a platform, they should be able to easily, efficiently, and freely take their uploaded information away and move it to a different one in a usable format. This concept, known as "data portability" or "data liberation," is fundamental to promote competition and ensure that users maintain control over their information even if they sever their relationship with a particular service. Of course, for this right to be effective, it must be coupled with informed consent and user control, so unscrupulous companies can’t exploit data portability to mislead you and then grab all of your data for unsavory purposes.

Right to Notice

If users’ data has been mishandled or a company has suffered a data breach, users should be notified as soon as possible. While brief delays are sometimes necessary in order to help remedy the harm before it is made public, any such delay should be no longer than strictly necessary.

Right of Redress

Rights are not meaningful if there’s no way to enforce them. Avenues for meaningful legal redress start with (1) clear, public, and unambiguous commitments that, if breached, would subject social media platforms to unfair advertising, competition, or other legal claims with real remedies; and (2) elimination of tricky terms-of-service provisions that make it impossible for a user to ever hold the service accountable in court.

Many companies will say they support some version of all of these rights, but we have little reason to trust them to live up to their promises. So how do we give these rights teeth?

Start with the Tools We Have

We already have some ways to enforce these user rights, which can point the way for us—including the courts and current regulators at the state and federal level—to go further. False advertising laws, consumer protection regulations, and (to a lesser extent) unfair competition rules have all been deployed by private citizens, and ongoing efforts in that area may find a more welcome response from the courts now that the scope of these problems is more widely understood.

The Federal Trade Commission (FTC) has challenged companies with sloppy or fraudulent data practices. The FTC is currently investigating whether Facebook violated a 2011 consent decree on handling of user data. If it has, Facebook is looking at a fine that could genuinely hurt its bottom line. We should all expect the FTC to fulfill its duty to stand in for the rest of us.

But there is even more we could do.

Focus on Empowering Users and Toolmakers

First, policymakers should consider making it easier for users to have their day in court. As we explained in connection with the Equifax breach, too often courts dismiss data breach lawsuits based on a cramped view of what constitutes "harm." These courts mistakenly require actual or imminent loss of money due to the misuse of information that is directly traceable to a single security breach. If the fear caused by an assault can be actionable (which it can), so should the fear caused by the loss of enough personal data for a criminal to take out a mortgage in your name.

There are also worthwhile ideas about future and contingent harms in other consumer protection areas as well as medical malpractice and pollution cases, just to name a few. If the political will is there, both federal and state legislatures can step up and create greater incentives for security and steeper downsides for companies that fail to take the necessary steps to protect our data. These incentives should include a prohibition on waivers in the fine print of terms of service, so that companies can’t trick or force users into giving up their legal rights in advance.

Second, let’s empower the toolmakers. If we want companies to reconsider their surveillance-based business model, we should put mechanisms in place to discourage that model. When a programmer at Facebook makes a tool that allows the company to harvest the personal information of everyone who visits a page with a "Like" button on it, another programmer should be able to write a browser plugin that blocks this button on the pages you visit. But too many platforms impose technical and legal barriers to writing such a program, effectively inhibiting third parties’ ability to give users more control over how they interact with those services. EFF has long raised concerns about the barriers created by overbroad readings of the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act and contractual prohibitions on interoperability.  Removing those barriers would be good for both privacy and innovation.

Third, the platforms themselves should come up with clear and meaningful standards for portability—that is, the user’s ability to meaningfully leave a platform and take her data with her. This is an area where investors and the broader funding and startup community should have a voice, since so many future innovators depend on an Internet with strong interconnectivity where the right to innovate doesn’t require getting permission from the current tech giants. It’s also an area where standards may be difficult to legislate. Even well-meaning legislators are unlikely to have the technical competence or foresight to craft rules that will be flexible enough to adapt over time but tough enough to provide users with real protection. But fostering competition in this space could be one of the most powerful incentives for the current set of companies to do the right thing, spurring a race to the top for social networks.

Finally, transparency, transparency, transparency.  Facebook, Google, and others should allow truly independent researchers access to work with, black box test, and audit their systems. Users should not have to take companies’ word on how data is being collected, stored, and used.

Watch Out for Unintended Effects on Speech and Innovation

As new proposals bubble up, we all need to watch for ways they could backfire.

First, heavy-handed requirements, particularly requirements tied to specific kinds of technology (i.e. tech mandates) could stifle competition and innovation. Used without care, they could actually give even more power to today’s tech giants by ensuring that no new competitor could ever get started. 

Second, we need to make sure that transparency and control provisions don’t undermine online speech. For example, any disclosure rules must take care to protect user anonymity. And the right to control your data should not turn into an unfettered right to control what others say about you—as so-called "right to be forgotten" approaches can often become. If true facts, especially facts that could have public importance, have been published by a third party, requiring their removal may mean impinging on others’ rights to free speech and access to information. A free and open Internet must be built on respect for the rights of all users. 

Asking the Right Questions

Above all, the guiding question should not be, "What legislation do we need to make sure there is never another Cambridge Analytica?" Rather, we should be asking, "What privacy protections are missing, and how can we fill that gap while respecting other important values?" Once we ask the right question, we can look for answers in existing laws, pressure from users and investors, and focused legislative steps where necessary. We need to be both creative and judicious, and take care that today’s solutions don’t become tomorrow’s unexpected problems.

HTTPS Everywhere Introduces New Feature: Continual Ruleset Updates

Today we're proud to announce the launch of a new version of HTTPS Everywhere, 2018.4.3, which brings with it exciting new features. With this newest update, you'll receive our list of HTTPS-supporting sites more regularly, bundled as a package that is delivered to the extension on a continual basis. This means that your HTTPS-Everywhere-protected browser will have more up-to-date coverage for sites that offer HTTPS, and you'll encounter fewer sites that break due to bugs in our list of supported sites. It also means that in the future, third parties can create their own list of URL redirects for use in the extension. This could be useful, for instance, in the Tor Browser to improve the user experience for .onion URLs. This new version is the same old extension you know and love, now with a cleaner behind-the-scenes process to ensure that it's protecting you better than ever before.

How does it work?

You may be familiar with our popular browser extension, available for Firefox, Chrome, Opera, and the Tor Browser. The idea is simple: whenever a user visits a site that we know offers HTTPS, we ensure that their browser connects to that site with the security of HTTPS rather than insecure HTTP. This means that users will have the best security available, avoiding subtle attacks that can downgrade their connections and compromise their data. But knowing is half the battle. Keeping the list of sites that offer HTTPS updated is an enormous effort, a collaboration between hundreds of contributors to the extension and a handful of active maintainers to craft what are known as HTTPS Everywhere's "rulesets." At the time of writing, there are over 23,000 ruleset files, each containing at least one domain name (or fully qualified domain name, like sub.example.com).
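
Under the hood, each ruleset lists the hosts it covers and a rewrite that turns an http:// URL into its https:// equivalent. The sketch below shows the general idea in Python, using a simplified single-rule example; the real extension's XML rulesets and matching logic are more involved, so treat the exact format and the upgrade() helper as illustrative assumptions rather than the actual implementation.

    import re
    import xml.etree.ElementTree as ET

    # A simplified ruleset in the general shape HTTPS Everywhere uses:
    # the hosts it applies to, plus a regex rewrite from http to https.
    RULESET = """
    <ruleset name="Example">
      <target host="example.com" />
      <target host="www.example.com" />
      <rule from="^http:" to="https:" />
    </ruleset>
    """

    def upgrade(url):
        """Rewrite url to HTTPS if its host is covered by the ruleset."""
        root = ET.fromstring(RULESET)
        hosts = {t.get("host") for t in root.findall("target")}
        host = re.match(r"https?://([^/]+)", url).group(1)
        if host not in hosts:
            return url  # not on the list: leave the URL alone rather than guess
        rule = root.find("rule")
        return re.sub(rule.get("from"), rule.get("to"), url)

    print(upgrade("http://www.example.com/page"))  # https://www.example.com/page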

We've modified the extension to periodically check in with EFF to see if a new list is available.

Why go through all this trouble to maintain a list of sites supporting HTTPS, instead of just defaulting to HTTPS? Because a lot of sites still only offer HTTP. Without knowing that a site supports HTTPS, we'd have to try HTTPS first, and then downgrade your connection if it's not available. And for a network attacker, it's easy to trick the browser into thinking that a site does not offer HTTPS. That's why downgrading connections can be dangerous: you can fall right into an attacker's trap. HTTPS Everywhere forces your browser to use the secure endpoint if it's on our list, thus ensuring that you'll have the highest level of security available for these sites.

Ordinarily, we'll deliver this ruleset list bundled with the extension when you install or update it. But it's a lot of work to release a new version just to deliver a new list of rulesets to you! So we've modified the extension to periodically check in with EFF to see if a new list is available. That way you'll get the newest ruleset list in a timely manner, without having to wait for a new version to be released. In order to verify that these are the authentic EFF rulesets, we've signed them so that your browser can check that they're legitimate, using the Web Crypto API. We've also made it easy for developers and third parties to publish their own rulesets, signed with their own key, and build that into a custom-made edition of HTTPS Everywhere. We've called these "update channels," and the extension is capable of digesting multiple update channels at the same time.
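
Conceptually, the check your browser performs on each downloaded bundle is a standard detached-signature verification: the publisher signs the serialized ruleset bundle with a private key, and the extension ships the matching public key. Here is a minimal Python sketch of that idea using an Ed25519 key pair; the key type, bundle layout, and field names are illustrative assumptions, not EFF's actual signing scheme (which runs through the browser's Web Crypto API).

    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Publisher side (EFF, or a third party running its own update channel):
    # sign the serialized ruleset bundle with the channel's private key.
    signing_key = Ed25519PrivateKey.generate()
    bundle = json.dumps({"timestamp": 1522800000, "rulesets": ["..."]}).encode()
    signature = signing_key.sign(bundle)

    # Client side: the extension ships the channel's public key and refuses
    # any downloaded bundle whose signature does not verify.
    public_key = signing_key.public_key()
    try:
        public_key.verify(signature, bundle)
        print("authentic ruleset bundle, safe to apply")
    except InvalidSignature:
        print("rejecting tampered or unsigned bundle")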

This is just the start

In the future, we plan to build on this feature, making it easy for users to modify the set of update channels they digest in their own HTTPS Everywhere instance. This will entail building out a nicer user experience to modify, delete, and edit update channels.

The fact is that only a small subset of the ruleset files changes at any given time. So we'll also be researching how to safely deliver to your browser only the changes between one edition of the rulesets and the next. This will save you a lot of bandwidth, which is especially important in contexts where your ISP provides a slow or throttled connection.
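
One simple way to picture such a delta: diff the new edition of the ruleset list against the previous one and ship only the entries that changed. The sketch below illustrates that idea with a hypothetical data layout; it is not how HTTPS Everywhere has decided to implement delta updates.

    # Two editions of a (hypothetical) ruleset list, keyed by ruleset name.
    old_edition = {"example.com": "ruleset v1", "eff.org": "ruleset v3"}
    new_edition = {"example.com": "ruleset v2", "eff.org": "ruleset v3",
                   "torproject.org": "ruleset v1"}

    delta = {
        "changed": {name: body for name, body in new_edition.items()
                    if old_edition.get(name) != body},
        "removed": [name for name in old_edition if name not in new_edition],
    }

    # Only the changed 'example.com' entry and the new 'torproject.org' entry
    # need to cross the network; the unchanged 'eff.org' ruleset costs nothing.
    print(delta)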

Today, as always, we aim to better your browsing experience by protecting your data with this latest release. We're excited to bring you these new features, just as we've been glad to keep your browsing safe ever since we launched HTTPS Everywhere in 2010.

We'd like to thank Fastly for providing the bandwidth necessary to deliver our ruleset updates.

If you haven't already, please install and contribute to HTTPS Everywhere, and consider donating to EFF to support our work!

The FBI Could Have Gotten Into the San Bernardino Shooter’s iPhone, But Leadership Didn’t Say That

The Department of Justice’s Office of the Inspector General (OIG) last week released a new report that supports what EFF has long suspected: that the FBI’s legal fight with Apple in 2016 to create backdoor access to a San Bernardino shooter’s iPhone was more focused on creating legal precedent than it was on accessing the one specific device.

The report, called a “special inquiry,” details the FBI’s failure to be completely forthright with Congress, the courts, and the American public. While the OIG report concludes that neither former FBI Director James Comey, nor the FBI officials who submitted sworn statements in court had “testified inaccurately or made false statements” during the roughly month-long saga, it illustrates just how close they came to lying under oath. 

From the outset, we suspected that the FBI’s primary goal in its effort to access an iPhone found in the wake of the December 2015 mass shooting in San Bernardino wasn’t simply to unlock the device at issue. Rather, we believed that the FBI’s intention with the litigation was to obtain legal precedent that it could use to compel Apple to sabotage its own security mechanisms. Among other disturbing revelations, the new OIG report confirms our suspicion: senior leaders within the FBI were “definitely not happy” when the agency realized that another solution to access the contents of the phone had been found through an outside vendor and the legal proceeding against Apple couldn’t continue.

By way of digging into the OIG report, let’s take a look at the timeline of events:

  • December 2, 2015: a shooting in San Bernardino results in the deaths of 14 people; the two shooters are also killed. The shooters destroy their personal phones but leave a third phone—owned by their employer—untouched.
  • February 9, 2016: Comey testifies that the FBI cannot access the contents of the shooters’ remaining phone.
  • February 16, 2016: the FBI applies for (and Magistrate Judge Pym grants the same day) an order compelling Apple to develop a new method to unlock the phone.

    As part of that application, the FBI Supervisory Special Agent in charge of the investigation of the phone swears under oath that the FBI had “explored other means of obtaining [access] . . . and we have been unable to identify any other methods feasible for gaining access” other than compelling Apple to create a custom, cryptographically signed version of iOS to bypass a key security feature and allow the FBI to access the device.

    At the same time, according to the OIG report, the chief of the FBI’s Remote Operations Unit (the FBI’s elite hacking team, called ROU) knows “that one of the vendors that he worked closely with was almost 90 percent of the way toward a solution that the vendor had been working on for many months.”

Let’s briefly step out of the timeline to note the discrepancies between what the FBI was saying in early 2016 and what it actually knew. How is it that senior FBI officials testified that the agency had no capability to access the contents of the locked device when the agency’s own premier hacking team knew a capability was within reach? Because, according to the OIG report, FBI leadership didn’t ask the ROU for its help until after testifying that the FBI’s techs knew of no way in.

The OIG report concluded that Director Comey didn’t know that his testimony was false at the time he gave it. But it was false, and technical staff in FBI’s own ROU knew it was false.

Now, back to the timeline:

  • March 1, 2016: Director Comey again testifies that the FBI has been unable to access the contents of the phone without Apple’s help. Before the government applied for the All Writs Act order on February 11, Comey notes there were “a whole lot of conversations going on in that interim with companies, with other parts of the government, with other resources to figure out if there was a way to do it short of having to go to court.”

    In response to a question from Rep. Darrell Issa about whether Comey was “testifying today that you and/or contractors that you employ could not achieve this without demanding an unwilling partner do it,” Comey replies “Correct.”

    The OIG report concluded that Director Comey didn’t know that his testimony was false at the time he gave it. But it was false, and technical staff in FBI’s own ROU knew it was false.

  • March 16, 2016: An outside vendor for the FBI completes its work on an exploit for the model in question, building on the work that, as of February 16, the ROU knew to be 90% complete.

    The head of the FBI’s Cryptographic and Electronic Analysis Unit (CEAU)—the unit whose initial inability to access the phone led to the FBI’s sworn statements that the Bureau knew of no method to do so—is pissed that others within the FBI are even trying to get into the phone without Apple’s help. In the words of the OIG report, “he expressed disappointment that the ROU Chief had engaged an outside vendor to assist with the Farook iPhone, asking the ROU Chief, ‘Why did you do that for?’”

    Why is the CEAU Chief angry? Because it means that the legal battle is over and the FBI won’t be able to get the legal precedent against Apple that it was looking for. Again, the OIG report confirms our suspicions: “the CEAU Chief ‘was definitely not happy’ that the legal proceeding against Apple could no longer go forward” after the ROU’s vendor succeeded.

  • March 20, 2016: The FBI’s outside vendor demonstrates the exploit for senior FBI leadership.
  • March 21, 2016: On the eve of the scheduled hearing, the Department of Justice notifies the court in California that, despite previous statements under oath that there were no “other methods feasible for gaining access,” it has now somehow found a way.

    In response to the FBI’s eleventh-hour revelation, the court cancels the hearing and the legal battle between the FBI and Apple is over for now.

The OIG report comes on the heels of a report by the New York Times that the Department of Justice is renewing its decades-long fight for anti-encryption legislation. According to the Times, DOJ officials are “convinced that mechanisms allowing access to [encrypted] data can be engineered without intolerably weakening the devices’ security against hacking.”

That’s a bold claim, given that for years the consensus in the technical community has been exactly the opposite. In the 1990s, experts exposed serious flaws in proposed systems, such as the Clipper Chip, that were meant to give law enforcement access to encrypted data without compromising security. And, as the authors of the 2015 “Keys Under Doormats” paper put it, “today’s more complex, global information infrastructure” presents “far more grave security risks” for these approaches.

The Department’s blind faith in technologists’ ability to build a secure backdoor on encrypted phones is inspired by presentations by several security researchers as part of the recent National Academy of Sciences (NAS) report on encryption. But the NAS wrote that these proposals were not presented in “sufficient detail for a technical evaluation,” so they have yet to be rigorously tested by other security experts, let alone pass muster. Scientific and technical consensus is always open to challenge, but we—and the DOJ—should not abandon the longstanding view, backed by evidence, that deploying widespread special access mechanisms presents insurmountable technical and practical challenges.

The Times article also suggests that even as DOJ officials tout the possibility of secure backdoors, they’re simultaneously lowering the bar, arguing that a solution need not be “foolproof” if it allows the government to catch “ordinary, less-savvy criminals.” The problem with that statement is at least two-fold:

First, according to the FBI, it is the savvy criminals (and terrorists) who present the biggest risk of using encryption to evade detection. By definition, less-savvy criminals will be easier for law enforcement to catch without guaranteed access to encrypted devices. Why is it acceptable to the FBI that the solutions they demand are necessarily incapable of stopping the very harms they claim they most need backdoors in order to stop?

Second, the history in this area demonstrates that “not foolproof” often actually means “completely insecure.” That’s because any system that is designed to allow law enforcement agencies all across the country to expeditiously decrypt devices pursuant to court order will be enormously complex, raising the likelihood of serious flaws in implementation. And, regardless of who holds them, the keys used to decrypt these devices will need to be used frequently, making it even harder to defend them from bad actors. These and other technical challenges mean that the risks of actually deploying an imperfect exceptional access mechanism to millions of phones are unacceptably high. And of course, any system implemented in the US will be demanded by repressive governments around the world.

The DOJ’s myopic focus on backdooring phones at the expense of the devices’ security is especially vexing in light of reports that law enforcement agencies are increasingly able to use commercial unlocking tools to break into essentially any device on the market. And if this is the status quo without mandated backdoor access and as vendors like Apple take steps to harden their devices against hacking, imagine how vulnerable devices could be with a legal mandate. The FBI likes to paint encryption in an apocalyptic light, suggesting that the technology drastically undermines the Bureau’s ability to do its job, but the evidence from the Apple fight and elsewhere is far less stark.

Related Cases: Apple Challenges FBI: All Writs Act Order (CA)

Beyond Implementation: Policy Considerations for Secure Messengers

One of EFF’s strengths is that we bring together technologists, lawyers, activists, and policy wonks. And we’ve been around long enough to know that while good technology is necessary for success, it is rarely sufficient. Good policy and people who will adhere to it are also crucial. People write and maintain code, people run the servers that messaging platforms depend on, and people interface with governments and respond to pressure from them.

We could never get on board with a tool—even one that made solid technical choices—unless it were developed and had its infrastructure maintained by a trustworthy group with a history of responsible stewardship of the tool. Trusting the underlying technology isn’t enough; we have to be able to trust the people and organizations behind it. Even open source tools that function in a distributed manner, rather than using a central server, have to be backed up by trustworthy developers who address technical problems in a timely manner.

Here are a few of the factors beyond technical implementation that we consider for any messenger:

  • Developers should have a solid history of responding to technical problems with the platform. This one is critical. Developers must not only patch known issues in a timely manner, they must also respond to particularly sensitive users’ issues particularly quickly. For instance, it was reported that in 2016, Telegram failed to protect its Iranian users in a timely manner in response to state-sponsored attacks. That history gives us more than a little pause.
  • Developers should have a solid history of responding to legal threats to their platform. This is also critical. Developers must not only protect their users from technical threats, but from legal threats as well. Two positive examples come readily to mind: Apple and Open Whisper Systems, the developers of iMessage and Signal respectively. Apple famously stood up for the security of their users in 2016 in response to an FBI call for a backdoor in their iPhone device encryption, and Open Whisper Systems successfully fought back against a grand jury subpoena gag order.
  • Developers should have a realistic and transparent attitude toward government and law enforcement. This is part of the criteria by which we evaluate companies in our annual Who Has Your Back? report. We’re strongly of the opinion that developers can’t just stick their heads in the sand and hope that the cops never show up. They have to have a plan, law enforcement guidelines, and a transparency report. Any tool lacking those is asking for trouble.

We discuss these concerns here to highlight the undeniable fact that developing and maintaining secure tools is a team sport. It’s not enough that an encrypted messaging app use reliable and trusted encryption primitives. It’s not enough that the tool implement those primitives well, wrap them in a good UX, and keep the product maintained. Beyond all that, the team responsible for the app must be versed in law and technology policy, be available and responsive to their users’ real-world threats, and make a real effort to address the security trade-offs their products present.


This post is part of a series on secure messaging.
Find the full series here.

Building A Secure Messenger

Given different people’s and communities’ security needs, it’s hard to arrive at a consensus on what a “secure” messenger must provide. In this post, we discuss various options for developers to consider when working towards the goal of improving a messenger’s security. A messenger that’s perfectly secure for every single person is unlikely to exist, but there are still steps that developers can take to work towards that goal.

Messengers in the real world reflect a series of compromises by their creators. Technologists often think of those compromises in terms of what encryption algorithms or protocols are chosen. But the choices that undermine security in practice often lie far away from the encryption engine.

Encryption is the Easy Part

The most basic building block towards a secure messenger is end-to-end encryption. End-to-end encryption means that a messenger must encrypt messages in a way that nobody besides the intended recipient(s)—not messaging service providers, government authorities, or third-party hackers—can read them.

The choices that undermine security in practice often lie far away from the encryption engine.

The actual encryption is not the hard part. Most tools use very similar crypto primitives (e.g. AES, SHA2/3, and ECDH). The differences in algorithm choice rarely matter. Apps have evolved to use very similar encryption protocols (e.g. Signal's Double Ratchet). We expect that any application making a good-faith effort to provide this functionality will have published a documented security design.
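
To make "crypto primitives" concrete, here is a toy Python sketch that chains two of the building blocks named above: an ECDH key agreement (X25519) whose shared secret is turned into a key for authenticated encryption (AES-GCM). Real messengers wrap these primitives in a full protocol such as the Double Ratchet, so this is an illustration of the ingredients, not how any particular app encrypts messages.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each party holds a key pair; exchanging only the public halves lets both
    # sides derive the same shared secret without sending it over the network.
    alice_priv = X25519PrivateKey.generate()
    bob_priv = X25519PrivateKey.generate()
    shared = alice_priv.exchange(bob_priv.public_key())

    # Derive a symmetric key from the shared secret, then encrypt a message.
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"toy example").derive(shared)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, b"hello Bob", None)

    # Bob derives the identical key from his private key and Alice's public key.
    bob_shared = bob_priv.exchange(alice_priv.public_key())
    bob_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"toy example").derive(bob_shared)
    print(AESGCM(bob_key).decrypt(nonce, ciphertext, None))  # b'hello Bob'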

Beyond encryption, all that’s left are the remaining details of product trade-offs and implementation difficulties, which is where the hottest debate over secure messaging lies.

Next: Important Details That Are Hard to Perfect

Every secure messaging app has to worry about code quality, user experience, and service availability. These features are hard to perfect, but putting no effort into them will render an application unusable.

When it comes to encrypted messaging apps, there’s a big difference between the theoretical security they provide and the practical security they provide. That’s because while the math behind an encryption algorithm may be flawless, a programmer may still make mistakes when translating that math into actual code. As a result, messengers vary in their implementation quality—i.e. how well the code was written, and how likely the code is to have bugs that could introduce security vulnerabilities.

Every secure messaging app has to worry about code quality, user experience, and service availability.

Code quality is particularly hard to assess, even by professional engineers. Whether the server and client codebases are open or closed source, they should be regularly audited by security specialists. When a codebase is open source, it can be reviewed by anyone, but that doesn’t mean it has been, so being open source is not necessarily a guarantee of security. Nor is it a guarantee of insecurity: modern cryptography doesn’t require the encryption algorithm to be kept secret to function.

User experience quality can also vary, specifically with regard to the user’s ability to successfully send and receive encrypted messages. The goalpost here is that the user always sends messages encrypted such that only the intended recipients can read them. The user experience should be designed around reaching that goal.

While this goal seems straightforward, it’s difficult to achieve in practice. For example, say Alice sends a message to Bob. Before Bob can read it, he gets a new phone and has to update his encryption keys. Should Alice’s phone re-send the message to Bob’s new phone automatically, or wait for Alice to approve Bob’s new encryption keys before resending the message? What if Bob’s sister shoulder-surfed his password and signed into his account on her phone? If Alice’s phone re-sends the message on her behalf, she risks accidentally sending it to Bob’s sister instead of Bob.

Different applications might have different priorities about message delivery versus sticking to encryption guarantees, but we expect applications to make thoughtful choices and give advanced users the option to choose for themselves.

Like seatbelts and two-factor authentication, the biggest failure mode of secure messengers is not using them at all. If a tool fails to reliably deliver messages in congested or hostile network conditions, users may be forced to fall back to less secure channels. Building a reliable service takes consistent, applied engineering effort that smaller services might not have the resources for, but it’s essential to ensuring the user’s security.

Adding Security Features

Past these basic tenets of implementation and user experience, the discussion becomes thornier. Security benefits can get left behind when they’re deemed to not be worth the cost to implement or when they’re judged as detrimental to ease of use. We recommend considering some features in particular to create a more robust secure messenger.

There's a big difference between the theoretical and practical security messengers provide.

Modern messengers store conversation histories in the cloud. If a secure messenger stores the conversation history unencrypted in the cloud (or encrypted under information that the service provider can access), then the messenger might as well not have been end-to-end encrypted. Messengers can choose to encrypt the backups under a key kept on the user’s device or a password that only the user knows, or they can choose not to encrypt the backups at all. If backups aren’t encrypted, they should be off by default and the user should have an opportunity to understand the implications of turning them on.
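
As a sketch of the first option, a backup can be encrypted under a key derived from a password that never leaves the user, so the storage provider only ever holds ciphertext. Everything below (the key-derivation parameters, the blob layout) is an illustrative assumption rather than any specific messenger's backup format.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def encrypt_backup(history: bytes, password: str) -> bytes:
        """Encrypt a conversation history so only someone who knows the
        password (not the cloud provider) can ever decrypt it."""
        salt = os.urandom(16)
        key = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                         iterations=600_000).derive(password.encode())
        nonce = os.urandom(12)
        # The salt and nonce are stored alongside the ciphertext; neither is secret.
        return salt + nonce + AESGCM(key).encrypt(nonce, history, None)

    blob = encrypt_backup(b"...conversation history...", "correct horse battery staple")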

A secure application should have secure auto-updating mechanisms to quickly mitigate security problems. An out-of-date messenger with known security flaws is potentially vulnerable to more potent attacks than an up-to-date unencrypted messenger.

Perhaps surprisingly, a messenger being marketed as secure can undermine the goals of security. If having the application on a user’s phone is a marker that the user is trying to stay secure, that could make the person’s situation more dangerous. Say an outside party discovers that a person has a “secure” app on their phone. That app could be used as evidence that the person is engaging in an activity that the outside party doesn’t approve of, and invite retribution. The ideal secure messenger is primarily a messenger of sufficiently high popularity that its use is not suspicious.

An app may also choose to provide reliable indicators of compromise that are recognizable to an end-user, including in the event of machine-in-the-middle attacks or a compromised service provider. The application should also allow users to verify all their communications are encrypted to the correct person (i.e. fingerprint verification).
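
Fingerprint verification generally boils down to both parties comparing a short digest of the long-term public keys in the conversation, whether read aloud or scanned as a QR code. A minimal sketch of the idea follows; real apps use their own formats, such as Signal's safety numbers.

    import hashlib

    def fingerprint(public_key_bytes: bytes) -> str:
        """A short, human-comparable digest of a public key."""
        digest = hashlib.sha256(public_key_bytes).hexdigest()[:32]
        # Group into chunks of four so two people can compare it out loud
        # or check it against a code scanned from the other person's phone.
        return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

    # Hypothetical key bytes; in practice these come from the messaging app.
    print(fingerprint(b"alice's long-term public key bytes"))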

Finally, we recommend allowing users to choose an alias, instead of requiring that their only identifier be a phone number. For vulnerable users, a major identifier like a phone number could be private information; they shouldn’t have to give it out to get the benefits that a secure messenger provides.

Features to Look Forward To

In this section, we discuss the stretch goals that no major app has managed to implement yet, but that less popular apps or academic prototypes might have.

We’d love to see academics and experimental platforms begin working on delivering these in a scalable manner, but we don’t expect a major application to worry about these just yet.

While protecting the contents of messages is important, so too is protecting the metadata of who is talking to whom and when. When messages go through a central server, this is hard to mask. Hiding the network metadata is a feature we’d like to see grow past the experimental phase. Until then, we expect to see services retain only the metadata absolutely necessary to make the service function, and for the minimum possible time.

There's no consensus on what the best combination of features is, and there may never be.

Most messengers let users discover which of their existing contacts are already using the service, but they do so in a way that reveals the entire contents of a contact list to the service. This means that to find out which of your friends are using a service, you have to tell the service provider every person whose contact info you’ve saved to your phone, and you have no guarantee that they’re not saving this data to figure out who’s friends with whom—even for people who don’t use the service. Some messengers are already taking steps to make this information leakage a thing of the past.
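
To see what "revealing the entire contents of a contact list" means in practice, compare naive discovery with a hashed variant in the sketch below. Hashing alone is a weak fix, since phone numbers are few enough to brute-force; the steps some services are taking go well beyond this (for example, private set intersection or trusted hardware), so this is only an illustration of the problem.

    import hashlib

    contacts = ["+15551234567", "+15557654321"]

    # Naive discovery: every saved number is sent to the provider in the clear,
    # telling it exactly who you know, even people who never use the service.
    naive_upload = contacts

    # Hashed discovery: only digests are uploaded. Better, but still weak:
    # a determined provider can brute-force the small space of phone numbers.
    hashed_upload = [hashlib.sha256(number.encode()).hexdigest() for number in contacts]

    print(naive_upload)
    print(hashed_upload)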

Pushing reliable security updates is of prime importance. But automatically accepting new versions of applications means that users might inadvertently download a backdoored update onto their device. Using reproducible builds and binary transparency, users can at least ensure that the same update gets pushed to every user, so that targeted attacks become infeasible. Then, there’s a better chance that the backdooring will get noticed.
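
The underlying check is straightforward: with reproducible builds, the same source version always produces byte-identical binaries, so any user (or a public transparency log) can confirm they received the same update as everyone else. A toy sketch of that comparison, with all data hypothetical:

    import hashlib

    def digest(update_bytes: bytes) -> str:
        return hashlib.sha256(update_bytes).hexdigest()

    # With reproducible builds, two downloads of the same version should
    # hash identically, no matter who downloaded them or from where.
    alice_download = b"bytes of messenger update, version 3.1.4"
    bob_download = b"bytes of messenger update, version 3.1.4"

    if digest(alice_download) == digest(bob_download):
        print("same build delivered to both users")
    else:
        print("someone received a different, possibly backdoored, build")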

When a messenger allows group messaging, advanced security properties like future secrecy are lost. New protocols aim to fix these holes and give group messaging the security properties that users deserve.

In the secure messaging community, there's no consensus on what the best combination of features is, and there may never be. So while there will never be one perfectly secure messenger to rule them all, technical questions and conversations like the ones described above can move us towards better messengers providing more types of security.


This post is part of a series on secure messaging.
Find the full series here.

Thinking About What You Need In A Secure Messenger

All the features that determine the security of a messaging app can be confusing and hard to keep track of. Beyond the technical jargon, the most important question is: What do you need out of a messenger? Why are you looking for more security in your communications in the first place?

The goal of this post is not to assess which messenger provides the best “security” features by certain technical standards, but to help you think about precisely the kind of security you need.

Here are some examples of questions to guide you through potential concerns and line them up with certain secure messaging features. These questions are by no means comprehensive, but they can help get you into the mindset of evaluating messengers in terms of your specific needs.

Are you worried about your messages being intercepted by governments or service providers?

Are you worried about people in your physical environment reading your messages?

Do you want to avoid giving out your phone number?

How risky would a mistake be? Do you need a “foolproof” encrypted messenger?

Are you more concerned about the “Puddle Test” or the “Hammer Test”?

Do you need features to help you verify the identity of the person you’re talking to?

We can’t capture every person’s concerns or every secure messaging feature with a handful of questions. Other important issues might include corporate ownership, country-specific considerations, or background information on a company’s security decisions.

The more clearly you understand what you want and need out of a messenger, the easier it will be to navigate the wealth of extensive, conflicting, and sometimes outdated information out there. When recommendations conflict, you can use these kinds of questions to decide what direction is right for you. And when conditions change, they can help you decide whether it’s time to change your strategy and find new secure apps or tools.


This post is part of a series on secure messaging.
Find the full series here.

Are you worried about your messages being intercepted by governments or service providers?

End-to-end encryption ensures that a message is turned into a secret message by its original sender (the first “end”), and decoded only by its final recipient (the second “end”). This means that no one can “listen in” and eavesdrop on your messages in the middle, including the messaging service provider itself. Somewhat counter-intuitively, just because you have messages in an app on your phone does not mean that the app company itself can see them. This is a core characteristic of good encryption: even the people who design and deploy it cannot themselves break it.

Do not confuse end-to-end encryption with transport-layer encryption (also known as “network encryption”). While end-to-end encryption protects your messages all the way from your device to your recipient’s device, transport-layer encryption only protects them as they travel from your device to the app’s servers and from the app’s servers to your recipient’s device. In the middle, your messaging service provider can see unencrypted copies of your messages—and, in the case of legal requests, has them available to hand over to law enforcement.

One way to think about the difference between end-to-end and transport-layer encryption is the concept of trust. Transport-layer encryption requires you to trust a lot of different parties with the contents of your messages: the app or service you are using, the government of the country where the service is incorporated, and the government of the country where its servers sit. However, you shouldn’t have to trust corporations or governments with your messages in order to communicate. With end-to-end encryption, you don’t have to. As a matter of privacy hygiene, it is generally better to go with services that support end-to-end encryption whenever possible.

Are you worried about people in your physical environment reading your messages?

If you are concerned that someone in your physical environment—maybe a spouse, teacher, parent, or employer—might try to take your device and read your messages off the screen directly, ephemeral or “disappearing” messages might be an important feature for you. This generally means you are able to set messages to automatically disappear after a certain amount of time, leaving less content on your device for others to see.

It’s important to remember, though, that just because messages disappear on your device doesn’t mean they disappear everywhere. Your recipient could always take a screenshot of the message before it disappears. And if the app doesn’t use end-to-end encryption (see above), the app provider might also have a copy of your message.

(Outside of messenger choice, you can also make your device more physically secure by enabling full-disk encryption with a password.)

Do you want to avoid giving out your phone number?

Using your phone number as your messenger “username” can be convenient. It’s simple to remember, and makes it easy to find friends using the same service. However, a phone number is often a personally identifying piece of information, and you might not want to give it out to professional contacts, new acquaintances, or other people you don’t necessarily trust.

This can be a concern for women worried about harassment in particular. Activists and others involved in subversive work can also have a problem with this, as it can be dangerous to link the same phone number to both the messenger one uses for activism and the messenger one uses for communicating with friends and family.

Messengers that allow aliases can help. This usually means letting you choose a “username” or identifier that is not your phone number. Some apps also let you create multiple aliases. Even if a messenger requires your phone number to sign up, it may still allow you to use a non-phone number alias as your public-facing username.

How risky would a mistake be? Do you need a “foolproof” encrypted messenger?

Depending on your situation, it’s likely that the last thing you want is to send information unencrypted that you meant to send encrypted. If this is important to you, messengers that encrypt by default or only support encrypted communication are worth looking into.

When a messenger does not encrypt by default and instead offers a special “secret” encrypted mode, users may make mistakes and send unencrypted messages without realizing it. This can also happen because of service issues; when connectivity poses a problem, some apps may provide an unencrypted “fallback” option for messages rather than wait until an encrypted message can be sent.

Are you more concerned about the “Puddle Test” or the “Hammer Test”?

Are you more worried about the possibility of losing your messages forever, or about someone else being able to read them? The “Puddle Test” reflects the first concern, and the “Hammer Test” reflects the second.

Messaging developers sometimes talk about the “Puddle Test”: If you accidentally dropped your phone in a Puddle and ruined it, would your messages be lost forever? Would you be able to recover them? Conversely, there’s the “Hammer Test”: If you and a contact intentionally took a Hammer to your phones or otherwise tried to delete all your messages, would they really be deleted? Would someone else be able to recover them?

There is a tension between these two potential situations: accidentally losing your messages, and intentionally deleting them. Is it more important to you that your messages be easy to recover if you accidentally lose them, or difficult to recover if you intentionally delete them?

If the hypothetical “Hammer Test” reflects your concerns, you may want to learn about a security property called forward secrecy. If an app is forward-secret, then you could delete all your messages and hand someone else your phone and they would not be able to recover them. Even if they had been surveilling you externally and managed to compromise the encryption keys protecting your messages, they still would not be able to read your past messages.

Cloud backups of your messages can throw a wrench in the “Hammer Test” described above. Backups help you pass the “Puddle Test,” but make it much harder to intentionally "hammer" your old messages out of existence. Apps that back up your messages unencrypted store a plaintext copy of your messages outside your device. An unencrypted copy like this can defeat the purpose of forward secrecy, and can stop your deleted messages from really being deleted. For people who are more worried about the “Puddle Test,” this can be a desirable feature. For others, it can be a serious danger.

Do you need features to help you verify the identity of the person you’re talking to?

Most people can be reasonably sure that the contact they are messaging with is who they think it is. For targeted people in high-risk situations, however, it can be critical to be absolutely certain that no one else is viewing or intercepting your conversation. Therefore, this question is for those highest-risk users.

Apps with contact verification can help you be certain that no one outside the intended recipient(s) is viewing your conversation. This feature lets you confirm your recipient’s unique cryptographic “fingerprint” and thus their identity. Usually this takes the form of an in-real-life check; you might scan QR codes on each other’s phones, or you might call or talk to your friend to make sure that the fingerprint code you have for them matches the one they have for you.

When one of your contacts’ fingerprints changes, that is an indicator that something about their cryptographic identity has changed. Someone else might have tricked your app into accepting their cryptographic keys instead—or it might also just mean that they got a new phone. Apps can deal with this in two ways: key change notifications, which alert you to the change while not interfering with messages, or key change confirmations, which require you to acknowledge the change before any messages are sent. The latter generally offers a higher level of protection for vulnerable users who cannot risk misfired messages.


This post is part of a series on secure messaging.
Find the full series here.

Why We Can’t Give You A Recommendation

No single messaging app can perfectly meet everyone’s security and communication needs, so we can’t make a recommendation without considering the details of a particular person’s or group’s situation. Straightforward answers are rarely correct for everyone—and if they’re correct now, they might not be correct in the future.

At time of writing, if we were locked in a room and told we could only leave if we gave a simple, direct answer to the question of what messenger the average person should use, the answer we at EFF would reluctantly give is, “Probably Signal or WhatsApp.” Both employ the well-regarded Signal protocol for end-to-end encryption. Signal stands out for collecting minimal metadata on users, meaning it has little to nothing to hand over if law enforcement requests user information. WhatsApp’s strength is that it is easy to use, making secure messaging more accessible for people of varying skill levels and interests.

No single messaging app can perfectly meet everyone’s security and communication needs.

However, once let out of the room, we would go on to describe the significant trade-offs. While Signal offers strong security features, its reliability can be inconsistent. Using it in preference to a more mainstream tool might attract unwanted attention and scrutiny, and pointing high-risk users exclusively to Signal could make that problem worse. And although WhatsApp’s user-friendly features produce a smooth user experience, they can also undermine encryption; settings prompts like automatic cloud backups, for example, can store unencrypted message content with a third party and effectively defeat the purpose of end-to-end encryption.

Any of these pros or cons can change suddenly or even imperceptibly. WhatsApp could change its policies around sharing user data with its parent company Facebook, like it did in 2016. Signal could be coerced through secret legal processes into logging users’ metadata without notifying them. A newly discovered flaw in the design of either messenger could make all of their protections useless in the future. An unpublicized flaw might mean that none of those protections work right now.

More generally, security features are not the only variables that matter in choosing a secure messenger. An app with great security features is worthless if none of your friends and contacts use it, and the most popular and widely used apps can vary significantly by country and community. Poor quality of service or having to pay for an app can also make a messenger unsuitable for some people. And device selection also plays a role; for an iPhone user who communicates mostly with other iPhone users, for example, iMessage may be a great option (since iMessages between iPhones are end-to-end encrypted by default).

Security features are not the only variables that matter in choosing a secure messenger.

The question of who or what someone is worried about also influences which messenger is right for them. End-to-end encryption is great for preventing companies and governments from accessing your messages. But for many people, companies and governments are not the biggest threat, and therefore end-to-end encryption might not be the biggest priority. For example, if someone is worried about a spouse, parent, or employer with physical access to their device, the ability to send ephemeral, “disappearing” messages might be their deciding factor in choosing a messenger.

Most likely, even a confident recommendation to one person might include more than one messenger. It’s not unusual to use a number of different tools for different contexts, such as work, family, different groups of friends, or activism and community organizing.

Based on all of these factors and more, any recommendation is much more like a reasonable guess than an indisputable fact. A messenger recommendation must acknowledge all of these factors—and, most importantly, the ways they change over time. It’s hard enough to do that for a specific individual, and nearly impossible to do it for a general audience.


This post is part of a series on secure messaging.
Find the full series here.
