Surveillance Self-Defense Blog

Attention PGP Users: New Vulnerabilities Require You To Take Action Now

A group of European security researchers have released a warning about a set of vulnerabilities affecting users of PGP and S/MIME. EFF has been in communication with the research team, and can confirm that these vulnerabilities pose an immediate risk to those using these tools for email communication, including the potential exposure of the contents of past messages.

The full details will be published in a paper on Tuesday at 07:00 AM UTC (3:00 AM Eastern, midnight Pacific). In order to reduce the short-term risk, we and the researchers have agreed to warn the wider PGP user community in advance of its full publication.

Our advice, which mirrors that of the researchers, is to immediately disable and/or uninstall tools that automatically decrypt PGP-encrypted email. Until the flaws described in the paper are more widely understood and fixed, users should arrange for the use of alternative end-to-end secure channels, such as Signal, and temporarily stop sending and especially reading PGP-encrypted email.

Please refer to these guides on how to temporarily disable PGP plug-ins in:

Thunderbird with Enigmail
Apple Mail with GPGTools
Outlook with Gpg4win

These steps are intended as a temporary, conservative stopgap until the immediate risk of the exploit has passed and the flaws have been mitigated by the wider community.

We will release a more detailed explanation and analysis when more information is publicly available.

Bring In The Nerds: EFF Introduces Actual Encryption Experts to U.S. Senate Staff

Earlier today in the U.S. Capitol Visitor Center, EFF convened a closed-door briefing for Senate staff about the realities of device encryption. While policymakers hear frequently from the FBI and the Department of Justice about the dangers of encryption and the so-called Going Dark problem, they very rarely hear from actual engineers, cryptographers, and computer scientists. Indeed, the usual suspects testifying before Congress on encryption are nearly the antithesis of technical experts.

The all-star lineup of panelists included Dr. Matt Blaze, professor of computer science at the University of Pennsylvania; Dr. Susan Landau, professor of cybersecurity and policy at Tufts University; Erik Neuenschwander, Apple’s manager of user privacy; and EFF’s tech policy director, Dr. Jeremy Gillula.

EFF Tech Policy Director Dr. Jeremy Gillula (far left) and Legislative Analyst India McKinney (far right) joined an all-star lineup of panelists to brief Senate staff on encryption.

The discussion focused on renewed calls by the FBI and DOJ to create mechanisms to enable “exceptional access” to encrypted devices. EFF's legislative analyst India McKinney opened the briefing by assuring staff that the goal of the panel was not to attack the FBI’s proposals from the perspective of policy or ideology. Instead, our goal was to give a technical description of how device encryption actually works and answer staff questions about the risks that exceptional access mechanisms necessarily introduce into the ecosystem.

Dr. Blaze framed his remarks around what he called an undeniable “cybersecurity crisis” gripping the critical information systems we all rely on. Failures and data breaches are a daily occurrence that only come to the public’s attention when they reach the catastrophic scale of the Equifax breach. As Blaze pointed out, “security is hard,” and the presence of bugs and unintended behavior in software is one of the oldest and most fundamental problems in computer science. These issues only become more intense as systems get more complex, giving rise to an “arms race” between those who find and fix vulnerabilities in software and those who exploit them.

According to Blaze, the one bright spot is the increasing deployment of encryption to protect sensitive data, but these encryption mechanisms remain “fragile.” Implementing encryption at scale remains an incredibly complex engineering task. Blaze said that computer scientists “barely have their heads above water,” and proposals that would mandate law enforcement access to encrypted data would effectively take away one of the very few tools we have for managing the security of the infrastructure our country has come to depend on. These proposals make systems more complex and drastically increase the attack surface available to outside attackers.

Blaze noted that the CLEAR key escrow system put forth by former Microsoft CTO Ray Ozzie, recently written up in Wired, only covers a cryptographic protocol—“the easy part”—which itself has already been demonstrated to be flawed. Even if those flaws could be satisfactorily addressed, that would still leave the enormous difficulty of developing and implementing the protocol in complex, real-world systems. Surmounting these challenges, Blaze said, would require a breakthrough so momentous that it would lead to the creation of a Nobel Prize in computer science just so it could be adequately recognized.

Professor Landau began her remarks by pointing out that this is not at all a new debate: Professor Blaze was one of the technical experts who broke the NSA’s Clipper Chip proposal in the 1990s, and key escrow as the Clipper Chip described it isn’t much different from modern calls for extraordinary access. Turning to the most current key escrow proposal, Ozzie’s CLEAR, Professor Landau noted that crypto algorithms are built through exhaustive peer review. CLEAR, by contrast, had its most public presentation in Wired Magazine and has yet to be subjected to rigorous peer review, even though it addresses only a tiny portion of the systems problem that “exceptional access” presents, and the proposal has already been substantially discredited.

Professor Landau concluded by noting that the National Academy of Sciences study showed that the very first two questions we need to ask about an “extraordinary access” mechanism are: does it work at scale, and what security risks does it impose? The FBI has steadfastly ignored both of those problems.

“Complexity is the enemy of security. If you want a phone that’s unlockable by any government, you might as well not lock the phone in the first place.” - Professor Susan Landau

“We’re not looking at privacy versus security. Instead, we’re looking at efficiency of law enforcement investigations versus security, and there are other ways of improving the efficiency of investigations without harming security,” Landau said. “Complexity is the enemy of security. If you want a phone that’s unlockable by any government, you might as well not lock the phone in the first place.”

Apple’s Neuenschwander presented an on-the-ground look at how Apple weighs tradeoffs between functionality and user privacy. In the case of encryption of iPhones, he echoed the concerns raised by both Blaze and Landau about the complexity of implementing secure systems, noting that Apple must continually work to improve security as attackers become more sophisticated. As a result, Apple determined that the best—and only—way to secure user data was to simply take itself out of the equation by not maintaining control of any encryption keys. By contrast, if Apple were to hold a store of keys to decrypt users’ phones, that vault would immediately become a massive target, no matter what precautions Apple took to protect it. Though the days of the Wild West are long gone, Neuenschwander pointed out that bank robberies remain quite prevalent: 4,200 in 2016 alone. Why? Because that’s where the money is. Any exceptional access proposal would take Apple from a regime of storing zero keys to holding many keys, making itself ripe for digital bank robbery.

EFF’s Dr. Gillula spoke last. He opened by explaining that getting encryption right is hard. Really hard. It’s not that cryptographers spend years working on a particular cryptographic mechanism and succeed; rather, they spend years and years working on systems that other cryptographers are then able to break in mere minutes. Sometimes those flaws are in the encryption algorithm itself, but much more often they are in the engineering implementation of that algorithm.

And that’s what companies like Cellebrite and Grayshift do. They sell devices that break device security—not by breaking the encryption on the device, but by finding flaws in its implementation. Indeed, there are commercial tools available that can break into every phone on the market today. The recent OIG report acknowledged exactly that: there were elements within the FBI that knew there were options other than forcing Apple to build an exceptional access system.

In conclusion, Gillula noted that in the cat-and-mouse game that is computer security, mandating exceptional access would freeze the defenders’ state of the art, while allowing attackers to progress without limit.

We were impressed by the questions the Senate staffers asked and by their high level of engagement. Even though we’ve entered the third decade of the “Crypto Wars,” this is a debate that’s not going away any time soon. But we were glad for the opportunity to bring such a powerful panel of experts to give Senate staff the unfiltered technical lowdown on encryption.

Related Cases: Apple Challenges FBI: All Writs Act Order (CA)

There is No Middle Ground on Encryption

Encryption is back in the headlines again, with government officials insisting that they still need to compromise our security via a backdoor for law enforcement. Opponents of encryption imagine that there is a “middle ground” approach that allows for strong encryption but with “exceptional access” for law enforcement. Government officials claim that technology companies are creating a world where people can commit crimes without fear of detection.

Despite this renewed rhetoric, most experts continue to agree that exceptional access, no matter how you implement it, weakens security. The terminology might have changed, but the essential question has not: should technology companies be forced to develop a system that inherently harms their users? The answer hasn’t changed either: no.

Let us count the reasons why. First, if mandated by the government, exceptional access would violate the First Amendment under the compelled speech doctrine, which prevents the government from forcing an individual, company, or organization to make a statement, publish certain information, or even salute the flag.

Second, mandating that tech companies weaken their security puts users at risk. In the 1990s, the White House introduced the Clipper Chip, a plan for building backdoors into communications technologies. A security researcher found enormous security flaws in the system, showing that a brute-force attack could likely compromise the technology.

Third, exceptional access would harm U.S. businesses and chill innovation. The United States government can’t stop development on encryption technologies; it can merely push it overseas.

Finally, exceptional access fails at its one stated task—stopping crime. No matter what requirements the government placed on U.S. companies, sophisticated criminals could still get strong encryption from non-U.S. sources that aren’t subject to that type of regulation.

There’s No Such Thing as a Safe Backdoor

Despite the broad consensus among technology experts, some policymakers keep trying to push an impossible “middle ground.” Last month, after years of research, the National Academy of Sciences released a report on encryption and exceptional access that conflated the question of whether the government should mandate ‘exceptional access’ to the contents of encrypted communications with the question of how the government could possibly accomplish this mandate without compromising user security. Noted crypto expert Susan Landau worried that some might misinterpret the report as providing evidence that an exceptional access system is close to being securely built:

"The Academies report does discuss approaches to ‘building ... secure systems’ that provide exceptional access—but these are initial approaches only…The presentations to the Academies committee were brief descriptions of ideas by three smart computer scientists, not detailed architectures of how such systems would work. There's a huge difference between a sketch of an idea and an actual implementation—Leonardo da Vinci’s drawings for a flying machine as opposed to the Wright brothers’ plane at Kitty Hawk."

And it didn’t stop with the NAS. Also last month, the international think-tank EastWest Institute published a report that proposed “two balanced, risk-informed, middle-ground encryption policy regimes in support of more constructive dialogue.”

Finally, just last week, Wired published a story featuring Microsoft’s former chief technology officer Ray Ozzie and his attempt to find an exceptional access model for phones that can supposedly satisfy “both law enforcement and privacy purists.” While Ozzie may have meant well, experts like Matt Green, Steve Bellovin, Matt Blaze, Rob Graham, and others were quick to point out its substantial flaws. No system is perfect, but a backdoor system for billions of phones magnifies the consequences of a flaw, and even the best and the brightest in computer security don’t know how to make a system bug-free.

The reframing keeps coming, but the truth remains. Any efforts for “constructive dialogue” neglect a major obstacle: the government’s starting point for this dialogue is diametrically opposed to the very purpose of encryption. To see why, read on.

Encryption: A User’s Guide to Keys

Encryption is frequently described using analogies to “keys”—whoever has a key can decrypt, or read, information that is behind a “lock.” But if we back up, we can see the problems with that metaphor.

In ancient times, encryption was achieved using sets of instructions that we now call “unkeyed ciphers,” which explained how to both scramble and unscramble messages. These ciphers sometimes used simple rules, like taking alphanumeric text and then rotating every letter or number forward by one, so A becomes B, B becomes C, and so on. Ciphers can also use more complex rules, like translating a message’s letters to numbers and then running those numbers through a mathematical equation to get a new string of numbers that—so long as the cipher is unknown—is indecipherable to an outside party.
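
To make the rotation rule concrete, here is a toy sketch in Python (purely illustrative, not the code of any historical cipher). Because the scheme has no key, anyone who learns the rule can reverse it, which is exactly why unkeyed ciphers stop protecting anything once the instructions leak.

    # Toy "unkeyed cipher": rotate every letter forward by one (A -> B, ..., Z -> A).
    # The entire secret is the rule itself; anyone who knows it can unscramble messages.
    def rot1(text, direction=1):
        out = []
        for ch in text:
            if ch.isalpha():
                base = ord("A") if ch.isupper() else ord("a")
                out.append(chr(base + (ord(ch) - base + direction) % 26))
            else:
                out.append(ch)
        return "".join(out)

    scrambled = rot1("MEET AT NOON")   # -> "NFFU BU OPPO"
    recovered = rot1(scrambled, -1)    # -> "MEET AT NOON"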

As encryption progressed, early cryptographers started to use “keyed ciphers” with ever-stronger security. These ciphers use secret information called a “key” to control the ability to encrypt and decrypt.

Keys continue to play a major role in modern encryption, but there is more than one kind of key.

Some digital devices encrypt stored data, and the password entered to operate the device unlocks the random key used to encrypt that data. But for messages between people—like emails, or chats—all modern encryption systems are based on “public key encryption.” The advantage of this form of encryption is that the people communicating don’t have to have a secret (like a password) in common ahead of time.

In public key encryption, each user—which can be a person or an entity, like a company, a website, or a network server—gets two related keys. (Sometimes, more pairs are generated than just one.) There is one key to encrypt data, and another key to decrypt data. The key that encrypts data is called the “public key,” and it can be shared with anyone. It’s sort of like a public instruction set—anyone that wishes to send encrypted messages to a person can use their public instruction set to encrypt data according to those rules. The second key is called a “private key,” and it is never shared. This private key decrypts data that has been encrypted using a corresponding public key.

In modern encryption, these keys aren’t used for encrypting and decrypting messages themselves. Instead, the keys are used to encrypt and decrypt an entirely separate key that, itself, both encrypts and decrypts data. This separate key, called a session key, is used with a traditional symmetric cipher—it represents a secret set of instructions that can be used by a message sender and receiver to scramble and unscramble a message.

Public key encryption ensures that a session key is secure and can’t be intercepted and used by outsiders. Private keys hold the secret to session keys, which hold the secret to encrypted messages. The fewer opportunities for private encryption keys to be stolen or accidentally released, the greater the security.
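
As a deliberately simplified illustration of that layering, the sketch below uses Python's cryptography package: the sender encrypts the message with a fresh symmetric session key, and only that session key is encrypted with the recipient's public key. The message text and key size are arbitrary, and real systems such as PGP add signatures, key management, and other machinery, but the basic shape is the same.

    # Hybrid encryption sketch: a symmetric session key protects the message,
    # and public key encryption protects only the session key.
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Recipient's key pair: the public key can be shared with anyone; the private key never is.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Sender: encrypt the message with a fresh session key...
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(b"meet me at noon")

    # ...then encrypt the session key itself under the recipient's public key.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(session_key, oaep)

    # Recipient: unwrap the session key with the private key, then read the message.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    assert Fernet(recovered_key).decrypt(ciphertext) == b"meet me at noon"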

Yet this is precisely what exceptional access demands—more keys, more access, and more vulnerability. Exceptional access, at its core, erodes encryption security, granting law enforcement either its own set of private keys for every encrypted device and individual who sends and receives encrypted messages, or requiring the creation—and secure storage—of duplicate keys to be handed over.

And that’s why law enforcement’s proposals for a “responsible solution” are irresponsible. Any system that includes a separate channel for another party to access it is inherently less secure than a system that does not have that channel. In encryption systems, the very existence of duplicated or separate, devoted keys makes those keys attractive for bad actors. It would be like creating duplicate, physical keys for a bank vault—the risk of one of those keys getting lost, or stolen, is bad enough. Copying that key (for law enforcement agencies in the U.S. and potentially around the globe) multiplies the risk.

There is no good faith compromise in the government’s exceptional access request. The “middle ground” between what law enforcement agencies want—bad encryption—and what users want—good encryption—is still just bad encryption.

In a 2017 interview with Politico (paywall), Deputy Attorney General Rod Rosenstein conceded that a device with exceptional access “would be less secure than a product that didn’t have that ability.” He continued:

“And that may be, that’s a legitimate issue that we can debate—how much risk are we willing to take in return for the reward?”

The answer to that question has to be informed by solid information about what we risk when we give up strong encryption. So this week EFF is bringing the nerds (aka technologists) to Washington, D.C. to host an informative briefing for Senate staffers. We need all policymakers to get this right, and not fall prey to rhetoric over reality.

 

Related Cases: Apple Challenges FBI: All Writs Act Order (CA)

We’re in the Uncanny Valley of Targeted Advertising

Mark Zuckerberg, Facebook’s founder and CEO, thinks people want targeted advertising. The “overwhelming feedback,” he said multiple times during his congressional testimony, was that people want to see “good and relevant” ads. Why, then, are so many Facebook users, including lawmakers in the U.S. Senate and House, so fed up and creeped out by the uncannily on-the-nose ads? Targeted advertising on Facebook has gotten to the point that it’s so “good,” it’s bad—for users, who feel surveilled by the platform, and for Facebook, which is rapidly losing its users’ trust. But there’s a solution, which Facebook must prioritize: stop collecting data from users without their knowledge or explicit, affirmative consent.

It should never be the user’s responsibility to have to guess what’s happening behind the curtain.

Right now, most users don’t have a clear understanding of all the types of data that Facebook collects or how it’s analyzed and used for targeting (or for anything else). While the company has heaps of information about its users to comb through, if you as a user want to know why you’re being targeted for an ad, for example, you’re mostly out of luck. Sure, there's a “why was I shown this” option on an individual ad, but each generally reveals only bland categories like “Over 18 and living in California”—and to get an even semi-accurate picture of all the ways you can be targeted, you’d have to click through various sections, one at a time, on your “Ad Preferences” page.

Text from Facebook explaining why an ad has been shown to the user.

Even more opaque are categories of targeting called “Lookalike audiences.” Because Facebook has so many users—over 2 billion per month—it can automatically take a list of people supplied by advertisers, such as current customers or people who like a Facebook page, and then do behind-the-scenes magic to create a new audience of similar users to beam ads at.

Facebook does this by identifying “the common qualities” of the people in the uploaded list, such as their related demographic information or interests, and finding people who are similar to (or "look like") them, to create an all-new list. But those comparisons are made behind the curtain, so it’s impossible to know what data, specifically, Facebook is using to decide that you look like another group of users. And to top it off: much of what’s being used for targeting generally isn’t information that users have explicitly shared—it’s information that’s been actively—and silently—taken from them.

Telling the user that targeting data is provided by a third party like Acxiom doesn’t give any useful information about the data itself; instead, it raises more unanswerable questions about how that data is collected.

Just as vague is targeting that uses data provided by third-party “data brokers.” In March, Facebook announced it would discontinue one aspect of this data sharing, called “partner categories,” in which data brokers like Acxiom and Experian combine their own massive datasets with Facebook’s to target users. Facebook has touted changes like this as ways to “help improve people’s privacy,” but they won’t have a meaningful impact on our knowledge of how data is collected and used.

As a result, the ads we see on Facebook—and other places online where behaviors are tracked to target users—creep us out. Whether they’re for shoes that we’ve been considering buying to replace ours, for restaurants we happened to visit once, or even for toys that our children have mentioned, the ads can indicate a knowledge of our private lives that the company has consistently failed to admit to having, and moreover, knowledge that was supplied via Facebook’s AI, which makes inferences about people—such as their political affiliation and race—that are clearly outside many users’ comfort zones. This AI-based ad targeting on Facebook is so obscured in its functioning that even Zuckerberg thinks it’s a problem. “Right now, a lot of our AI systems make decisions in ways that people don't really understand,” he told Congress during his testimony. “And I don't think that in 10 or 20 years, in the future that we all want to build, we want to end up with systems that people don't understand how they're making decisions.”

But we don’t have 10 or 20 years. We’ve entered an uncanny valley of opaque algorithms spinning up targeted ads that feel so personal and invasive that members of both the House and the Senate mentioned the spreading myth that the company wiretaps its users’ phones. It’s understandable that users have come to conclusions like this, given the creeped-out feelings that they rightfully experience. The concern that you’re being surveilled persists, essentially, because you are being surveilled—just not via your microphone. Facebook seems to possess an almost human understanding of us. Like the unease and discomfort people sometimes experience interacting with a not-quite-human-like robot, being targeted with uncanny accuracy by machines, based on private, behavioral information that we never actively gave out, feels creepy, uncomfortable, and unsettling.

The trouble isn’t that personalization is itself creepy. When AI is effective it can produce amazing results that feel personalized in a delightful way—but only when we actively participate in teaching the system what we like and don't like. AI-generated playlists, movie recommendations, and other algorithm-powered suggestions work to benefit users because the inputs are transparent and based on information we knowingly give those platforms, like songs and television shows we like. AI that feels accurate, transparent, and friendly can bring users out of the uncanny valley to a place where they no longer feel unsettled, but instead, assisted.

But apply a similar level of technological prowess to other parts of our heavily surveilled, AI-infused lives, and we arrive in a world where platforms like Facebook creepily, uncannily show us advertisements for products we only vaguely remember considering purchasing, or for people we met only once or had just thought about recently—all because the amount of data being hoovered up and churned through obscure algorithms is completely unknown to us.

Unlike the feeling that a friend put together a music playlist just for us, Facebook’s hyper-personalized advertising—and other AI that presents us with surprising, frighteningly accurate information specifically relevant to us—leaves us feeling surveilled, but not known. Instead of feeling wonder at how accurate the content is, we feel like we’ve been tricked.

To keep us out of the uncanny valley, advertisers and platforms like Facebook must stop compiling data about users without their knowledge or explicit consent. Zuckerberg told Congress multiple times that “an ad-supported service is the most aligned with [Facebook’s] mission of trying to help connect everyone in the world.” As long as Facebook’s business model is built around surveillance and offering advertisers access to users’ private data for targeting purposes, it’s unlikely we’ll escape the discomfort we get when we’re targeted on the site. Steps such as being more transparent about what is collected, though helpful, aren’t enough. Even if users know what Facebook collects and how it uses that data, having no way of controlling data collection, and more importantly, no say in the collection in the first place, will still leave us stuck in the uncanny valley.

Even Facebook’s “helpful” features, such as reminding us of birthdays we had forgotten, showing pictures of relatives we’d just been thinking of (as one senator mentioned), or displaying upcoming event information we might be interested in, will continue to occasionally make us feel like someone is watching. We'll only be amazed (and not repulsed) by targeted advertising—and by features like this—if we feel we have a hand in shaping what is targeted at us. But it should never be the user’s responsibility to have to guess what’s happening behind the curtain.

While advertisers must be ethical in how they use tracking and targeting, a more structural change needs to occur. For the sake of the products, platforms, and applications of the present and future, developers must not only be more transparent about what they’re tracking, how they’re using those inputs, and how AI is making inferences about private data. They must also stop collecting data from users without their explicit consent. With transparency, users might be able to make their way out of the uncanny valley—but only to reach an uncanny plateau. Only through explicit affirmative consent—where users not only know but have a hand in deciding the inputs and the algorithms that are used to personalize content and ads—can we enjoy the “future that we all want to build,” as Zuckerberg put it.

Arthur C. Clarke famously said that “any sufficiently advanced technology is indistinguishable from magic”—and we should insist that the magic make us feel wonder, not revulsion. Otherwise, we may end up stuck on the uncanny plateau, becoming increasingly distrustful of AI in general and, instead of enjoying its benefits, fearing its unsettling, not-quite-human understanding.

Congressmembers Raise Doubts About the “Going Dark” Problem

In the wake of a damning report by the DOJ Office of Inspector General (OIG), Congress is asking questions about the FBI’s handling of the locked iPhone in the San Bernardino case and its repeated claims that widespread encryption is leading to a “Going Dark” problem. For years, DOJ and FBI officials have claimed that encryption is thwarting law enforcement and intelligence operations, pointing to large numbers of encrypted phones that the government allegedly cannot access as part of its investigations. In the San Bernardino case specifically, the FBI maintained that only Apple could assist with unlocking the shooter’s phone.

But the OIG report revealed that the Bureau had other resources at its disposal, and on Friday members of the House Judiciary Committee sent a letter to FBI Director Christopher Wray that included several questions to put the FBI’s talking points to the test. Not mincing words, committee members write that they have “concerns that the FBI has not been forthcoming about the extent of the ‘Going Dark’ problem.”

In court filings, testimony to Congress, and in public comments by then-FBI Director James Comey and others, the agency claimed that it had no possible way of accessing the San Bernardino shooter’s iPhone. But the letter, signed by 10 representatives from both parties, notes that the OIG report  “undermines statements that the FBI made during the litigation and consistently since then, that only the device manufacturer could provide a solution.” The letter also echoes EFF’s concerns that the FBI saw the litigation as a test case: “Perhaps most disturbingly, statements made by the Chief of the Cryptographic and Electronic Analysis Unit appear to indicate that the FBI was more interested in forcing Apple to comply than getting into the device.”

Now, more than two years after the Apple case, the FBI continues to make similar arguments. Wray recently claimed that the FBI confronted 7,800 phones it could not unlock in 2017 alone. But as the committee letter points out, in light of recent reports about “the availability of unlocking tools developed by third-parties and the OIG report’s findings that the Bureau was uninterested in seeking available third-party options, these statistics appear highly questionable.” For example, a recent Motherboard investigation revealed that law enforcement agencies across the United States have purchased—or have at least shown interest in purchasing—devices developed by a company called Grayshift. The Atlanta-based company sells a device called GrayKey, a roughly 4x4 inch box that has allegedly been used to successfully crack several iPhone models, including the most recent iPhone X.

The letter ends by posing several questions to Wray designed to probe the FBI’s Going Dark talking points—in particular whether it has actually consulted with outside vendors to unlock encrypted phones it says are thwarting its investigations and whether third-party solutions are in any way insufficient for the task.

EFF welcomes this line of questioning from House Judiciary Committee members and we hope members will continue to put pressure on the FBI to back up its rhetoric about encryption with actual facts.

Related Cases: Apple Challenges FBI: All Writs Act Order (CA)

To #DeleteFacebook or Not to #DeleteFacebook? That Is Not the Question

Since the Cambridge Analytica news hit headlines, calls for users to ditch the platform have picked up speed. Whether or not it has a critical impact on the company’s user base or bottom line, the message from #DeleteFacebook is clear: users are fed up.

EFF is not here to tell you whether or not to delete Facebook or any other platform. We are here to hold Facebook accountable no matter who’s using it, and to push it and other tech companies to do better for users.

Users should have better options when they decide where to spend their time and attention online.

The problems that Facebook’s Cambridge Analytica scandal highlight—sweeping data collection, indiscriminate sharing of that data, and manipulative advertising—are also problems with much of the surveillance-based, advertising-powered popular web. And there are no shortcuts to solving those problems.

Users should have better options when they decide where to spend their time and attention online. So rather than asking if people should delete Facebook, we are asking: What privacy protections should users have a right to expect, whether they decide to leave or use a platform like Facebook?

If it makes sense for you to delete Facebook or any other account, then you should have full control over deleting your data from the platform and bringing it with you to another. If you stay on Facebook, then you should be able to expect it to respect your privacy rights.

To Leave

As a social media user, you should have the right to leave a platform that you are not satisfied with. That means you should have the right to delete your information and your entire account. And we mean really delete: not just disabling access, but permanently eliminating your information and account from the service’s servers.

Furthermore, if users decide to leave a platform, they should be able to easily, efficiently, and freely take their uploaded information away and move it to a different one in a usable format. This concept, known as "data portability" or "data liberation," is fundamental to promote competition and ensure that users maintain control over their information even if they sever their relationship with a particular service.

Of course, for this right to be effective, it must be coupled with informed consent and user control, so unscrupulous companies can’t exploit data portability to mislead you and then grab your data for unsavory purposes.

Not To Leave

Deleting Facebook is not a choice that most of its 2 billion users can feasibly make. It’s also not a choice that everyone wants to make, and that’s okay too. Everyone deserves privacy, whether they delete Facebook or stay on it (or never used it in the first place!).

Deleting Facebook is not a choice that most of its users can feasibly make.

For many, the platform is the only way to stay in touch with friends, family, and businesses. It’s sometimes the only way to do business, reach out to customers, and practice a profession that requires access to a large online audience. Facebook also hosts countless communities and interest groups that are simply not available in many users’ cities and areas. Without viable general alternatives, Facebook’s massive user base and associated network effects mean that, for many people, the costs of leaving outweigh the benefits.

In addition to the right to leave described above, any responsible social media company should ensure users’ privacy rights: the right to informed decision-making, the right to control one’s information, the right to notice, and the right of redress.

Facebook and other companies must respect user privacy by default and by design. If you want to use a platform or service that you enjoy and that adds value to your life, you shouldn't have to leave your privacy rights at the door.

Ethiopia Backslides: the Continuing Harassment of Eskinder Nega

On March 25, bloggers, journalists and activists gathered at a private party in Addis Ababa—the capital of Ethiopia—to celebrate the new freedom of their colleagues. Imprisoned Ethiopian writers and reporters had been released in February under a broad amnesty: some attended the private event, including Eskinder Nega, a blogger and publisher whose detention EFF has been tracking in our Offline series.

But the celebration was cut short when the authorities raided the event. Eskinder was seized and detained without charge, together with Zone 9 bloggers Mahlet Fantahun and Fekadu Mehatemework, online writers Zelalem Workagegnhu and Befiqadu Hailu, and six others.

All eleven have now finally been released, after 12 days in custody. Their detention remains a disturbing example of just how far Ethiopian police are willing to go to intimidate critical voices, even in a time of supposed tolerance.

During their detention, the prisoners could be seen through narrow windows in Addis Ababa's Gotera police station, held in tiny stalls crowded with other detainees, in conditions Eskinder described as "inhuman".

...Better to call it jam-packed than imprisoned. About 200 of us are packed in a 5 by 8 meter room divided in three sections. Unable to sit or lay down comfortably, and with limited access to a toilet. Not a single human being deserves this regardless of the crime, let alone us who were captured unjustly. The global community should be aware of such case and use every possible means to bring an end to our suffering immediately.

After a brief Spring of prisoner releases and officially-sanctioned tolerance of anti-government protests and criticism, Ethiopia's autocratic regime appears to be returning to its old ways. A new state of emergency was declared shortly after the resignation of the country's Prime Minister in mid-February. While the government tells the world that it is continuing its policy of re-engagement with its critics, the state of emergency grants unchecked powers to quash dissent, including a wide prohibition on public meetings.

Reporters say that the bloggers were questioned at the party about the display of a traditional Ethiopian flag. The Addis Standard quoted an unnamed politician who attended the event as saying, “This has nothing to do with the flag, but everything to do with the idea of these individuals... coming together.”

The authorities cannot continue their hair-trigger surveillance and harassment of those documenting Ethiopia’s chaotic present online. The country’s stability depends on reasonable treatment of its online voices.

Data Privacy Policy Must Empower Users and Innovation

As the details continue to emerge regarding Facebook's failure to protect its users' data from third-party misuse, a growing chorus is calling for new regulations. Mark Zuckerberg will appear in Washington to answer to Congress next week, and we expect lawmakers and others will be asking not only what happened, but what needs to be done to make sure it doesn't happen again.

As recent revelations from Grindr and Under Armour remind us, Facebook is hardly alone in its failure to protect user privacy, and we're glad to see the issue high on the national agenda. At the same time, it’s crucial that we ensure that privacy protections for social media users reinforce, rather than undermine, equally important values like free speech and innovation. We must also be careful not to unintentionally enshrine the current tech powerhouses by making it harder for others to enter those markets. Moreover, we shouldn’t lose sight of the tools we already have for protecting user privacy.

With all of this in mind, here are some guideposts for U.S. users and policymakers looking to figure out what should happen next.

Users Have Rights

Any responsible social media company must ensure users’ privacy rights on its platform and make those rights enforceable. These five principles are a place to start:

Right to Informed Decision-Making

Users have the right to a clear user interface that allows them to make informed choices about who sees their data and how it is used. Any company that gathers data on a user should be prepared to disclose what they’ve collected and with whom they have shared it. Users should never be surprised by a platform’s practices, because the user interface showed them exactly how it would work.

A free and open Internet must be built on respect for the rights of all users. 

Right to Control

Social media platforms must ensure that users retain control over the use and disclosure of their own data, particularly data that can be used to target or identify them. When a service wants to make a secondary use of the data, it must obtain explicit permission from the user. Platforms should also ask their users' permission before making any change that could share new data about users, share users' data with new categories of people, or use that data in a new way.

Above all, data usage should be "opt-in" by default, not "opt-out," meaning that users' data is not collected or shared unless a user has explicitly authorized it. If a social network needs user data to offer a functionality that its users actually want, then it should not have to resort to deception to get them to provide it.

Right to Leave

One of the most basic ways that users can protect their privacy is by leaving a social media platform that fails to protect it. Therefore, a user should have the right to delete data or her entire account. And we mean really delete: not just disabling access but permanently eliminating it from the service's servers.

Furthermore, if users decide to leave a platform, they should be able to easily, efficiently, and freely take their uploaded information away and move it to a different one in a usable format. This concept, known as "data portability" or "data liberation," is fundamental to promote competition and ensure that users maintain control over their information even if they sever their relationship with a particular service. Of course, for this right to be effective, it must be coupled with informed consent and user control, so unscrupulous companies can’t exploit data portability to mislead you and then grab all of your data for unsavory purposes.

Right to Notice

If users’ data has been mishandled or a company has suffered a data breach, users should be notified as soon as possible. While brief delays are sometimes necessary in order to help remedy the harm before it is made public, any such delay should be no longer than strictly necessary.

Right of Redress

Rights are not meaningful if there’s no way to enforce them. Avenues for meaningful legal redress start with (1) clear, public, and unambiguous commitments that, if breached, would subject social media platforms to unfair advertising, competition, or other legal claims with real remedies; and (2) elimination of tricky terms-of-service provisions that make it impossible for a user to ever hold the service accountable in court.

Many companies will say they support some version of all of these rights, but we have little reason to trust them to live up to their promises. So how do we give these rights teeth?

Start with the Tools We Have

We already have some ways to enforce these user rights, which can point the way for us—including the courts and current regulators at the state and federal level—to go further. False advertising laws, consumer protection regulations, and (to a lesser extent) unfair competition rules have all been deployed by private citizens, and ongoing efforts in that area may find a more welcome response from the courts now that the scope of these problems is more widely understood.

The Federal Trade Commission (FTC) has challenged companies with sloppy or fraudulent data practices. The FTC is currently investigating whether Facebook violated a 2011 consent decree on handling of user data. If it has, Facebook is looking at a fine that could genuinely hurt its bottom line. We should all expect the FTC to fulfill its duty to stand in for the rest of us.

But there is even more we could do.

Focus on Empowering Users and Toolmakers

First, policymakers should consider making it easier for users to have their day in court. As we explained in connection with the Equifax breach, too often courts dismiss data breach lawsuits based on a cramped view of what constitutes "harm." These courts mistakenly require actual or imminent loss of money due to the misuse of information that is directly traceable to a single security breach. If the fear caused by an assault can be actionable (which it can), so should the fear caused by the loss of enough personal data for a criminal to take out a mortgage in your name.

There are also worthwhile ideas about future and contingent harms in other consumer protection areas as well as medical malpractice and pollution cases, just to name a few. If the political will is there, both federal and state legislatures can step up and create greater incentives for security and steeper downsides for companies that fail to take the necessary steps to protect our data. These incentives should include a prohibition on waivers in the fine print of terms of service, so that companies can’t trick or force users into giving up their legal rights in advance.

Second, let’s empower the toolmakers. If we want companies to reconsider their surveillance-based business model, we should put mechanisms in place to discourage that model. When a programmer at Facebook makes a tool that allows the company to harvest the personal information of everyone who visits a page with a "Like" button on it, another programmer should be able to write a browser plugin that blocks this button on the pages you visit. But too many platforms impose technical and legal barriers to writing such a program, effectively inhibiting third parties’ ability to give users more control over how they interact with those services. EFF has long raised concerns about the barriers created by overbroad readings of the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and contractual prohibitions on interoperability. Removing those barriers would be good for both privacy and innovation.

Third, the platforms themselves should come up with clear and meaningful standards for portability—that is, the user’s ability to meaningfully leave a platform and take her data with her. This is an area where investors and the broader funding and startup community should have a voice, since so many future innovators depend on an Internet with strong interconnectivity where the right to innovate doesn’t require getting permission from the current tech giants. It’s also an area where standards may be difficult to legislate. Even well-meaning legislators are unlikely to have the technical competence or foresight to craft rules that will be flexible enough to adapt over time but tough enough to provide users with real protection. But fostering competition in this space could be one of the most powerful incentives for the current set of companies to do the right thing, spurring a race to the top for social networks.

Finally, transparency, transparency, transparency. Facebook, Google, and others should allow truly independent researchers access to work with, black-box test, and audit their systems. Users should not have to take companies’ word on how data is being collected, stored, and used.

Watch Out for Unintended Effects on Speech and Innovation

As new proposals bubble up, we all need to watch for ways they could backfire.

First, heavy-handed requirements, particularly requirements tied to specific kinds of technology (i.e., tech mandates), could stifle competition and innovation. Used without care, they could actually give even more power to today’s tech giants by ensuring that no new competitor could ever get started.

Second, we need to make sure that transparency and control provisions don’t undermine online speech. For example, any disclosure rules must take care to protect user anonymity. And the right to control your data should not turn into an unfettered right to control what others say about you—as so-called "right to be forgotten" approaches can often become. If true facts, especially facts that could have public importance, have been published by a third party, requiring their removal may mean impinging on others’ rights to free speech and access to information. A free and open Internet must be built on respect for the rights of all users. 

Asking the Right Questions

Above all, the guiding question should not be, "What legislation do we need to make sure there is never another Cambridge Analytica?" Rather, we should be asking, "What privacy protections are missing, and how can we fill that gap while respecting other important values?" Once we ask the right question, we can look for answers in existing laws, pressure from users and investors, and focused legislative steps where necessary. We need to be both creative and judicious—and take care that today’s solutions don’t become tomorrow’s unexpected problems.

HTTPS Everywhere Introduces New Feature: Continual Ruleset Updates

Today we're proud to announce the launch of a new version of HTTPS Everywhere, 2018.4.3, which brings with it exciting new features. With this newest update, you'll receive our list of HTTPS-supporting sites more regularly, bundled as a package that is delivered to the extension on a continual basis. This means that your HTTPS-Everywhere-protected browser will have more up-to-date coverage for sites that offer HTTPS, and you'll encounter fewer sites that break due to bugs in our list of supported sites. It also means that in the future, third parties can create their own list of URL redirects for use in the extension. This could be useful, for instance, in the Tor Browser to improve the user experience for .onion URLs. This new version is the same old extension you know and love, now with a cleaner behind-the-scenes process to ensure that it's protecting you better than ever before.

How does it work?

You may be familiar with our popular browser extension, available for Firefox, Chrome, Opera, and the Tor Browser. The idea is simple: whenever a user visits a site that we know offers HTTPS, we ensure that their browser connects to that site with the security of HTTPS rather than insecure HTTP. This means that users will have the best security available, avoiding subtle attacks that can downgrade their connections and compromise their data. But knowing is half the battle. Keeping the list of sites that offer HTTPS updated is an enormous effort, comprising a collaboration between hundreds of contributors to the extension and a handful of active maintainers to craft what are known as HTTPS Everywhere's "rulesets." At the time of writing, there are over 23,000 ruleset files, each containing at least one domain name (or FQDN, like sub.example.com).

We've modified the extension to periodically check in with EFF to see if a new list is available.

Why go through all this trouble to maintain a list of sites supporting HTTPS, instead of just defaulting to HTTPS? Because a lot of sites still only offer HTTP. Without knowing that a site supports HTTPS, we'd have to try HTTPS first, and then downgrade your connection if it's not available. And for a network attacker, it's easy to trick the browser into thinking that a site does not offer HTTPS. That's why downgrading connections can be dangerous: you can fall right into an attacker's trap. HTTPS Everywhere forces your browser to use the secure endpoint if it's on our list, thus ensuring that you'll have the highest level of security available for these sites.
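
To illustrate the mechanics (this is not the extension's actual code), here is a simplified Python sketch of what a single ruleset does: if a request's host matches one of the ruleset's targets, a rewrite rule upgrades the URL before the request is ever sent, so no insecure connection is attempted and there is nothing for an attacker to downgrade. The ruleset shown is hypothetical; real rulesets are XML files listing target hosts and from/to rewrite rules.

    import re
    from urllib.parse import urlparse

    # A hypothetical, simplified ruleset (real HTTPS Everywhere rulesets are XML files).
    RULESET = {
        "targets": {"example.com", "www.example.com"},
        "rules": [(re.compile(r"^http:"), "https:")],
    }

    def upgrade(url):
        """Rewrite an HTTP URL to HTTPS if its host is covered by the ruleset."""
        host = urlparse(url).hostname
        if host not in RULESET["targets"]:
            return url  # not on the list: leave the request untouched
        for pattern, replacement in RULESET["rules"]:
            url = pattern.sub(replacement, url)
        return url

    print(upgrade("http://www.example.com/login"))   # -> https://www.example.com/login
    print(upgrade("http://unlisted.example.org/"))   # unchanged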

Ordinarily, we'll deliver this ruleset list bundled with the extension when you install or update it. But it's a lot of work to release a new version just to deliver a new list of rulesets to you! So we've modified the extension to periodically check in with EFF to see if a new list is available. That way you'll get the newest ruleset list in a timely manner, without having to wait for a new version to be released. In order to verify that these are the authentic EFF rulesets, we've signed them so that your browser can check that they're legitimate, using the Web Crypto API. We've also made it easy for developers and third parties to publish their own rulesets, signed with their own key, and build that into a custom-made edition of HTTPS Everywhere. We've called these "update channels," and the extension is capable of digesting multiple update channels at the same time.
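
The signature check works roughly like the Python sketch below. It is only a sketch of the general idea: the extension itself performs the verification in the browser via the Web Crypto API, and the exact signature scheme, key handling, and bundle format here are illustrative assumptions rather than EFF's implementation. The point is simply that a downloaded ruleset list is applied only if it verifies against the update channel's public key.

    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def load_signed_rulesets(bundle: bytes, signature: bytes, pem_public_key: bytes):
        """Return the parsed ruleset list only if the signature checks out."""
        public_key = serialization.load_pem_public_key(pem_public_key)
        try:
            # Verify that the bundle was signed by the update channel's key.
            public_key.verify(signature, bundle, padding.PKCS1v15(), hashes.SHA256())
        except InvalidSignature:
            return None  # reject tampered or unofficial rulesets
        return json.loads(bundle)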

This is just the start

In the future, we plan to build on this feature, making it easy for users to modify the set of update channels they digest in their own HTTPS Everywhere instance. This will entail building out a nicer user experience to modify, delete, and edit update channels.

The fact is that only a small subset of the ruleset files changes at any given time. So we'll also be researching how to safely deliver to your browser only the changes between one edition of the rulesets and the next. This will save you a lot of bandwidth, which is especially important in contexts where your ISP provides a slow or throttled connection.

Today, as always, we aim to better your browsing experience by protecting your data with this latest release. We're excited to bring you these new features, just as we've been glad to keep your browsing safe ever since we launched HTTPS Everywhere in 2010.

We'd like to thank Fastly for providing the bandwidth necessary to deliver our ruleset updates.

If you haven't already, please install and contribute to HTTPS Everywhere, and consider donating to EFF to support our work!

The FBI Could Have Gotten Into the San Bernardino Shooter’s iPhone, But Leadership Didn’t Say That

The Department of Justice’s Office of the Inspector General (OIG) last week released a new report that supports what EFF has long suspected: that the FBI’s legal fight with Apple in 2016 to create backdoor access to a San Bernardino shooter’s iPhone was more focused on creating legal precedent than it was on accessing the one specific device.

The report, called a “special inquiry,” details the FBI’s failure to be completely forthright with Congress, the courts, and the American public. While the OIG report concludes that neither former FBI Director James Comey nor the FBI officials who submitted sworn statements in court had “testified inaccurately or made false statements” during the roughly month-long saga, it illustrates just how close they came to lying under oath.

From the outset, we suspected that the FBI’s primary goal in its effort to access an iPhone found in the wake of the December 2015 mass shootings in San Bernardino wasn’t simply to unlock the device at issue. Rather, we believed that the FBI’s intention with the litigation was to obtain a legal precedent that it could use to compel Apple to sabotage its own security mechanisms. Among other disturbing revelations, the new OIG report confirms our suspicion: senior leaders within the FBI were “definitely not happy” when the agency realized that another solution to access the contents of the phone had been found through an outside vendor and the legal proceeding against Apple couldn’t continue.

By way of digging into the OIG report, let’s take a look at the timeline of events:

  • December 2, 2015: a shooting in San Bernardino results in the deaths of 14 people, including the two shooters. The shooters destroy their personal phones but leave a third phone—owned by their employer—untouched.
  • February 9, 2016: Comey testifies that the FBI cannot access the contents of the shooters’ remaining phone.
  • February 16, 2016: the FBI applies for (and Magistrate Judge Pym grants the same day) an order compelling Apple to develop a new method to unlock the phone.

    As part of that application, the FBI Supervisory Special Agent in charge of the investigation of the phone swears under oath that the FBI had “explored other means of obtaining [access] . . . and we have been unable to identify any other methods feasible for gaining access” other than compelling Apple to create a custom, cryptographically signed version of iOS to bypass a key security feature and allow the FBI to access the device.

    At the same time, according to the OIG report, the chief of the FBI’s Remote Operations Unit (the FBI’s elite hacking team, called ROU) knows “that one of the vendors that he worked closely with was almost 90 percent of the way toward a solution that the vendor had been working on for many months.”

Let’s briefly step out of the timeline to note the discrepancies between what the FBI was saying in early 2016 and what it actually knew. How is it that senior FBI officials testified that the agency had no capability to access the contents of the locked device when the agency’s own premier hacking team knew a capability was within reach? Because, according to the OIG report, FBI leadership didn’t ask the ROU for its help until after testifying that the FBI’s techs knew of no way in.

The OIG report concluded that Director Comey didn’t know that his testimony was false at the time he gave it. But it was false, and technical staff in FBI’s own ROU knew it was false.

Now, back to the timeline:

  • March 1, 2016: Director Comey again testifies that the FBI has been unable to access the contents of the phone without Apple’s help. Before the government applied for the All Writs Act order on February 11, Comey notes there were “a whole lot of conversations going on in that interim with companies, with other parts of the government, with other resources to figure out if there was a way to do it short of having to go to court.”

    In response to a question from Rep. Darrell Issa about whether Comey was “testifying today that you and/or contractors that you employ could not achieve this without demanding an unwilling partner do it,” Comey replies, “Correct.”

    The OIG report concluded that Director Comey didn’t know that his testimony was false at the time he gave it. But it was false, and technical staff in FBI’s own ROU knew it was false.

  • March 16, 2016: An outside vendor for the FBI completes its work on an exploit for the model in question, building on the work that, as of February 16, the ROU knew to be 90% complete.

    The head of the FBI’s Cryptographic and Electronic Analysis Unit (CEAU)—the unit whose initial inability to access the phone led to the FBI’s sworn statements that the Bureau knew of no method to do so—is pissed that others within the FBI are even trying to get into the phone without Apple’s help. In the words of the OIG report, “he expressed disappointment that the ROU Chief had engaged an outside vendor to assist with the Farook iPhone, asking the ROU Chief, ‘Why did you do that for?’”

    Why is the CEAU Chief angry? Because it means that the legal battle is over and the FBI won’t be able to get the legal precedent against Apple that it was looking for. Again, the OIG report confirms our suspicions: “the CEAU Chief ‘was definitely not happy’ that the legal proceeding against Apple could no longer go forward” after the ROU’s vendor succeeded.

  • March 20, 2016: The FBI’s outside vendor demonstrates the exploit for senior FBI leadership.
  • March 21, 2016: On the eve of the scheduled hearing, the Department of Justice notifies the court in California that, despite previous statements under oath that there were no “other methods feasible for gaining access,” it has now somehow found a way.

    In response to the FBI’s eleventh-hour revelation, the court cancels the hearing and the legal battle between the FBI and Apple is over for now.

The OIG report comes on the heels of a report by the New York Times that the Department of Justice is renewing its decades-long fight for anti-encryption legislation. According to the Times, DOJ officials are “convinced that mechanisms allowing access to [encrypted] data can be engineered without intolerably weakening the devices’ security against hacking.”

That’s a bold claim, given that for years the consensus in the technical community has been exactly the opposite. In the 1990s, experts exposed serious flaws in proposed systems to give law enforcement access to encrypted data without compromising security, including the Clipper Chip. And, as the authors of the 2015 “Keys Under Doormats” paper put it, “today’s more complex, global information infrastructure” presents “far more grave security risks” for these approaches.

The Department’s blind faith in technologists’ ability to build a secure backdoor into encrypted phones is inspired by presentations by several security researchers as part of the recent National Academy of Sciences (NAS) report on encryption. But the NAS wrote that these proposals were not presented in “sufficient detail for a technical evaluation,” so they have yet to be rigorously tested by other security experts, let alone pass muster. Scientific and technical consensus is always open to challenge, but we—and the DOJ—should not abandon the longstanding view, backed by evidence, that deploying widespread special access mechanisms presents insurmountable technical and practical challenges.

The Times article also suggests that even as DOJ officials tout the possibility of secure backdoors, they’re simultaneously lowering the bar, arguing that a solution need not be “foolproof” if it allows the government to catch “ordinary, less-savvy criminals.” The problem with that statement is at least two-fold:

First, according to the FBI, it is the savvy criminals (and terrorists) who present the biggest risk of using encryption to evade detection. By definition, less-savvy criminals will be easier for law enforcement to catch without guaranteed access to encrypted devices. Why is it acceptable to the FBI that the solutions they demand are necessarily incapable of stopping the very harms they claim they most need backdoors in order to stop?

Second, the history in this area demonstrates that “not foolproof” often actually means “completely insecure.” That’s because any system that is designed to allow law enforcement agencies all across the country to expeditiously decrypt devices pursuant to court order will be enormously complex, raising the likelihood of serious flaws in implementation. And, regardless of who holds them, the keys used to decrypt these devices will need to be used frequently, making it even harder to defend them from bad actors. These and other technical challenges mean that the risks of actually deploying an imperfect exceptional access mechanism to millions of phones are unacceptably high. And of course, any system implemented in the US will be demanded by repressive governments around the world.

The DOJ’s myopic focus on backdooring phones at the expense of the devices’ security is especially vexing in light of reports that law enforcement agencies are increasingly able to use commercial unlocking tools to break into essentially any device on the market. And if this is the status quo without mandated backdoor access and as vendors like Apple take steps to harden their devices against hacking, imagine how vulnerable devices could be with a legal mandate. The FBI likes to paint encryption in an apocalyptic light, suggesting that the technology drastically undermines the Bureau’s ability to do its job, but the evidence from the Apple fight and elsewhere is far less stark.

Related Cases: Apple Challenges FBI: All Writs Act Order (CA)
