A Technical Deep Dive: Securing the Automation of ACME DNS Challenge Validation

Earlier this month, Let's Encrypt (the free, automated, open Certificate Authority EFF helped launch two years ago) passed a huge milestone: issuing over 50 million active certificates. And that number is just going to keep growing, because in a few weeks Let's Encrypt will also start issuing “wildcard” certificates—a feature many system administrators have been asking for.

What's A Wildcard Certificate?

In order to validate an HTTPS certificate, a user’s browser checks to make sure that the domain name of the website is actually listed in the certificate. For example, a certificate for www.eff.org has to actually list www.eff.org as a valid domain for that certificate. Certificates can also list multiple domains (e.g., www.eff.org, ssd.eff.org, sec.eff.org, etc.) if the owner just wants to use one certificate for all of her domains. A wildcard certificate is just a certificate that says “I'm valid for all of the subdomains in this domain” instead of explicitly listing them all off. (In the certificate, this is indicated with a wildcard character, an asterisk. So if you examine the certificate for eff.org today, it will say it's valid for *.eff.org.) That way, a system administrator can get a certificate for their entire domain, and use it on new subdomains they hadn't even thought of when they got the certificate.
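To make the wildcard rule concrete, here is a minimal sketch in Python of how a hostname is matched against a certificate's wildcard entry. This is simplified logic, not the fuller rules real browsers follow (see RFC 6125); the key point is that the asterisk stands in for exactly one subdomain label:

    # Simplified wildcard matching: "*.eff.org" covers exactly one extra
    # label, so it matches "www.eff.org" but not "eff.org" or "a.b.eff.org".
    def hostname_matches(hostname: str, cert_name: str) -> bool:
        if cert_name.startswith("*."):
            base = cert_name[2:]
            same_depth = len(hostname.split(".")) == len(base.split(".")) + 1
            return same_depth and hostname.endswith("." + base)
        return hostname == cert_name

    assert hostname_matches("www.eff.org", "*.eff.org")
    assert not hostname_matches("eff.org", "*.eff.org")
    assert not hostname_matches("a.b.eff.org", "*.eff.org")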

In order to issue wildcard certificates, Let's Encrypt is going to require users to prove their control over a domain by using a challenge based on DNS, the domain name system that translates domain names like www.eff.org into IP addresses like 69.50.232.54. From the perspective of a Certificate Authority (CA) like Let's Encrypt, there's no better way to prove that you control a domain than by modifying its DNS records, since control over a domain's DNS records is the very essence of controlling the domain.

But one of the key ideas behind Let's Encrypt is that getting a certificate should be an automatic process. For the DNS challenge to be automatic, though, the software that requests the certificate needs to be able to modify the DNS records for that domain. That, in turn, means the software needs access to the credentials for the DNS service (e.g. the login and password, or a cryptographic token), and those credentials have to be stored wherever the automation takes place. In many cases, this means that if the machine handling the process gets compromised, so will the DNS credentials, and this is where the real danger lies. In the rest of this post, we'll take a deep dive into the components involved in that process, and what the options are for making it more secure.

How Does the DNS Challenge Work?

At a high level, the DNS challenge works like all the other automatic challenges that are part of the ACME protocol—the protocol that a Certificate Authority (CA) like Let's Encrypt and client software like Certbot use to communicate about what certificate a server is requesting, and how the server should prove ownership of the corresponding domain name. In the DNS challenge, the user requests a certificate from a CA by using ACME client software, like Certbot, that supports the DNS challenge type. When the client requests a certificate, the CA asks the client to prove ownership over the domain by adding a specific TXT record to its DNS zone. More specifically, the CA sends a unique random token to the ACME client, and whoever has control over the domain is expected to publish that token in a TXT record at a predefined name: "_acme-challenge" under the actual domain the user is trying to prove ownership of. As an example, if you were trying to validate the domain for *.eff.org, the validation subdomain would be "_acme-challenge.eff.org." When the token value has been added to the DNS zone, the client tells the CA to proceed with validating the challenge, after which the CA sends a DNS query to the authoritative servers for the domain. If the authoritative DNS servers reply with a TXT record that contains the correct challenge token, ownership over the domain is proven and the certificate issuance process can continue.
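As a rough illustration of that last step, here is the kind of TXT lookup the CA performs, sketched in Python with the dnspython library. The domain and token are hypothetical, and a real CA queries the domain's authoritative servers directly rather than going through a local resolver:

    import dns.resolver

    def challenge_is_satisfied(domain: str, expected_token: str) -> bool:
        record_name = f"_acme-challenge.{domain}"
        try:
            answers = dns.resolver.resolve(record_name, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False
        # Each TXT answer is a tuple of byte strings; reassemble and compare.
        found = {b"".join(rdata.strings).decode() for rdata in answers}
        return expected_token in found

    print(challenge_is_satisfied("eff.org", "hypothetical-token-value"))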

DNS Controls Digital Identity

What makes a DNS zone compromise so dangerous is that DNS is what users’ browsers rely on to know what IP address they should contact when trying to reach your domain. This applies to every service that uses a resolvable name under your domain, from email to web services. When DNS is compromised, a malicious attacker can easily intercept all the connections directed toward your email or other protected service, terminate the TLS encryption (since they can now prove ownership over the domain and get their own valid certificates for it), read the plaintext data, and then re-encrypt the data and pass the connection along to your server. For most people, this would be very hard to detect.

Separate and Limited Privileges

Strictly speaking, in order for the ACME client to handle updates in an automated fashion, the client only needs access to credentials that can update the TXT records for "_acme-challenge" subdomains. Unfortunately, most DNS software and DNS service providers do not offer granular access controls that would allow limiting these privileges, or simply do not provide an API for automating anything beyond basic DNS zone updates or transfers. This leaves the possible automation methods either unusable or insecure.

A simple trick can help maneuver past these kinds of limitations: using CNAME records. CNAME records essentially act as links to another DNS record. Let's Encrypt follows the chain of CNAME records and resolves the challenge validation token from the last record in the chain.
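For example (all names here are hypothetical), suppose the zone owner has created a single CNAME delegating the validation name into another zone. A quick dnspython sketch shows how the chain behaves from a client's point of view:

    import dns.resolver

    name = "_acme-challenge.yourdomain.tld"  # hypothetical names throughout

    # The record the zone owner created once, in zone-file notation:
    #   _acme-challenge.yourdomain.tld.  IN CNAME  challenge.validator.example.
    target = dns.resolver.resolve(name, "CNAME")[0].target

    # A TXT query for the original name is carried through the CNAME chain
    # automatically, returning the token published at the target; this is
    # effectively what Let's Encrypt sees during validation.
    token_records = dns.resolver.resolve(name, "TXT")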

Ways to Mitigate the Issue

Even using CNAME records, the underlying issue exists that the ACME client will still need access to credentials that allow it to modify some DNS record. There are different ways to mitigate this underlying issue, with varying levels of complexity and security implications in case of a compromise. In the following sections, this post will introduce some of these methods while trying to explain the possible impact if the credentials get compromised. With one exception, all of them make use of CNAME records.

Only Allow Updates to TXT Records

The first method is to create a set of credentials with privileges that only allow updating of TXT records. In the case of a compromise, this method limits the fallout to the attacker being able to issue certificates for all domains within the DNS zone (since they could use the DNS credentials to get their own certificates), as well as interrupting mail delivery. The impact on mail delivery stems from mail-specific TXT records, namely SPF and DKIM, DKIM's extension ADSP, and DMARC. A compromise of these would also make it easy to deliver phishing emails impersonating a sender from the compromised domain in question.

Use a "Throwaway" Validation Domain

The second method is to manually create CNAME records for the "_acme-challenge" subdomain and point them towards a validation domain that would reside in a zone controlled by a different set of credentials. For example, if you want to get a certificate to cover yourdomain.tld and www.yourdomain.tld, you'd have to create two CNAME records—"_acme-challenge.yourdomain.tld" and "_acme-challenge.www.yourdomain.tld"—and point both of them to an external domain for the validation.

The domain used for the challenge validation should be in an external DNS zone or in a subdelegate DNS zone that has its own set of management credentials. (A subdelegate DNS zone is defined using NS records and it effectively delegates the complete control over a part of the zone to an external authority.)

The impact of compromise for this method is rather limited. Since the actual stored credentials are for an external DNS zone, an attacker who gets the credentials would only gain the ability to issue certificates for all the domains pointing to records in that zone.

However, figuring out which domains actually do point there is trivial: the attacker would just have to read Certificate Transparency logs and check if domains in those certificates have a magic subdomain pointing to the compromised DNS zone.

Limited DNS Zone Access

If your DNS software or provider allows for creating permissions tied to a subdomain, this could help you to mitigate the whole issue. Unfortunately, at the time of publication the only provider we have found that allows this is Microsoft Azure DNS. Dyn supposedly also has granular privileges, but we were not able to find a lower level of privileges in their service besides “Update records,” which still leaves the zone completely vulnerable.

Route53 and possibly other providers allow their users to create a subdelegate zone with a new set of user credentials, delegate to that zone with NS records, and point the "_acme-challenge" validation subdomains to it using CNAME records. It’s a lot of work to do the privilege separation correctly using this method, as one would need to go through all of these steps for each domain they would like to use DNS challenges for.
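As a sketch of what the automated update step might look like under this arrangement, here is a hedged example using boto3, Amazon's Python SDK. The hosted zone ID and record names are placeholders, and the credentials used should belong to an IAM user or role whose permissions are restricted to this one throwaway zone:

    import boto3

    route53 = boto3.client("route53")

    # Upsert the validation token into the subdelegate (throwaway) zone.
    # "Z_PLACEHOLDER" stands in for the real hosted zone ID of that zone.
    route53.change_resource_record_sets(
        HostedZoneId="Z_PLACEHOLDER",
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "_acme-challenge.validation.yourdomain.tld.",
                    "Type": "TXT",
                    "TTL": 60,
                    # TXT record values must be wrapped in double quotes.
                    "ResourceRecords": [{"Value": '"<token-from-the-CA>"'}],
                },
            }]
        },
    )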

Use ACME-DNS

As a disclaimer, the software discussed below was written by the author; it serves here as an example of the functionality needed to handle the credentials for DNS challenge automation in a secure fashion. The final method is a piece of software called ACME-DNS, written to combat this exact issue, and it is able to mitigate the issue completely. One downside is that it adds one more piece of infrastructure to maintain, and it requires the DNS port (53) to be open to the public internet. ACME-DNS acts as a simple DNS server with a limited HTTP API. The API only allows updating the TXT records of automatically generated random subdomains; there are no methods to recover lost credentials or to update or add other records. It provides two endpoints:

  • /register – This endpoint generates a new subdomain for you to use, accompanied by a username and password. As an optional parameter, the register endpoint takes a list of CIDR ranges to whitelist updates from.
  • /update – This endpoint is used to update the actual challenge token to the server.

In order to use ACME-DNS, you first have to create A/AAAA records for it, and then point NS records towards it to create a delegated zone. After that, you simply create a new set of credentials via the /register endpoint, and point the CNAME record from the "_acme-challenge" validation subdomain of the originating zone towards the newly generated subdomain.
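For the curious, here is a minimal sketch of those two calls using Python's requests library. The endpoint and field names follow the ACME-DNS documentation, but treat this as illustrative and check the project's README for the current API:

    import requests

    ACME_DNS = "https://auth.example.org"  # your own ACME-DNS instance

    # One-time setup: register to get credentials and a random subdomain.
    creds = requests.post(f"{ACME_DNS}/register").json()
    # Point the CNAME for _acme-challenge.yourdomain.tld at this name:
    print(creds["fulldomain"])

    # At issuance time: publish the validation token as a TXT record.
    requests.post(
        f"{ACME_DNS}/update",
        headers={"X-Api-User": creds["username"],
                 "X-Api-Key": creds["password"]},
        json={"subdomain": creds["subdomain"],
              "txt": "<validation-token-from-the-CA>"},
    )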

The only credentials saved locally would be the ones for ACME-DNS, and they are only good for updating the exact TXT records for the validation subdomains for the domains on the box. This effectively limits the impact of a possible compromise to the attacker being able to issue certificates for these domains. For more information about ACME-DNS, visit https://github.com/joohoi/acme-dns/.

Conclusion

To alleviate the issues with ACME DNS challenge validation, proposals like assisted-DNS have been discussed in IETF’s ACME working group, but they currently remain unresolved. Since the only way to limit exposure from a compromise is to restrict the DNS zone credentials to changing only specific TXT records, the current possibilities for securely implementing automation for DNS validation are slim. The only sustainable option would be to get DNS software and service providers to either implement methods to create more fine-grained zone credentials or provide a completely new type of credential for this exact use case.

The False Teeth of Chrome's Ad Filter

Today Google launched a new version of its Chrome browser with what they call an "ad filter"—which means that it sometimes blocks ads but is not an "ad blocker." EFF welcomes the elimination of the worst ad formats. But Google's approach here is a band-aid response to the crisis of trust in advertising that leaves massive user privacy issues unaddressed. 

Last year, a new industry organization, the Coalition for Better Ads, published user research investigating ad formats responsible for "bad ad experiences." The Coalition examined 55 ad formats, of which 12 were deemed unacceptable. These included various full page takeovers (prestitial, postitial, rollover), autoplay videos with sound, pop-ups of all types, and ad density of more than 35% on mobile. Google is supposed to check sites for the forbidden formats and give offenders 30 days to reform or have all their ads blocked in Chrome. Censured sites can purge the offending ads and request reexamination. 

The Coalition for Better Ads Lacks a Consumer Voice

The Coalition includes giants such as Google, Facebook, and Microsoft, along with ad trade organizations, adtech companies, and large advertisers. Criteo, a retargeter with a history of contested user privacy practices, is also involved, as is content marketer Taboola. Consumer and digital rights groups are not represented in the Coalition.

This industry membership explains the limited horizon of the group, which ignores the non-format factors that annoy and drive users to install content blockers. While people are alienated by aggressive ad formats, the problem has other dimensions. Whether it’s the use of ads as a vector for malware, the consumption of mobile data plans by bloated ads, or the monitoring of user behavior through tracking technologies, users have a lot of reasons to take action and defend themselves.

But these elements are ignored. Privacy, in particular, figured neither in the tests commissioned by the Coalition, nor in their three published reports that form the basis for the new standards. This is no surprise given that participating companies include the four biggest tracking companies: Google, Facebook, Twitter, and AppNexus. 

Stopping the "Biggest Boycott in History"

Some commentators have interpreted ad blocking as the "biggest boycott in history" against the abusive and intrusive nature of online advertising. Now the Coalition aims to slow the adoption of blockers by enacting minimal reforms. Pagefair, an adtech company that monitors adblocker use, estimates 600 million active users of blockers. Some see no ads at all, but most users of the two largest blockers, AdBlock and Adblock Plus, see ads "whitelisted" under the Acceptable Ads program. These companies leverage their position as gatekeepers to the user's eyeballs, obliging Google to buy back access to the "blocked" part of their user base through payments under Acceptable Ads. This is expensive (a German newspaper claims a figure as high as 25 million euros) and is viewed with disapproval by many advertisers and publishers. 

Industry actors now understand that adblocking’s momentum is rooted in the industry’s own failures, and the Coalition is a belated response to this. While nominally an exercise in self-regulation, the enforcement of the standards through Chrome is a powerful stick. By eliminating the most obnoxious ads, they hope to slow the growth of independent blockers.

What Difference Will It Make?

Coverage of Chrome's new feature has focused on the impact on publishers, and on doubts about the Internet’s biggest advertising company enforcing ad standards through its dominant browser. Google has sought to mollify publishers by stating that only 1% of sites tested have been found non-compliant, and has heralded the changed behavior of major publishers like the LA Times and Forbes as evidence of success. But if so few sites fall below the Coalition's bar, the filter seems unlikely to dissuade users from installing a blocker. Eyeo, the company behind Adblock Plus, has a lot to lose should this strategy be successful. Eyeo argues that Chrome will only "filter" 17% of the 55 ad formats tested, whereas 94% are blocked by Adblock Plus.

User Protection or Monopoly Power?

The marginalization of egregious ad formats is positive, but should we be worried by this display of power by Google? In the past, browser companies such as Opera and Mozilla took the lead in combating nuisances such as pop-ups, which was widely applauded. Those browsers were not active in advertising themselves. The situation is different with Google, the dominant player in the ad and browser markets.

Google exploiting its browser dominance to shape the conditions of the advertising market raises some concerns. It is notable that the ads Google places on videos in YouTube ("instream pre-roll") were not user-tested and are exempted from the prohibition on "auto-play ads with sound." This risk of a conflict of interest distinguishes the Coalition for Better Ads from, for example, Chrome's monitoring of sites associated with malware and related user protection notifications.

There is also the risk that Google may change position with regard to third-party extensions that give users more powerful options. Recent history justifies such concern: Disconnect and Ad Nauseam have been excluded from the Chrome Store for alleged violations of the Store’s rules. (Ironically, Adblock Plus has never experienced this problem.)

Chrome Falls Behind on User Privacy 

This move from Google will reduce the frequency with which users run into the most annoying ads. Regardless, it fails to address the larger problem of tracking and privacy violations. Indeed, many of the Coalition’s members were active opponents of Do Not Track at the W3C, which would have offered privacy-conscious users an easy opt-out. The resulting impression is that the ad filter is really about the industry trying to solve its adblocking problem, not about addressing users' concerns.

Chrome and Microsoft Edge are now the last major browsers that do not offer integrated tracking protection. Firefox introduced this feature last November in Quantum, enabled by default in "Private Browsing" mode with the option to enable it universally. Meanwhile, Apple's Safari browser has Intelligent Tracking Prevention, Opera ships with an ad/tracker blocker for users to activate, and Brave has user privacy at the center of its design. It is a shame that Chrome's user security and safety team, widely admired in the industry, is empowered only to offer protection against outside attackers, but not against commercial surveillance conducted by Google itself and other advertisers. If you are using Chrome (1), you need EFF's Privacy Badger or uBlock Origin to fill this gap.

(1) This article does not address other problematic aspects of Google services. When users sign into Gmail, for example, their activity across other Google products is logged. Worse yet, when users are signed into Chrome their full browser history is stored by Google and may be used for ad targeting. This account data can also be linked to DoubleClick's cookies. The storage of browser history is part of Sync (enabling users access to their data across devices), which can also be disabled. If users desire to use Sync but exclude the data from use for ad targeting by Google, this can be selected under ‘Web And App Activity’ in Activity controls. There is an additional opt-out from Ad Personalization in Privacy Settings.

Let's Encrypt Hits 50 Million Active Certificates and Counting

In yet another milestone on the path to encrypting the web, Let’s Encrypt has now issued over 50 million active certificates. Depending on your definition of “website,” this suggests that Let’s Encrypt is protecting between about 23 million and 66 million websites with HTTPS (more on that below). Whatever the number, it’s growing every day as more and more webmasters and hosting providers use Let’s Encrypt to provide HTTPS on their websites by default.

[Line graph of Let's Encrypt statistics, showing (roughly) fully-qualified domains active reaching 66 million, certificates active at 50 million, and registered domains active at 23 million]

Source: https://letsencrypt.org/stats/ as of February 14, 2018

Let’s Encrypt is a certificate authority, or CA. CAs like Let’s Encrypt are crucial to secure, HTTPS-encrypted browsing. They issue and maintain digital certificates that help web users and their browsers know they’re actually talking to the site they intended to.

One of the things that sets Let’s Encrypt apart is that it issues these certificates for free. And, with the help of EFF’s Certbot client and a range of other automation tools, it’s easy for webmasters of varying skill and resource levels to get a certificate and implement HTTPS. In fact, HTTPS encryption has become an automatic part of many hosting providers’ offerings.

50 million active certificates represents the number of certificates that are currently valid and have not expired. (Sometimes we also talk about “total issuance,” which refers to the total number of certificates ever issued by Let’s Encrypt. That number is around 217 million now.) Relating these numbers to names of “websites” is a bit complicated. Some certificates, such as those issued by certain hosting providers, cover many different sites. Yet some certificates are also redundant with others, so there may be a handful of active certificates all covering precisely the same names.

One way to count is by “fully qualified domains active”—in other words, different names covered by non-expired certificates. This is now at 66 million. This metric can overcount sites; while most people would say that eff.org and www.eff.org are the same website, they count as two different names here.

Another way to count the number of websites that Let’s Encrypt protects is by looking at “registered domains active,” of which Let’s Encrypt currently has about 26 million. This refers to the number of different top-level domain names among non-expired certificates. In this case, supporters.eff.org and www.eff.org would be counted as one name. In cases where pages under the same top-level domain are run by different people with different content, this metric may undercount different sites.

No matter how you slice it, Let’s Encrypt is one of the largest CAs. And it has grown largely by giving websites their first-ever certificate rather than by grabbing websites from other CAs. That means that, as Let’s Encrypt grows, the number of HTTPS-protected websites on the web tends to grow too. Every website protected is one step closer to encrypting the entire web, and milestones like this remind us that we are on our way to achieving that goal together.

The Revolution and Slack

UPDATE (2/16/18): We have corrected this post to more accurately reflect the limits of Slack's encryption of user data at rest. We have also clarified that granular retention settings are only available on paid Slack workspaces.

The revolution will not be televised, but it may be hosted on Slack. Community groups, activists, and workers in the United States are increasingly gravitating toward the popular collaboration tool to communicate and coordinate efforts. But many of the people using Slack for political organizing and activism are not fully aware of the ways Slack falls short in serving their security needs. Slack has yet to support this community in its default settings or in its ongoing design.  

We urge Slack to recognize the community organizers and activists using its platform and take more steps to protect them. In the meantime, this post provides context and things to consider when choosing a platform for political organizing, as well as some tips about how to set Slack up to best protect your community.

The Mismatch

Slack is designed as an enterprise system built for business settings. That results in a sometimes dangerous mismatch between the needs of the audience the company is aimed at serving and the needs of the important, often targeted community groups and activists who are also using it.

Two things that EFF tends to recommend for digital organizing are 1) using encryption as extensively as possible, and 2) self-hosting, so that a governmental authority has to get a warrant for your premises in order to access your information. The central thing to understand about Slack (and many other online services) is that it fulfills neither of these things. This means that if you use Slack as a central organizing tool, Slack stores and is able to read all of your communications, as well as identifying information for everyone in your workspace.

We know that for many, especially small organizations, self-hosting is not a viable option, and using strong encryption consistently is hard. Meanwhile, Slack is easy, convenient, and useful. Organizations have to balance their own risks and benefits. Regardless of your situation, it is important to understand the risks of organizing on Slack.

First, The Good News

Slack follows several best practices in standing up for users. Slack does require a warrant for content stored on its servers. Further, it promises not to voluntarily provide information to governments for surveillance purposes. Slack also promises to require the FBI to go to court to enforce gag orders issued with National Security Letters, a troubling form of subpoena. Additionally, federal law prohibits Slack from handing over content (but not metadata like membership lists) in response to civil subpoenas.

Slack also stores your data in encrypted form when it’s at rest. This method will protect against someone walking into one of the data centers Slack uses and stealing a hard drive. But Slack does not claim to encrypt that data while it is stored in memory, so it is not protected against attacks or data breaches. This is also not useful if you are worried about governments or other entities putting pressure on Slack to hand over your information.

Risks With Slack In Particular

And now the downsides. These are things that Slack could change, and EFF has called on them to do so.

Slack can turn over content to law enforcement in response to a warrant. Slack’s servers store everything you do on its platform. Since Slack can read this information on its servers—that is, since it’s not end-to-end encrypted—Slack can be forced to hand it over in response to law enforcement requests. Slack does require warrants to turn over content, and can resist warrants it considers improper or overbroad. But if Slack complies with a warrant, users’ communications are readable on Slack’s servers and available for it to turn over to law enforcement.

Slack may fail to notify users of government information requests. When the government comes knocking on a website’s door for user data, that website should, at a minimum, provide users with timely, detailed notice of the request. Slack’s policy in this regard is lacking. Although it states that it will provide advance notice to users of government demands, it allows for a broad set of exceptions to that standard. This is something that Slack could and should fix, but it refuses to even explain why it has included these loopholes.

Slack content can make its way into your email inbox. Signing up for a Slack workspace also signs you up, by default, for email notifications when you are directly mentioned or receive a direct message. These email notifications can include the content of those mentions and messages. If you expect sensitive messages to stay in the Slack workspace where they were written and shared, this might be an unpleasant surprise. With these defaults in place, you have to trust not only Slack but also your email provider with your own and others’ private content.

Risks With Third-Party Platforms in General

Many of the risks that come with using Slack are also risks that come with using just about any third-party online platform. Most of these are problems with the law that we all must work on to fix together. Nevertheless, organizers must consider these risks when deciding whether Slack or any other online third-party platform is right for them.

Much of your sensitive information is not subject to a warrant requirement. While a warrant is required for content, some of the most sensitive information held by third-party platforms—including the identities and locations of the people in a Slack workspace—is considered “non-content” and not currently protected by the warrant requirement federally and in most states. If the identities of your organization’s members are sensitive, consider whether Slack or any other online third party is right for you.

Companies can be legally prevented from giving users notice. While Slack and many other platforms have promised to require the FBI to justify controversial National Security Letter gags, these gags may still be enforced in many cases. In addition, many warrants and other forms of legal process contain different kinds of gags ordered by a court, leaving companies with no ability to notify you that the government has seized your data.

Slack workspaces are subject to civil discovery. Government is not the only entity that could seek information from Slack or other third parties. Private companies and other litigants have sought, and obtained, information from hosts ranging from Google to Microsoft to Facebook and Twitter. While federal law prevents them from handing over customer content in civil discovery, it does not protect “non-content” records, such as membership identities and locations.

A group is only as trustworthy as its members. Any group environment is only as trustworthy as the people who participate in it. Group members can share and even screenshot content, so it is important to establish guidelines and expectations that all members agree on. Establishing trusted admins or moderators to facilitate these agreements can also be beneficial.

Making Slack as Secure as Possible

If using Slack is still right for you, you can take steps to harden your security settings and make your closed workspaces as private as possible.

By default, Slack retains all the messages in a workspace or channel (including direct messages) for as long as the workspace exists. The same goes for any files submitted to the workspace. If you are using a paid workspace, the lowest-hanging privacy fruit is to change a workspace’s retention settings. Workspace admins have the ability to set shorter retention periods, which can mean less content available for government requests or legal inquiries. Unfortunately, this kind of granular retention control is currently only available for paid workspaces.

Users can also address the email-leaking concern described above by minimizing email notification settings. This works best if all of the members of a group agree to do it, since email notifications can expose multiple users’ messages. 

The privacy of a Slack workspace also relies on the security of individual members’ accounts. Setting up two-factor authentication can add an extra layer of security to an account, and admins even have the option of making two-factor authentication mandatory for all the members of a workspace.

However, no settings tweak can completely mitigate the concerns described above. We strongly urge Slack to step up to protect the high-risk groups that are using it along with its enterprise customers.  And all of us must stand together to push changes to the law.

Technology should stand with those who wish to make change in our world. Slack has made a great tool that can help, and it’s time for Slack to step up with its policies.

The CLOUD Act: A Dangerous Expansion of Police Snooping on Cross-Border Data

This week, Senators Hatch, Graham, Coons, and Whitehouse introduced a bill that diminishes the data privacy of people around the world.

The Clarifying Lawful Overseas Use of Data (CLOUD) Act expands American and foreign law enforcement’s ability to target and access people’s data across international borders in two ways. First, the bill creates an explicit provision for U.S. law enforcement (from a local police department to federal agents in Immigration and Customs Enforcement) to access “the contents of a wire or electronic communication and any record or other information” about a person regardless of where they live or where that information is located on the globe. In other words, U.S. police could compel a service provider—like Google, Facebook, or Snapchat—to hand over a user’s content and metadata, even if it is stored in a foreign country, without following that foreign country’s privacy laws.[1]

Second, the bill would allow the President to enter into “executive agreements” with foreign governments that would allow each government to acquire users’ data stored in the other country, without following each other’s privacy laws.

For example, because U.S.-based companies host and carry much of the world’s Internet traffic, a foreign country that enters one of these executive agreements with the U.S. could potentially wiretap people located anywhere on the globe (so long as the target of the wiretap is not a U.S. person or located in the United States) without the procedural safeguards of U.S. law typically given to data stored in the United States, such as a warrant, or even notice to the U.S. government. This is an enormous erosion of current data privacy laws.

This bill would also moot legal proceedings now before the U.S. Supreme Court. In the spring, the Court will decide whether or not current U.S. data privacy laws allow U.S. law enforcement to serve warrants for information stored outside the United States. The case, United States v. Microsoft (often called “Microsoft Ireland”), also calls into question principles of international law, such as respect for other countries’ territorial boundaries and their rule of law.

Notably, this bill would expand law enforcement access to private email and other online content, yet the Email Privacy Act, which would create a warrant-for-content requirement, has still not passed the Senate, even though it has enjoyed unanimous support in the House for the past two years.

The CLOUD Act and the US-UK Agreement

The CLOUD Act’s proposed language is not new. In 2016, the Department of Justice first proposed legislation that would enable the executive branch to enter into bilateral agreements with foreign governments to allow those foreign governments direct access to U.S. companies and U.S. stored data. Ellen Nakashima at the Washington Post broke the story that these agreements (the first iteration has already been negotiated with the United Kingdom) would enable foreign governments to wiretap any communication in the United States, so long as the target is not a U.S. person. In 2017, the Justice Department re-submitted the bill for Congressional review, but added a few changes: this time including broad language to allow the extraterritorial application of U.S. warrants outside the boundaries of the United States.

In September 2017, EFF, with a coalition of 20 other privacy advocates, sent a letter to Congress opposing the Justice Department’s revamped bill.

The executive agreement language in the CLOUD Act is nearly identical to the language in the DOJ’s 2017 bill. None of EFF’s concerns have been addressed. The legislation still:

  • Includes a weak standard for review that does not rise to the protections of the warrant requirement under the Fourth Amendment.
  • Fails to require foreign law enforcement to seek individualized and prior judicial review.
  • Grants real-time access and interception to foreign law enforcement without requiring the heightened warrant standards that U.S. police have to adhere to under the Wiretap Act.
  • Fails to place adequate limits on the category and severity of crimes for this type of agreement.
  • Fails to require notice on any level – to the person targeted, to the country where the person resides, and to the country where the data is stored. (Under a separate provision regarding U.S. law enforcement extraterritorial orders, the bill allows companies to give notice to the foreign countries where data is stored, but there is no parallel provision for company-to-country notice when foreign police seek data stored in the United States.)

The CLOUD Act also creates an unfair two-tier system. Foreign nations operating under executive agreements are subject to minimization and sharing rules when handling data belonging to U.S. citizens, lawful permanent residents, and corporations. But these privacy rules do not extend to someone born in another country and living in the United States on a temporary visa or without documentation. This denial of privacy rights is unlike other U.S. privacy laws. For instance, the Stored Communications Act protects all members of the “public” from the unlawful disclosure of their personal communications.

An Expansion of U.S. Law Enforcement Capabilities

The CLOUD Act would give unlimited jurisdiction to U.S. law enforcement over any data controlled by a service provider, regardless of where the data is stored and who created it. This applies to content, metadata, and subscriber information – meaning private messages and account details could be up for grabs. The breadth of such unilateral extraterritorial access creates a dangerous precedent for other countries who may want to access information stored outside their own borders, including data stored in the United States.

EFF argued on this basis (among others) against unilateral U.S. law enforcement access to cross-border data, in our Supreme Court amicus brief in the Microsoft Ireland case.

When data crosses international borders, U.S. technology companies can find themselves caught in the middle between the conflicting data laws of different nations: one nation might use its criminal investigation laws to demand data located beyond its borders, yet that same disclosure might violate the data privacy laws of the nation that hosts that data. Thus, U.S. technology companies lobbied for and received provisions in the CLOUD Act allowing them to move to quash or modify U.S. law enforcement orders for extraterritorial data. The tech companies can quash a U.S. order when the order does not target a U.S. person and might conflict with a foreign government’s laws. To do so, the company must object within 14 days, and undergo a complex “comity” analysis – a procedure where a U.S. court must balance the competing interests of the U.S. and foreign governments.

Failure to Support Mutual Assistance

Of course, there is another way to protect technology companies from this dilemma, which would also protect the privacy of technology users around the world: strengthen the existing international system of Mutual Legal Assistance Treaties (MLATs). This system allows police who need data stored abroad to obtain the data through the assistance of the nation that hosts the data. The MLAT system encourages international cooperation.

It also advances data privacy. When foreign police seek data stored in the U.S., the MLAT system requires them to adhere to the Fourth Amendment’s warrant requirements. And when U.S. police seek data stored abroad, it requires them to follow the data privacy rules where the data is stored, which may include important “necessary and proportionate” standards. Technology users are most protected when police, in the pursuit of cross-border data, must satisfy the privacy standards of both countries.

While there are concerns from law enforcement that the MLAT system has become too slow, those concerns should be addressed with improved resources, training, and streamlining.

The CLOUD Act raises dire implications for the international community, especially as the Council of Europe is beginning a process to review the MLAT system that has been supported for the last two decades by the Budapest Convention. Although Senator Hatch has in the past introduced legislation that would support the MLAT system, this new legislation fails to include any provisions that would increase resources for the U.S. Department of Justice to tackle its backlog of MLAT requests, or otherwise improve the MLAT system.

A growing chorus of privacy groups in the United States opposes the CLOUD Act’s broad expansion of U.S. and foreign law enforcement’s unilateral powers over cross-border data. For example, Sharon Bradford Franklin of OTI (and the former executive director of the U.S. Privacy and Civil Liberties Oversight Board) objects that the CLOUD Act will move law enforcement access capabilities “in the wrong direction, by sacrificing digital rights.” CDT and Access Now also oppose the bill.

Sadly, some major U.S. technology companies and legal scholars support the legislation. But, to set the record straight, the CLOUD Act is not a “good start.” Nor does it do a “remarkable job of balancing these interests in ways that promise long-term gains in both privacy and security.” Rather, the legislation reduces protections for the personal privacy of technology users in an attempt to mollify tensions between law enforcement and U.S. technology companies.

Legislation to protect the privacy of technology users from government snooping has long been overdue in the United States. But the CLOUD Act does the opposite, and privileges law enforcement at the expense of people’s privacy. EFF strongly opposes the bill. Now is the time to strengthen the MLAT system, not undermine it.

[1] The text of the CLOUD Act does not limit U.S. law enforcement to serving orders on U.S. companies or companies operating in the United States. The Constitution may prevent the assertion of jurisdiction over service providers with little or no nexus to the United States.

Related Cases: In re Warrant for Microsoft Email Stored in Dublin, Ireland

Twilio Demonstrates Why Courts Should Review Every National Security Letter

The list of companies who exercise their right to ask for judicial review when handed national security letter gag orders from the FBI is growing. Last week, the communications platform Twilio posted two NSLs after the FBI backed down from its gag orders. As Twilio’s accompanying blog post documents, the FBI simply couldn’t or didn’t want to justify its nondisclosure requirements in court. This might be the starkest public example yet of why courts should be involved in reviewing NSL gag orders in all cases.

National security letters are a kind of subpoena that gives the FBI the power to require telecommunications and Internet providers to hand over private customer records—including names, addresses, and financial records. The FBI nearly always accompanies these requests with a blanket gag order, shutting up the providers and keeping the practice in the shadows, away from public knowledge or criticism.

Although NSL gag orders severely restrict the providers’ ability to talk about their involvement in government surveillance, the FBI can issue them without court oversight. Under the First Amendment, “prior restraints” like these gag orders are almost never allowed, which is why EFF and our clients CREDO Mobile and Cloudflare have for years been suing to have the NSL statute declared unconstitutional. In response to our suit, Congress included in the 2015 USA FREEDOM Act a process to allow providers to push back against those gag orders.

The new process (referred to as “reciprocal notice”) gives technology companies a right to request judicial review of the gag orders accompanying NSLs. When a company invokes the reciprocal notice process, the government is required to bring the gag order before a judge within 30 days. The judge then reviews the gag order and either approves, modifies, or invalidates it. The company can appear in that proceeding to argue its case, but is not required to do so.

Under the law, reciprocal notice is just an option. It’s no substitute for the full range of First Amendment protections against improper prior restraints, let alone mandatory judicial review of NSL gags in all cases. Nevertheless, EFF encourages all providers to invoke reciprocal notice because it’s the best mechanism available to Internet companies to voice their objections to NSLs. In our 2017 Who Has Your Back report, we awarded gold stars to companies that promised to tell the FBI to go to court for all NSLs, including giants like Apple and Dropbox.

Twilio is the latest company to follow this best practice. It received the two national security letters in May 2017, both of which included nondisclosure requirements preventing Twilio from notifying its users about the government request. And both times, Twilio successfully invoked reciprocal notice, leading the FBI to give permission to publish the letters. This might seem surprising, given that in order to issue a gag, the FBI is supposed to certify that disclosure of the NSL risks serious harm related to an investigation involving national security.

But rather than going to court to back up its certification, the FBI backed down. It retracted one of the NSLs entirely, so that Twilio was not forced to hand over any information at all. For the other, the FBI simply removed the gag order, allowing Twilio to inform its customer and publish the NSL.

This is not what the proper use of a surveillance tool looks like. Instead, it reveals a regime of censorship by attrition. The FBI imposes thousands of NSL gag orders a year, and by default, these gag orders remain in place indefinitely. Only when a company like Twilio objects does the government bear even a minimal burden of showing its work. Without a legal obligation to do so in all cases, the FBI can simply hope most companies don’t speak up.

That’s why it’s so crucial that companies like Twilio take responsibility and invoke reciprocal notice. Better still, Twilio also published a list of best practices that companies can look to when responding to NSLs, including template language to push back on standard nondisclosure requirements. (Automattic, the company behind WordPress, published a similar template last year.)

As the company explained, “The process for receiving and responding to national security letters has become less opaque, but there’s still more room for sunlight.”

We couldn’t agree more. Hopefully if more companies follow the lead of Apple, Dropbox, Twilio and the others who received stars on our report, the courts and Congress will see the need for further reform of the law.

Keep Border Spy Tech Out of Dreamer Protection Bills

If Congress votes this month on legislation to protect Dreamers from deportation, any bill it considers should not include invasive surveillance technologies like biometric screening, social media snooping, automatic license plate readers, and drones. Such high tech spying would unduly intrude on the privacy of immigrants and Americans who live near the border and travel abroad.

How We Got Here

In September 2017, President Trump announced that, effective March 2018, his administration would end the Obama administration’s Deferred Action for Childhood Arrivals (DACA) program, which protects from deportation some 800,000 young adults (often called Dreamers) brought to the United States as children. In January 2018, Senate Majority Leader Mitch McConnell (R-KY) promised to hold a vote in February 2018 on an immigration bill that protects Dreamers. In response to this promise, Democratic Party Senators voted with Republican Party Senators to end last month’s government shutdown. That immigration vote could occur as early as next week, before a short-term federal funding law expires on February 8.

President Trump’s recent framework for immigration legislation calls for unspecified “technology” to secure the border. That framework also calls for border wall funding, more immigration enforcement personnel, faster deportations, new limits on legal immigration, and a path to citizenship for Dreamers.

A bill recently filed by House Judiciary Committee Chair Bob Goodlatte (R-VA) and House Homeland Security Committee Chair Michael McCaul (R-TX) includes a similar blend of immigration policies. This bill (H.R. 4760) may be the vehicle for Sen. McConnell to try to keep his promise of an immigration vote this month.

This year’s Goodlatte-McCaul bill includes many high tech border spying provisions recycled from three bills filed last year: S. 1757, S. 2192, and H.R. 3548. EFF opposed these bills, and now opposes the Goodlatte-McCaul bill.

Biometric Screening at the Border

The Goodlatte-McCaul bill (section 2106) would require the U.S. Department of Homeland Security (DHS) to collect biometric information from people leaving the country, including both U.S. citizens and foreigners. The bill also requires collection of “multiple modes of biometrics.” Further, the new system must be “interoperable” with other systems, meaning together the systems can pool ever-larger sets of biometrics gathered for different purposes by different agencies.

The bill would codify and expand an existing DHS program of facial recognition screening of all travelers, U.S. citizens and foreigners alike, who take certain flights out of the country. Instead, Congress should simply end this invasive program. Biometric screening is a unique threat to our privacy: it is easy for other people to capture our biometrics, and once this happens, it is hard for us to do anything about it. Once the government collects our biometrics, data thieves might steal it, government employees might misuse it, and policy makers might deploy it to new government programs. Also, facial recognition has significant accuracy problems, especially for people of color.

Further, this bill’s border biometric screening must be understood as just the first step towards what DHS is already demanding: biometric screening throughout our domestic airports.

Social Media Snooping on Visa Applicants

The Goodlatte-McCaul bill (section 3105) would authorize DHS to snoop on the social media of visa applicants from so-called “high-risk countries.”

This would codify and expand existing DHS and State Department programs of screening the social media of certain visa applicants. EFF opposes these programs. Congress should end them. They threaten the digital privacy and freedom of expression of innocent foreign travelers, and the many U.S. citizens and lawful permanent residents who communicate with them.

The government permanently stores this captured social media information in a record system known as “Alien Files.” The government is now trying to build an artificial intelligence (AI) system to screen this social media information for signs of criminal intent. The government calls this planned system “extreme vetting.” Privacy and immigrant advocates call it a “digital Muslim ban.” Scores of AI experts concluded that this AI system will likely be “inaccurate and biased.”

Moreover, the bill would empower DHS to decide which countries are “high-risk,” based on “any” criteria it deems “appropriate.” DHS may use this broad authority to improperly target social media screening at nations with majority Muslim populations.

Drone Flights Near the Border

The Goodlatte-McCaul bill (sections 1112, 1113, and 1117) would expand drone flights near the border. Unfortunately, the bill does not limit the flight paths of these drones. Nor does it limit the collection, storage, and sharing of sensitive information about the whereabouts and activities of innocent bystanders.

Drones can capture personal information, including faces and license plates, from all of the people on the ground within the range and sightlines of a drone. Drones can do so secretly, thoroughly, inexpensively, and at great distances. Millions of U.S. citizens and immigrants live close to the U.S. border, and deployment of drones at the U.S. border will invariably capture personal information from vast numbers of innocent people.

ALPRs Near the Border

The Goodlatte-McCaul bill (section 2104) would require DHS to upgrade its automatic license plate readers (ALPRs) at the border, and authorize spending of $125 million to do this. It is unclear whether this provision applies only to ALPRs at border crossings, or also to ALPRs at interior checkpoints, some of which are located as far as 100 miles from the border.

Millions of U.S. citizens and immigrants who live near the U.S. border routinely drive through these interior checkpoints on their way to work and school, while avoiding any actual passage through the U.S. border itself. The federal government should not subject them to ALPR surveillance merely because they live near the border.

ALPRs collect highly sensitive location information. DHS already is using private ALPR databases to locate and deport undocumented immigrants. Likewise, it already is using its own ALPRs at interior checkpoints to enforce immigration laws.

Dreamers and Surveillance

For years, EFF has worked to protect immigrants from high tech spying. For example, we support legislation that would bar state and local police agencies from diverting their criminal justice databases to immigration enforcement. Some Dreamers fear a similar form of digital surveillance: diversion of the federal government’s DACA database, created to assist Dreamers, to instead locate and deport them.

New legislation to protect Dreamers from deportation should not come at the price of other high tech spying on immigrants and others, including biometric screening, social media monitoring, drones, and ALPRs.

How Congress’s Extension of Section 702 May Expand the NSA’s Warrantless Surveillance Authority

Last month, Congress reauthorized Section 702, the controversial law the NSA uses to conduct some of its most invasive electronic surveillance. With Section 702 set to expire, Congress had a golden opportunity to fix the worst flaws in the NSA’s surveillance programs and protect Americans’ Fourth Amendment rights to privacy. Instead, it reupped Section 702 for six more years.

But the bill passed by Congress and signed by the president, labeled S. 139, didn’t just extend Section 702’s duration. It also may expand the NSA’s authority in subtle but dangerous ways.

The reauthorization marks the first time that Congress passed legislation that explicitly acknowledges and codifies some of the most controversial aspects of the NSA’s surveillance programs, including “about” collection and “backdoor searches.” That will give the government more legal ammunition to defend these programs in court, in Congress, and to the public. It also suggests ways for the NSA to loosen its already lax self-imposed restraints on how it conducts surveillance.

Background: NSA Surveillance Under Section 702

First passed in 2008 as part of the FISA Amendments Act—and reauthorized last week until 2023—Section 702 is the primary legal authority that the NSA uses to conduct warrantless electronic surveillance against non-U.S. “targets” located outside the United States. The two publicly known programs operated under Section 702 are “upstream” and “downstream” (formerly known as “PRISM”).

Section 702 differs from other foreign surveillance laws because the government can pick targets and conduct the surveillance without a warrant signed by a judge. Instead, the Foreign Intelligence Surveillance Court (FISC) merely reviews and signs off on the government’s high-level plans once a year.

In both upstream and downstream surveillance, the intelligence community collects and searches communications it believes are related to “selectors.” Selectors are search terms that apply to a target, like an email address, phone number, or other identifier.

Under downstream, the government requires companies like Google, Facebook, and Yahoo to turn over messages “to” and “from” a selector—gaining access to things like emails and Facebook messages.

Under upstream, the NSA relies on Internet providers like AT&T to provide access to large sections of the Internet backbone, intercepting and scanning billions of messages rushing between people and through websites. Until recently, upstream resulted in the collection of communications to, from, or about a selector. More on “about” collection below.

The overarching problem with these programs is that they are far from “targeted.” Under Section 702, the NSA collects billions of communications, including those belonging to innocent Americans who are not actually targeted. These communications are then placed in databases that other intelligence and law enforcement agencies can access—for purposes unrelated to national security—without a warrant or any judicial review.

In countless ways, Section 702 surveillance violates the privacy and other constitutional rights of Americans, not to mention the communications privacy of millions of people around the world, which is ignored entirely.

This is why EFF vehemently opposed the Section 702 reauthorization bill that the President recently signed into law. We’ve been suing since 2006 over the NSA’s mass surveillance of the Internet backbone and trying to end these practices in the courts. While S. 139 was described by some as a reform, the bill was really a total failure to address the problems with Section 702. Worse still, it may expand the NSA’s authority to conduct this intrusive surveillance.

Codified “About” Collection

One key area where the new reauthorization could expand Section 702 is the practice commonly known as “about” collection (or “abouts” collection in the language of the new law). For years, when the NSA conducted its upstream surveillance of the Internet backbone, it collected not just communications “to” and “from” a selector like an email address, but also messages that merely mentioned that selector in the message body.

This is a staggeringly broad dragnet tactic. Have you ever written someone’s phone number inside an email to someone else? If that number was an NSA selector, your email would have been collected, though neither you nor the email’s recipient was an NSA target. Have you ever mentioned someone’s email address through a chat service at work? If that email address was an NSA selector, your chat could have been collected, too.
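To make the distinction concrete, here is a minimal sketch of the difference between to/from matching and “about” matching. Everything in it (the message format, the selector values, the function names) is invented for illustration and does not describe any real surveillance system:

    # A purely illustrative sketch of "to/from" vs. "about" matching.
    # The message structure and selectors below are invented for this
    # example; they do not describe any real system.

    SELECTORS = {"target@example.com", "+1-202-555-0100"}

    def matches_to_from(message):
        # "To/from" collection: a selector must identify a party
        # to the communication.
        parties = set(message["to"]) | {message["from"]}
        return bool(SELECTORS & parties)

    def matches_about(message):
        # "About" collection: a selector merely appears somewhere in
        # the message body, even though neither the sender nor any
        # recipient is a target.
        return any(selector in message["body"] for selector in SELECTORS)

    email = {
        "from": "alice@example.org",
        "to": ["bob@example.org"],
        "body": "You can reach them at +1-202-555-0100.",
    }

    print(matches_to_from(email))  # False: neither party is a target
    print(matches_about(email))    # True: the body mentions a selector

Under upstream as practiced before April 2017, a hit on the second check alone was enough for a message between two non-targets to be collected.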

“About” collection involves scanning and collecting the contents of Americans’ Fourth Amendment-protected communications without a warrant. That’s unconstitutional, and the NSA should never have been allowed to do it in the first place. Unfortunately, the FISC and other oversight bodies tasked with overseeing Section 702 surveillance often ignore major constitutional issues. 

So the FISC permitted “about” collection to go on for years, even though the collection continued to raise complex legal and technical problems. In 2011, the FISC warned the NSA against collecting too many “non-target, protected communications,” in part due to “about” collection. Then the court imposed limits on upstream, including limits on how “about” communications were handled. And when the Privacy and Civil Liberties Oversight Board issued its milquetoast report on Section 702 in 2014, it said that “about” collection pushed “the entire program close to the line of constitutional reasonableness.”

For its part, the NSA asserted that “about” collection was technically necessary to ensure the agency actually collected all of the to/from communications it was supposedly entitled to.

In April 2017, we learned that the NSA’s technical and legal problems with “about” collection were even more pervasive than previously disclosed, and it had not been complying with the FISC’s already permissive limits. As a result, the NSA publicly announced it was ending “about” collection entirely. This was something of a victory, following years of criticism and pressure from civil liberties groups and internal government oversight. But the program suspension rested on technical and legal issues that may change over time, and not a change of heart or a controlling rule. Indeed, the suspension is not binding on the NSA in the future, since it could simply restart “about” collection once it figured out a “technical” solution to comply with the FISC’s limits.

Critically, as originally written, Section 702 did not mention “about” collection. Nor did Section 702 provide any rules on collecting, accessing, or sharing data obtained through “about” collection.

But the new reauthorization codifies this controversial NSA practice.

According to the new law, “The term ‘abouts communication’ means a communication that contains a reference to, but is not to or from, a target of an acquisition authorized under section 702(a) of the Foreign Intelligence Surveillance Act of 1978.”

Under the new law, if the intelligence community wants to restart “about” collection, it has a path to doing so that includes finding a way to comply with the FISC’s minimal limitations. Once that’s done, an affirmative act of Congress is required to prevent it. If Congress does not act, then the NSA is free to continue this highly invasive “about” collection.

Notably, by including collection of communications that merely “contain a reference to . . .  a target,” the new law may go further than the NSA’s prior practice of collecting communications content that contained specific selectors. The NSA might well argue that the new language allows them to collect emails that refer to targets by name or in other less specific ways, rather than actually containing a target’s email address, phone number, or other “selectors.”

Beyond that, the reauthorization codifies a practice that, up to now, has existed solely due to the NSA’s interpretation and implementation of the law. Before this year’s Section 702 reauthorization, the NSA could not credibly argue Congress had approved the practice. Now, if the NSA restarts “about” collection, it will argue it has express statutory authorization to do so. Explicitly codifying “about” collection is thus an expansion of the NSA’s spying authority.

Finally, providing a path to restart that practice absent further Congressional oversight, when that formal procedure did not exist before, is an expansion of the NSA’s authority.

For years, the NSA has pushed its boundaries. According to multiple unsealed FISC opinions, the NSA has repeatedly violated its own policies on collection, access, and retention. Infamously, relying on an unjustifiable interpretation of a separate statute—Section 215—the NSA illegally conducted bulk collection of Americans’ phone records for years. The NSA created that bulk phone record program, and persuaded the FISC to condone it, despite having begun the collection without any court or statutory authority whatsoever.

History teaches that when Congress gives the NSA an inch, the NSA will take a mile. So we fear that the new NSA spying law’s unprecedented language on “about” collection will contribute to an expansion of the already excessive Section 702 surveillance.

Codified Backdoor Searches

The Section 702 reauthorization provides a similar expansion of the intelligence community’s authority to conduct warrantless “backdoor searches” of databases of Americans’ communications. To review, the NSA’s surveillance casts an enormously wide net, collecting (and storing) billions of emails, chats, and other communications involving Americans who are not targeted for surveillance. The NSA calls this “incidental collection,” although it is far from unintended. Once collected, these communications are often stored in databases which can be accessed by other agencies in the intelligence community, including the FBI. The FBI routinely runs searches of these databases using identifiers belonging to Americans when starting—or even before officially starting—investigations into domestic crimes that may have nothing to do with foreign intelligence issues. As with the initial collection, government officials conduct backdoor searches of Section 702 communications content without getting a warrant or other individualized court oversight—which violates the Fourth Amendment.
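The mechanics matter here. In a rough, purely hypothetical sketch (none of these names or structures come from any real system), the constitutional problem is that the only check happens at collection time, against the foreign target; the later query against an American’s identifier faces no warrant requirement at all:

    # A purely hypothetical sketch of why "backdoor searches" bypass
    # the warrant requirement. Names and structures are invented.

    # Communications collected under Section 702: the target was
    # foreign, but Americans' messages are swept in "incidentally"
    # and stored alongside the target's.
    database = [
        {"parties": ["target@example.com", "american@example.org"],
         "body": "(an American's private email, collected incidentally)"},
    ]

    def backdoor_search(identifier):
        # A later query for a U.S. person's identifier runs directly
        # against the stored communications. No warrant, and no
        # individualized court review, is required at this step.
        return [m for m in database if identifier in m["parties"]]

    # An agency can retrieve an American's communications for a
    # domestic investigation unrelated to foreign intelligence.
    hits = backdoor_search("american@example.org")
    print(len(hits))  # 1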

Just as with "about" collection, nothing in the original text of Section 702 authorized or even mentioned the unconstitutional practice of backdoor searches. While that did not stop the FISC from approving backdoor searches under certain circumstances, it did lead other courts to uphold surveillance conducted under Section 702 and ignore whether these searches are constitutional.

Just as with "about" collection, the latest Section 702 reauthorization acknowledges backdoor searches for the first time. It imposes a warrant requirement only in very narrow circumstances: where the FBI runs a search in a “predicated criminal investigation” not connected to national security. Under FBI practice, a predicated investigation is a formal, advanced case. By all accounts, though, backdoor searches are normally used far earlier. In other words, the new warrant requirement will rarely, if ever, apply. It is unlikely to prevent a fishing expedition through Americans’ private communications. Even where a search is inspired by a tip about a serious domestic crime [.pdf], the FBI should not have warrantless access to a vast trove of intimate communications that would otherwise require complying with stringent warrant procedures.

But following the latest reauthorization, the government will probably argue that Congress gave its OK to the FBI searching sensitive data obtained through NSA spying under Section 702, and using it in criminal cases against Americans.

In sum, the latest reauthorization of Section 702 is best seen as an expansion of the government’s spying powers, and not just an extension of the number of years that the government may exercise these powers. Either way, the latest reauthorization is a massive disappointment. That’s why we’ve pledged to redouble our commitment to seek surveillance reform wherever we can: through the courts, through the development and spread of technology that protects our privacy and security, and through Congressional oversight.

Code Review Isn't Evil. Security Through Obscurity Is.

On January 25th, Reuters reported that software companies like McAfee, SAP, and Symantec allow Russian authorities to review their source code, and that "this practice potentially jeopardizes the security of computer networks in at least a dozen federal agencies." The article goes on to explain what source code review looks like and which companies allow source code reviews, and reiterates that "allowing Russia to review the source code may expose unknown vulnerabilities that could be used to undermine U.S. network defenses."

The spin of this article implies that requesting code reviews is malicious behavior. This is simply not the case. Reviewing source code is an extremely common practice, conducted by ordinary companies as well as software and security professionals to verify the safety of the software they install. The article also notes that “Reuters has not found any instances where a source code review played a role in a cyberattack.” At EFF, we routinely conduct code reviews of any software that we elect to use.

Just to be clear, we don’t want to downplay foreign threats to U.S. cybersecurity, or encourage the exploitation of security vulnerabilities—on the contrary, we want to promote open-source and code review practices as stronger security measures. EFF strongly advocates for the use and spread of free and open-source software for this reason.

Not only are software companies barring foreign governments from conducting source code reviews; trade agreements are now being used to prohibit countries from requiring review of the source code of imported products. The first such prohibition in a completed trade agreement will be in the Comprehensive and Progressive Trans-Pacific Partnership (CPTPP, formerly just the TPP), which is due to be signed in March this year. A similar provision has been proposed for inclusion in the modernized North American Free Trade Agreement (NAFTA) and in Europe’s upcoming bilateral trade agreements. EFF has expressed concern that such prohibitions on mandatory source code review could stand in the way of legitimate measures to ensure the safety and quality of software such as VPN and secure messaging apps, and devices such as routers and IP cameras.

The implicit assumption that "keeping our code secret makes us safer" is extremely dangerous. Security researchers and experts have made it explicit time and time again that relying solely on security through obscurity simply does not work. Even worse, it gives engineers a false sense of safety, and can encourage further bad security practices.

Even in times of political tension and uncertainty, we should keep our wits about us. Allowing code review is not a direct affront to national security—in fact, we desperately need more of it.

ETICAS Releases First Ever Evaluations of Spanish Internet Companies' Privacy and Transparency Practices

It’s Spain's turn to take a closer look at the practices of its local Internet companies, and how they treat their customers’ personal data.

Spain's ¿Quien Defiende Tus Datos? (Who Defends Your Data?) is a project of ETICAS Foundation, and is part of a region-wide initiative by leading Iberoamerican digital rights groups to shine a light on Internet privacy practices in Iberoamerica. The report is based on EFF's annual Who Has Your Back? report, but adapted to local laws and realities (A few months ago Brazil’s Internet Lab, Colombia’s Karisma Foundation, Paraguay's TEDIC, and Chile’s Derechos Digitales published their own 2017 reports, and Argentinean digital rights group ADC will be releasing a similar study this year).

ETICAS surveyed a total of nine Internet companies whose logs hold intimate records of the movements and relationships of the majority of Spain's population. The five telecommunications companies surveyed—Movistar, Orange, Vodafone-ONO, Jazztel, and MásMóvil—together make up the vast majority of the fixed, mobile, and broadband market in Spain. ETICAS also surveyed the four most popular online platforms for buying and renting houses—Fotocasa, Idealista, Habitaclia, and Pisos.com. In the tradition of Who Has Your Back?, ETICAS evaluated the companies for their commitment to privacy and transparency, and awarded stars based on their current practices and public behavior. Each company was given the opportunity to answer a questionnaire, to take part in a private interview, and to send any additional information it felt appropriate, all of which was incorporated into the final report.

ETICAS rankings for Spanish ISPs and phone companies are below; the full report, which includes details about each company, is available at: https://eticasfoundation.org/qdtd

ETICAS reviewed each company in five categories:

  1. Privacy Policy: whether the company's privacy policy is linked from its main website, whether it tells users which data are processed and how long they are stored, and whether users are notified of changes to the policy.
  2. According to law: whether the company publishes its law enforcement guidelines and whether it hands over data in accordance with the law.
  3. Notification: whether they provide prior notification to customers of government data demands.  
  4. Transparency: whether they publish transparency reports.
  5. Promote users’ privacy in courts or Congress: whether the company has publicly stood up for its users’ privacy.

Conclusion

[Chart: results of the ETICAS survey of nine Internet companies]

Companies in Spain are off to a good start, but still have a long way to go to fully protect their customers’ personal data and be transparent about who has access to it. This year's report shows Telefónica-Movistar taking the lead, followed closely by Orange, but both still have plenty of room for improvement, especially on transparency reports and user notification. In 2018, competitors could catch up by providing better user notification of surveillance, publishing transparency reports and law enforcement guidelines, and making their data protection policies clear and public.

ETICAS is expected to release this report annually to incentivize companies to improve transparency and protect user data. This way, all Spaniards will have access to information about how their personal data is used and how it is controlled by ISPs so they can make smarter consumer decisions. We hope the report will shine with more stars next year.
