ETICAS Releases First Ever Evaluations of Spanish Internet Companies' Privacy and Transparency Practices

It’s Spain's turn to take a closer look at the practices of its local Internet companies, and how they treat their customers’ personal data.

Spain's ¿Quién Defiende Tus Datos? (Who Defends Your Data?) is a project of ETICAS Foundation, and is part of a region-wide initiative by leading Iberoamerican digital rights groups to shine a light on Internet privacy practices in Iberoamerica. The report is based on EFF's annual Who Has Your Back? report, but adapted to local laws and realities. (A few months ago, Brazil's InternetLab, Colombia's Karisma Foundation, Paraguay's TEDIC, and Chile's Derechos Digitales published their own 2017 reports, and the Argentinean digital rights group ADC will release a similar study this year.)

ETICAS surveyed a total of nine Internet companies, whose logs hold intimate records of the movements and relationships of the majority of the country's population. The five telecommunications companies surveyed—Movistar, Orange, Vodafone-ONO, Jazztel, and MásMóvil—together make up the vast majority of the fixed, mobile, and broadband market in Spain. ETICAS also surveyed four of the most popular online platforms for buying and renting houses, including Fotocasa, Idealista, and Habitaclia. In the tradition of Who Has Your Back?, ETICAS evaluated the companies on their commitment to privacy and transparency, and awarded stars based on their current practices and public behavior. Each company was given the opportunity to answer a questionnaire, to take part in a private interview, and to send any additional information it felt appropriate, all of which was incorporated into the final report. This approach is based on EFF’s earlier Who Has Your Back? work in the United States, although the specific questions in ETICAS’ study were adapted to Spain’s local laws and realities.

ETICAS rankings for Spanish ISPs and phone companies are below; the full report, which includes details about each company, is available at:

ETICAS reviewed each company in five categories:

  1. Privacy Policy: whether the company's privacy policy is linked from its main website, whether it tells users which data are being processed and how long they are stored, and whether users are notified of changes to the policy.
  2. According to law: whether they publish their law enforcement guidelines and whether they hand over data according to the law.
  3. Notification: whether they provide prior notification to customers of government data demands.  
  4. Transparency: whether they publish transparency reports.
  5. Promote users’ privacy in courts or Congress: whether they have taken a public stand to promote their users’ privacy.


A chart describing the results of the ETICAS survey of nine Internet companies

Companies in Spain are off to a good start, but they still have a way to go to fully protect their customers’ personal data and to be transparent about who has access to it. This year’s report shows Telefónica-Movistar taking the lead, followed closely by Orange, but both still have plenty of room for improvement, especially on transparency reports and user notification. For 2018, competitors can catch up by providing better user notification of surveillance, publishing transparency reports and law enforcement guidelines, and making their data protection policies clear.

ETICAS is expected to release this report annually to incentivize companies to improve transparency and protect user data. This way, all Spaniards will have access to information about how their personal data is used and how it is controlled by ISPs so they can make smarter consumer decisions. We hope the report will shine with more stars next year.

When Trading Track Records Means Less Privacy

Sharing your personal fitness goals—lowered heart rates, accurate calorie counts, jogging times, and GPS paths—sounds like a fun, competitive feature offered by today’s digital fitness trackers, but a recent report from The Washington Post highlights how this same feature might end up revealing not just where you are, where you’ve been, and how often you’ve traveled there, but sensitive national security information.

According to The Washington Post report, the fitness tracking software company Strava—whose software is integrated into devices made by Fitbit and Jawbone—posted a “heat map” in November 2017 showing the activity of some of its 27 million users around the world. Unintentionally included in that map were the locations, daily routines, and possible supply routes of disclosed and undisclosed U.S. military bases and outposts, including what appear to be classified CIA sites.

Though the revealed information itself was anonymized—meaning map viewers could not easily determine identities of Strava customers with the map alone—when read collectively, the information resulted in a serious breach of privacy.

Shared on Twitter, the map led to several discoveries, the report said.

“Adam Rawnsley, a Daily Beast journalist, noticed a lot of jogging activity on the beach near a suspected CIA base in Mogadishu, Somalia.

Another Twitter user said he had located a Patriot missile system site in Yemen.

Ben Taub, a journalist with the New Yorker, homed in on the location of U.S. Special Operations bases in the Sahel region of Africa.”

On Monday, according to a follow-up report by The Washington Post, the U.S. military said it was reviewing guidelines on how it uses wireless devices.

As the Strava map became more popular, the report said, Internet users were able to further de-anonymize the data, pairing it to information on Strava’s website.

According to The Washington Post's follow-up report:

“On one of the Strava sites, it is possible to click on a frequently used jogging route and see who runs the route and at what times. One Strava user demonstrated how to use the map and Google to identify by name a U.S. Army major and his running route at a base in Afghanistan.”

The media focused on one particular group affected by this privacy breach: the U.S. military. But of course, regular people’s privacy is impacted even more by privacy leaks such as this. For instance, according to a first-person account written in Quartz last year, one London jogger was surprised to learn that, even with strict privacy control settings on Strava, her best running times—along with her first and last name and photo—were still visible to strangers who peered into her digital exercise activity. These breaches came through an unintended bargain, in which customers traded their privacy for access to social fitness tracking features that didn’t exist several years ago.

And these breaches happened even though Strava attempted to anonymize its customers’ individual data. That clearly wasn’t enough. Often, our understanding of “anonymous” is wrong—invasive database cross-referencing can reveal all sorts of private information, dispelling any efforts at meaningful online anonymity.
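The cross-referencing described above can be sketched in a few lines of Python. The records, field names, and matching attributes below are entirely hypothetical; the point is only that a join on shared quasi-identifiers (here, a home-location grid cell plus a habitual start time) is enough to attach a name to an "anonymized" record:

```python
# Hypothetical illustration: pseudonymous fitness records re-identified
# by joining them against a separate public dataset on quasi-identifiers.

# "Anonymized" heat-map records: no names, just pseudonymous IDs.
heatmap = [
    {"user": "u1", "home_cell": (51.507, -0.128), "usual_start": "06:30"},
    {"user": "u2", "home_cell": (40.416, -3.703), "usual_start": "19:00"},
]

# Separately gathered public profiles (e.g. a running-club leaderboard).
profiles = [
    {"name": "A. Jogger", "city_cell": (51.507, -0.128), "club_run": "06:30"},
]

def reidentify(heatmap, profiles):
    """Link pseudonymous records to named profiles via shared attributes."""
    matches = {}
    for rec in heatmap:
        for prof in profiles:
            if (rec["home_cell"] == prof["city_cell"]
                    and rec["usual_start"] == prof["club_run"]):
                matches[rec["user"]] = prof["name"]
    return matches

print(reidentify(heatmap, profiles))  # {'u1': 'A. Jogger'}
```

No individual field in the heat-map records is identifying on its own; it is the combination, matched against an outside dataset, that breaks the anonymity.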

While “gamified” fitness trackers, especially ones with social competition built in, are fun, they also put a friendly face on Big Brother. When we give control over our personal data—especially sensitive data such as location history—to third parties, we expect it to be kept private. When companies betray that trust, even in “anonymized” form such as the Strava heat map, unintended privacy harms are almost guaranteed. Clearly communicated privacy settings can help in situations like these, but so can company decisions to better protect the data they publish online.

It's Time to Make Student Privacy a Priority

Last month, the Federal Trade Commission and the U.S. Department of Education held a workshop in Washington, DC. The topic was “Student Privacy and Ed Tech.” We at EFF have been trying to get the FTC to focus on the privacy risks of educational technology (or “ed tech”) for over two years, so we eagerly filed formal comments.

We’ve long been concerned about how technology impacts student privacy. As schools and classrooms become increasingly wired, and as schools put more digital devices and services in the hands of students, we’ve been contacted by a large number of concerned students, parents, teachers, and even administrators.

They want to know: What data are ed tech providers collecting about our kids? How are they using it? How well do they disclose (if at all) the scope of their data collection? How much control (if any) do they give to schools and parents over the retention and use of the data they collect? Do they even attempt to obtain parental consent before collecting and using incredibly sensitive student data?

In the spring of 2017, we released the results of a survey that we conducted in order to plumb the depths of the confusion surrounding ed tech. And as it turns out, students, parents, teachers, and even administrators have lots of concerns—and very little clarity—over how ed tech providers protect student privacy.

Drawing from the results of our survey, our comments to the FTC and DOE touched on a broad set of concerns:

  • The FTC has ignored our student privacy complaint against Google. Despite signing a supposedly binding commitment to refrain from collecting student data without parental consent beyond that needed for school purposes, Google openly harvests student search and browsing behavior, and uses that data for its own purposes. We filed a formal complaint with the FTC more than two years ago but have heard nothing back.
  • There is a consistent lack of transparency in ed tech privacy policies and practices. Schools issue devices to students without their parents’ knowledge and consent. Parents are kept in the dark about what apps their kids are required to use and what data is being collected.
  • The investigative burden too often falls on students and parents. With no notice or help from schools, the investigative burden falls on parents and even students to understand the privacy implications of the technology students are using.
  • Data use concerns are unresolved. Parents have extensive concerns about student data collection, retention, and sharing. Many ed tech products and services have weak privacy policies. For instance, it took the lawyers at EFF months to get a clear picture of which privacy policies even applied to Google’s student offerings, much less how they interacted.
  • Lack of choice in ed tech is the norm. Parents who seek to opt their children out of device or software use face many hurdles, particularly those without the resources to provide their own alternatives. Some districts have even threatened to penalize students whose parents refuse to consent to what they believe are egregious ed tech privacy policies and practices.
  • Overreliance on “privacy by policy.” School districts generally rely on the privacy policies of ed tech companies to ensure student data protection. Parents and students, on the other hand, want concrete evidence that student data is protected in practice as well as in policy.
  • There is an unfilled need for better privacy training and education. Both students and teachers want better training in privacy-conscious technology use. Ed tech providers aren’t fulfilling their obligations to schools when they fail to provide even rudimentary privacy training.
  • Ed tech vendors treat existing privacy law as if it doesn’t apply to them. Because the Family Educational Rights and Privacy Act (“FERPA”) generally prohibits school districts from sharing student information with third parties without written parental consent, districts often characterize ed tech companies as “school officials.” However, districts may only do so if—among other things—providers give districts or schools direct control over all student data and refrain from using that data for any other purpose. Despite the fact that current ed tech offerings generally fail those criteria, vendors generally don’t even attempt to obtain parental consent.

We believe it is incumbent upon school districts to fully understand the data and privacy policies and practices of the ed tech products and services they wish to use, to demand that ed tech vendors agree to contract terms that favor the districts and actually protect student privacy, and to be ready to walk away from any company that does not engage in robust privacy practices.

While we understand that school budgets are often tight and that technology can actually enhance the learning experience, we urge regulators, school districts, and the ed tech companies themselves to make student privacy a priority. We hope the FTC and DOE listen to what we, and countless concerned students, parents, and teachers, have to say.

ICE Accesses a Massive Amount of License Plate Data. Will California Take Action?

The news that Immigration and Customs Enforcement is using a massive database of license plate scans from a private company sent shockwaves through the civil liberties and immigrants’ rights communities, which are already sounding the alarm about how mass surveillance will be used to fuel deportation efforts.

The concerns are certainly justified: the vendor, Vigilant Solutions, offers access to 6.5 billion data points, plus millions more collected by law enforcement agencies around the country. Using advanced algorithms, this information—often collected by roving vehicles equipped with automated license plate readers (ALPRs) that scan every license plate they pass—can be used to reveal a driver’s travel patterns and to track a vehicle in real time.

ICE announced the expansion of its ALPR program in December, but without disclosing what company would be supplying the data. While EFF had long suspected Vigilant Solutions won the contract, The Verge confirmed it in a widely circulated story published last week.

In California, this development raises many questions about whether the legislature has taken enough steps to protect immigrants, despite passing laws last year to protect residents from heavy-handed immigration enforcement.

But California lawmakers should have already seen this coming. Two years ago, The Atlantic branded these commercial ALPR databases “an unprecedented threat to privacy.”

Vigilant Solutions tells its law enforcement customers that accessing this data is “as easy as adding a friend on your favorite social media platform.” As a result, California agencies share their data wholesale with hundreds of entities, ranging from small towns in the Deep South to a variety of federal agencies.

An analysis by EFF of records obtained from local police has identified more than a dozen California agencies that have already been sharing ALPR data with ICE through their Vigilant Solutions accounts. The records show that ICE, through its Homeland Security Investigations offices in Newark, New Orleans, and Houston, has had access to data from more than a dozen California police departments for years.

At least one ICE office has access to ALPR data collected by the following police agencies:

  • Anaheim Police Department
  • Antioch Police Department
  • Bakersfield Police Department
  • Chino Police Department
  • Fontana Police Department
  • Fountain Valley Police Department
  • Glendora Police Department
  • Hawthorne Police Department
  • Montebello Police Department
  • Orange Police Department
  • Sacramento Police Department
  • San Diego Police Department
  • Simi Valley Police Department
  • Tulare Police Department

ICE agents have also obtained direct access to this data through user accounts provided by local law enforcement. For example, an ICE officer obtained access through the Long Beach Police Department’s system in November 2016 and ran 278 license plate searches over nine months. Two CBP officers further conducted 578 plate searches through Long Beach’s system during that same period.

It’s important to note that ALPR technology collects and stores data on millions of drivers without any connection to a criminal investigation. As EFF noted, this data can reveal sensitive information about a person, for example, if they visit reproductive health clinics, immigration resource centers, mosques, or LGBTQ clubs. Even attendees at gun shows have found their plates captured by CBP officers, according to the Wall Street Journal.

Police departments must take a hard look at their ALPR systems and un-friend DHS. But the California legislature also has a chance to offer a defense measure for drivers who want to protect their privacy.

Update: The California Senate voted down S.B. 712 on January 30, 2018. 

S.B. 712 would allow drivers to apply a removable cover to their license plates when they are lawfully parked, similar to how drivers are currently allowed to cover their entire vehicles with a tarp to protect their paint jobs from elements. While this would not prevent ALPRs from collecting data from moving vehicles, it would offer privacy for those who want to protect the confidentiality of their destinations.

Before the latest story broke, S.B. 712 was brought to the California Senate floor, where it initially failed on a tied vote, with many Republicans and Democrats—including Sens. Joel Anderson (R-Alpine) and Scott Wiener (D-San Francisco)—joining in support.

Unfortunately, several Democrats, such as Senate President Kevin de León and Sen. Connie Leyva, who have positioned themselves as immigrant advocates, voted against the bill the first time around. Others, such as Sens. Toni Atkins and Ricardo Lara, sat the vote out.

The Senate has one last chance to pass the bill and send it to the California Assembly by January 31. The bill is urgently necessary to protect the California driving public from surveillance.

Californians: join us today in urging your senator to stand up for privacy, not the interests of ICE or the myriad financial institutions, insurance companies, and debt collectors that also abuse this mass data collection.

EFF's Fight to End Warrantless Device Searches at the Border: A Roundup of Our Advocacy

EFF has been working on multiple fronts to end a widespread violation of digital liberty—warrantless searches of travelers’ electronic devices at the border. Government policies allow border agents to search and confiscate our cell phones, tablets, and laptops at airports and border crossings for no reason, without explanation or any suspicion of wrongdoing. It’s as if our First and Fourth Amendment rights don’t exist at the border. This is wrong, which is why we’re working to challenge and end these unconstitutional practices.

EFF and the ACLU filed a brief today in our Alasaad v. Nielsen lawsuit to oppose the government’s attempt to dismiss our case. Our lawsuit, filed in September 2017 on behalf of 11 Americans whose devices were searched, takes direct aim at the illegal policies enforced by the U.S. Department of Homeland Security and its component agencies, U.S. Customs and Border Protection (CBP) and U.S. Immigration and Customs Enforcement (ICE). In our brief we explain that warrantless searches of electronic devices at the border violate the First and Fourth Amendments, and that our 11 clients have every right to bring this case.

This is just the latest action we’ve taken in the fight for digital rights at the border. EFF is pushing back against the government’s invasive practices on three distinct fronts: litigation, legislation, and public education.

A Rampant Problem

Over the past few years there has been a dramatic increase in the number of searches of cell phones and other electronic devices conducted by border agents. CBP reported that in fiscal year 2012 the number of border device searches was 5,085. In fiscal year 2017, the number had increased to 30,200—a six-fold increase in just five years.

DHS claims the authority to ransack travelers’ cell phones and other devices and the massive troves of highly personal information they contain. ICE agents can do so for any reason or no reason. Under a new policy issued earlier this month, CBP agents can do so without a warrant or probable cause, and usually can do so without even reasonable suspicion.

Also, agents can and do confiscate devices for lengthy periods of time and subject them to extensive examination.

These practices are unconstitutional invasions of our privacy and free speech. Our electronic devices contain our emails, text messages, photos and browsing history. They document our travel patterns, shopping habits, and reading preferences. They expose our love lives, health conditions, and religious and political beliefs. They reveal whom we know and associate with. Warrantless device searches at the border violate travelers’ rights to privacy under the Fourth Amendment, and freedoms of speech, press, private association, and anonymity under the First Amendment.

These practices have existed at least since the George W. Bush administration and continued through the Obama administration. But given the recent dramatic uptick in the number of border device searches since President Trump took office, a former DHS chief privacy officer, Mary Ellen Callahan, concluded that the increase was “clearly a conscious strategy,” and not “happenstance.”

But the U.S. border is not a Constitution-free zone. The Fourth Amendment requires the government to obtain a probable cause warrant before conducting a border search of a traveler’s electronic device. This follows from the U.S. Supreme Court case Riley v. California (2014). The court held that police need a warrant to search the cell phones of people they arrest.

The warrant process is critical because it provides a check on government power and, specifically, a restraint on arbitrary invasions of privacy. In seeking a warrant, a government agent must provide sworn testimony before a neutral arbiter—a judge—asserting why the government believes there’s some likelihood (“probable cause”) that the cell phone or other thing to be searched contains evidence of criminality. If the judge is convinced, she will issue the search warrant, allowing the government to access your private information even if you don’t consent.

Right now, there are no such constraints on CBP and ICE agents—but we’re fighting in court and in Congress to change this.

Litigation

On September 13, 2017, EFF along with ACLU filed our lawsuit, Alasaad v. Nielsen, against the federal government on behalf of ten U.S. citizens and one lawful permanent resident whose smartphones and other devices were searched without a warrant at the U.S. border. The plaintiffs include a military veteran, journalists, students, an artist, a NASA engineer, and a business owner. Several are Muslims or people of color. All were reentering the country after business or personal travel when border agents searched their devices. None were subsequently accused of any wrongdoing.

Each of the Alasaad plaintiffs suffered a substantial privacy invasion. Some plaintiffs were detained for several hours while agents searched their devices, while others had their devices confiscated and were not told when their belongings would be returned. One plaintiff was even placed in a chokehold after he refused to hand over his phone. You can read the detailed stories of all the Alasaad plaintiffs.

In the Alasaad lawsuit, we are asking the U.S. District Court for Massachusetts to find that the policies of CBP and ICE violate the Fourth Amendment. We also allege that the search policies violate the First Amendment. We are asking the court to enjoin the federal government from searching electronic devices at the border without first obtaining a warrant supported by probable cause, and from confiscating devices for lengthy periods without probable cause.

In the past year, EFF also has filed three amicus briefs in U.S. Courts of Appeals (in the Fourth, Fifth, and Ninth Circuits). In those briefs, we argued that border agents need a probable cause warrant to search electronic devices. There are extremely strong and unprecedented privacy interests in the highly sensitive information stored and accessible on electronic devices, and the narrow purposes of the border search exception—immigration and customs enforcement—are not served by warrantless searches of electronic data.

Legislation

EFF is urging the U.S. Congress to pass the Protecting Data at the Border Act. The Act would require border agents to obtain a probable cause warrant before searching the electronic devices of U.S. citizens and legal permanent residents at the border.

The Senate bill (S. 823) is sponsored by Sen. Ron Wyden (D-OR) and Sen. Rand Paul (R-KY). Rep. Polis (D-CO), Rep. Smith (D-WA), and Rep. Farenthold (R-TX) are taking the lead on the House bill (H.R. 1899).

In addition to creating a warrant requirement, the Act would prohibit the government from delaying or denying entry or exit to a U.S. person based on that person’s refusal to hand over a device passcode, online account login credential, or social media handle.

You can read more about this critical bill in our call to action, and our op-ed in The Hill. Please contact your representatives in Congress and urge them to co-sponsor the Protecting Data at the Border Act.

Public Education

Finally, EFF published a travel guide that helps travelers understand their individual risks when crossing the U.S. border (which includes U.S. airports if flying from overseas), provides an overview of the law around border searches, and offers technical guidance for securing digital data.

Our travel guide recognizes that one size does not fit all, and it helps travelers make informed choices regarding their specific situation and risk tolerance. The guide is a useful resource for all travelers who want to keep their digital data safe.

You can download our full report as a PDF. Additionally, you can print EFF’s pocket guide to protecting digital privacy at the border.

Related Cases: Alasaad v. Nielsen

Europe's GDPR Meets WHOIS Privacy: Which Way Forward?

Europe's General Data Protection Regulation (GDPR) will come into effect in May 2018, and with it, a new set of tough penalties for companies that fail to adequately protect the personal data of European users. Amongst those affected are domain name registries and registrars, who are required by ICANN, the global domain name authority, to list the personal information of domain name registrants in publicly-accessible WHOIS directories. ICANN and European registrars have clashed over this long-standing contractual requirement, which does not comply [PDF] with European data protection law.

This was one of the highest-profile topics at ICANN's 60th meeting in Abu Dhabi, which EFF attended last year, with registries and registrars laying the blame on ICANN: they face either liability under the GDPR if they comply with their WHOIS obligations, or contractual liability to ICANN if they don't. ICANN has recognized this and has progressively, if belatedly, been taking steps to remediate the clash between its own rules and the data protection principles that European law upholds.

A Brief History of Domain Privacy at ICANN

ICANN's first step toward improving domain privacy, which dates from 2008 and underwent minor revisions in 2015, was to create a very narrow and cumbersome process by which a party bound by privacy laws that conflict with its contractual requirements can seek an exemption from those requirements from ICANN. Next, in 2015, ICANN commenced a Policy Development Process (PDP) to develop a Next-Generation gTLD Registration Directory Services (RDS) to replace WHOIS. That work remains ongoing, with the intention that the new RDS will be more compatible with privacy laws, probably by providing layered access to registrant data for various classes of authorized users.

Meanwhile, ICANN considered whether to limit registrants' access to a privacy workaround that allows a domain to be registered via a proxy, thereby keeping the registrant's real personal details private. Although ICANN eventually concluded that access to privacy proxy registration services shouldn't be limited [PDF], these services are no substitute for a new RDS that incorporates privacy by design: not all registrars provide the option, and those that do may offer it only as an opt-in service or via a third party that charges money for it.

Then, effective July 2017, ICANN amended its contract with registries to require them to obtain the consent of registrants for their information to be listed online. But again, this is no substitute for the new RDS, because consent that is required as a condition of registering a domain wouldn't qualify as "freely given" under European law. ICANN followed up in November 2017 with a statement that it would abstain from taking enforcement action against registries or registrars that provided it with a "compliance model" seeking to reconcile their contractual obligations with the requirements of data protection law.

Three Interim Options

Finally, with the GDPR deadline fast approaching and the work of the Next-Generation RDS group nowhere near completion, ICANN has issued a set of three possible stop-gap measures for public comment. These three models, based upon legal advice belatedly obtained by ICANN last year [PDF], are intended to protect registries and registrars from liability under the GDPR during the interim period between May 2018 and the final implementation of the recommendations of the Next-Generation RDS PDP. In simple terms, the three options are:

  1. Allowing anyone who self-certifies a legitimate interest in accessing the personal data of an individual registrant to do so.
  2. Setting up a formal accreditation/certification program under which only a defined set of third-party requestors would be authorized to gain access to individual registrants' personal data.
  3. Making registrants' personal data available only under a subpoena or other order from a court or other judicial tribunal of competent jurisdiction.

None of these is a perfect solution for retrofitting privacy onto ICANN's old procedures. In EFF's comments on ICANN's proposals, we ended up supporting the third option, or rather a variation of it. ICANN's option 3 proposal would require a case-by-case evaluation of each field in each registration to determine whether it contains personal data, which seems impractical. Instead, as with option 2, the name, phone number, and address fields should be assumed to contain personal data, and these should be withheld from public display.
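The variation we support can be sketched as a simple redaction pass over each registration record before it is published. The field names and the placeholder string below are illustrative assumptions, not ICANN's actual schema or any required output format:

```python
# Sketch of the "assume personal data" approach: rather than evaluating
# each field case by case, treat the name, phone, and address fields as
# personal data and withhold them from the public WHOIS output.

PERSONAL_FIELDS = {"registrant_name", "registrant_phone", "registrant_address"}

def redact_for_public_display(record: dict) -> dict:
    """Return a copy of a registration record with personal fields withheld."""
    return {
        field: ("REDACTED FOR PRIVACY" if field in PERSONAL_FIELDS else value)
        for field, value in record.items()
    }

record = {
    "domain": "example.com",
    "registrant_name": "Jane Doe",
    "registrant_phone": "+34 600 000 000",
    "registrant_address": "Calle Falsa 123, Madrid",
    "registrar": "Example Registrar S.L.",
    "created": "2015-03-14",
}

public = redact_for_public_display(record)
print(public["registrant_name"])  # REDACTED FOR PRIVACY
print(public["domain"])           # example.com
```

The full record remains available to the registrar for disclosure under a court order; only the public view is filtered.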

ICANN's first option, which would allow anyone to claim a legitimate interest in obtaining registrants' personal data, is unlikely to hold water against the GDPR: requestors could simply lie, or be mistaken about what amounts to a legitimate interest. The second option is likely to be unworkable in practice, especially to implement in such a short space of time. ICANN's legal advisers acknowledge that this option, by requiring a legal evaluation of each third party's legitimate interest in gaining access to registrants' personal information, would:

require the registrars to perform an assessment of interests in accordance with Article 6.1(f) GDPR on an individual case-by-case basis each time a request for access is made. This would put a significant organizational and administrative pressure on the registrars and also require them to obtain and maintain the competence required to make such assessments in order to deliver the requested data in a reasonably timely manner.

Moreover, the case most commonly made for third party access to registration data is for law enforcement authorities and intellectual property rights holders to be able to obtain this data. We already have a system for the formal evaluation of the claims of these parties to gain access to personal data; it's the legal system, through which they can obtain a warrant or a subpoena, either directly if they are in the same country as the registry or registrar, or via a treaty such as a Mutual Legal Assistance Treaty (MLAT) if they are not. This is exactly what ICANN's Model 3 allows, and it's the appropriate standard for ICANN to adopt.

Is the Sky Falling?

Many ICANN stakeholders are concerned that access to the public WHOIS database could change. Among the most vocal opponents of new privacy protections for registrants are some security researchers and anti-abuse experts, for whom it would be impractical to go to a court for a subpoena for that information, even if a court would grant one. Creating, as Model 2 would do, a separate class of Internet "super-users" who could cite their good work as a reason to examine the registrars' personal information databases seems a tempting solution. But we would have serious concerns about seeing ICANN installed as the gatekeeper of who is permitted to engage in security research or abuse mitigation, and thereby to obtain privileged access to registrant data.

Requiring a warrant or subpoena for access to registrants' personal data isn't as radical as its opponents make out. A number of registries, including the country-code registries of most European countries (which are not subject to ICANN's WHOIS rules), already operate in this way. Everyone who is involved in WHOIS research — be they criminals using domains for fraud, WHOIS-scraping spammers, or anti-abuse researchers — is well aware of these more privacy-protective services. It's better for us all to create and support methods of investigation that accept this model of private domain registration than to open up ICANN or its contracted parties to the responsibility of deciding what they should do if, for example, the cyber-security wing of an oppressive government begins to search for the registration data of dissidents.

There are other cases in which it makes sense to allow members of the public to contact the owner of a domain without having to obtain a court order. But this could be achieved if ICANN simply provided something like a CAPTCHA-protected contact form, which would deliver email to the appropriate contact point with no need to reveal the registrant's actual email address. There's no reason why this couldn't be required in conjunction with ICANN's Model 3, to address the legitimate concerns of those who need to contact domain owners for operational or business reasons, and who for whatever reason can't obtain contact details in any other way.
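The relay idea can be pictured as a tiny backend behind the form: the message is forwarded to the registrant's real address, which the sender never sees. A hedged sketch, where all names and data are illustrative rather than any registrar's actual design, and where a real deployment would put a CAPTCHA in front of the form and hand the envelope to a mail server:

```python
# Hypothetical relay-contact sketch: the sender submits (domain,
# message) and the registrar forwards it, keeping the registrant's
# real address hidden. Illustrative only; not ICANN's design.

# Private mapping held by the registrar, never displayed publicly.
REGISTRANT_CONTACTS = {
    "example.com": "owner@example.com",
}

def relay_message(domain, sender, body):
    """Build a forwarding envelope for a domain's hidden contact point.

    Returns None for unknown domains. The "to" field is consumed only
    by the registrar's mailer; the sender never sees it.
    """
    real_address = REGISTRANT_CONTACTS.get(domain)
    if real_address is None:
        return None
    return {
        "to": real_address,      # internal use by the mailer only
        "reply_to": sender,      # lets the registrant answer directly
        "subject": "Contact request for " + domain,
        "body": body,
    }

envelope = relay_message("example.com", "visitor@mail.test", "Hello!")
```

Because only the registrar's mail system ever reads the "to" field, the registrant stays reachable for operational and business purposes without any personal data appearing in a public directory.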

Comments on ICANN's proposals are being received until January 29. You can read our comment here.

  • 1. There are actually two versions of Model 2 presented: one that would apply only if the registrant, registry, or registrar is in Europe (which is also the suggested scope of Model 1), and one that would apply globally. Similarly, options are given for Model 2 to apply either just to individual registrants, or to all registrants. Given that there are over 100 countries with omnibus data protection laws (a number that is growing), many of which are based on the European model, there seems to be little sense in limiting any of the proposals to Europe. Neither does it make sense to limit the proposals to individual registrants: even if it were possible to draw a clear line between individual and organizational registrations (it often isn't), organizational registrations may contain personally identifiable information about corporate officers or contact persons.

Google’s Advanced Protection Program Offers Security Options For High-Risk Users

Security is not a one-size-fits-all proposition, and features that are prohibitively inconvenient for some could be critical for others. For most users, standard account security settings are sufficient protection against common threats. But for the small minority of users who might be targeted individually—like journalists, policy makers, campaign staff, activists, people with abusive exes, or victims of stalking—standard security options won’t cut it.

For those users, Google recently added the option to add stronger protections to personal Google accounts with the Advanced Protection Program. Advanced Protection is a big step in the right direction to provide different levels of protection for different people, and other companies and platforms should follow suit.

An account with Advanced Protection turned on will change in three main ways. First, when you sign in, you’ll need to use a physical security key in addition to your password. Advanced Protection also requires you to have a second back-up key on hand. Second, you’ll only be able to use Gmail and other Google services on the Chrome browser, and third-party apps won’t be able to access your Gmail or Google Drive. And third, if you ever get locked out of your account, regaining access will take more time and require more types of identity verification. Respectively, these measures protect against phishing, malicious apps that try to trick you into granting them excessive permissions, and attackers who try to use the account recovery process to take over your account.

This adds up to the best option available to individuals who want to give their personal Google accounts the highest level of security without needing technical expertise or deep pockets.

Of course, Advanced Protection comes with significant trade-offs and limitations. Starting to use Advanced Protection requires two security keys and some set-up time. For people not used to carrying around and keeping track of security keys, that can pose an inconvenience when signing in. And once signed in, users who rely on non-Google apps or clients to use their Gmail or Google Calendar will lose some of that functionality. This is especially the case for Mac and iPhone users: since native Apple applications do not currently support two-factor authentication with security keys, iOS users will have to take arduous extra steps to make sure their apps and contacts are set up. Finally, if you ever lose your security keys or forget your password, the lengthy account recovery process will lock you out of your account for days. Expect the specifics to change, however, as Google updates the program’s protections and functionality going forward.

By definition, Advanced Protection won’t be for everyone. Using it means accepting more inconvenience in exchange for higher security. But if an account breach could threaten your reputation, career, or even your life, it is an option worth considering. And if you turn Advanced Protection on and it turns out not to be the right fit, you can turn it off at any time.

Dark Caracal: Good News and Bad News

Yesterday, EFF and Lookout announced a new report, Dark Caracal, that uncovers a new, global malware espionage campaign. One aspect of that campaign was the use of malicious, fake apps to impersonate legitimate popular apps like Signal and WhatsApp. Some readers had questions about what this means for them. This blog post is here to answer those questions and dive further into the Dark Caracal report.

Read the full Dark Caracal report here

First, the good news: Dark Caracal does not mean that Signal or WhatsApp themselves are compromised in any way. It only means that attackers found new, insidious ways to create and distribute fake Android versions of them. (iOS is not affected.) If you downloaded your apps from Google’s official app store, Google Play, then you are almost certainly in the clear. The threat uncovered in the Dark Caracal report involves “trojanized” apps: fake apps made to look like real, trusted ones. These malicious spoofs often ask for excessive permissions and carry malware. Such spoofed versions of Signal and WhatsApp were involved in the Dark Caracal campaign.

The malicious actors behind Dark Caracal got these fake, malicious apps onto people’s phones by spearphishing. Several types of phishing emails directed people—including military personnel, activists, journalists, and lawyers—to a fake, app-store-like page where malicious Android apps waited. There is even evidence that, in some cases, Dark Caracal used physical access to people’s phones to install the fake apps. Again, if you downloaded your apps from the official app store, you can rest easy that this has likely not affected you.

And now the bad news: Dark Caracal has wide-reaching implications for how state-sponsored surveillance and malware works. Most people do not have to worry about this very specific threat. But for the small minority of users who may be directly targeted by nation-states or other skilled, motivated adversaries—and for the malware researchers who try to track those adversaries down—the Dark Caracal report uncovers a new infrastructure that makes it even harder to attribute attacks and malware campaigns to a particular nation or actor. More details are available in the report.

Dark Caracal is also a reminder that most modern hacking requires the unwitting participation of the user. The most dangerous thing in the online environment is not necessarily complex, headline-grabbing vulnerabilities, but well-crafted phishing messages and fake apps that trick users into handing over log-in credentials and granting excessive permissions. Keep an eye out for links, attachments, and apps pretending to be something they’re not, and make sure your friends, neighbors, and others in your community are informed too.

An Open Letter to Our Community On Congress’s Vote to Extend NSA Spying From EFF Executive Director Cindy Cohn

Dear friends,

Today, the United States Congress struck a significant blow against the basic human right to read, write, learn, and associate free of government’s prying eyes. 

Goaded by those who let fear override democratic principles, some members of Congress shuttered public debate in order to pass a bill that extends the National Security Agency’s unconstitutional Internet surveillance for six years. 

This means six more years of warrantless surveillance under Section 702 of the FISA Amendments Act. This is a long-abused law marketed as targeting foreigners abroad but which—intentionally and by design—subjects a tremendous amount of our Internet activities to government review, as they pass through key Internet checkpoints, and as they are stored by providers like Google and Facebook. Ultimately, the NSA uses Section 702 to sweep in and retain the communications of countless non-suspect Americans. 

Today’s action also means six more years of FBI access to giant databases of these NSA-collected communications, for purposes of routine domestic law enforcement that stray far from the original justification of national security. 

It didn’t have to be this way. Forward-thinking U.S. legislators from both sides of the aisle negotiated compromise bills that, while far from ideal, would have reined in some of the worst abuses of NSA surveillance powers while ensuring our intelligence agents could still do their jobs. But leadership from both Houses prevented the full Congress from considering these measures. For example, Senators were denied the opportunity to consider the USA Rights Act, and Representatives never had an opportunity to vote on the Poe-Lofgren Amendment during Thursday's floor vote. Both legislative vehicles offered sensible reforms that would have advanced the privacy of innocent American technology users. This procedural maneuvering also meant that your opportunity to make your voices heard was greatly truncated.   

While this debate took place in the halls of Washington, the ramifications are global. Millions of people around the world suffer under the NSA’s dragnet data collection. EFF fights for the rights of technology users everywhere, and our mission will not be complete until innocent users worldwide can communicate with dignity and privacy. Today Congress demonstrated its lack of regard for the human rights to privacy and association. And it shirked its duty to protect Americans’ rights under the Constitution.

We offer this response to the National Security Agency and its allies in Congress: enjoy it while you can because it won’t last. 

Today’s Congressional failure redoubles our commitment to seek justice through the courts and through the development and spread of technology that protects our privacy and security.

First, in the courts. We’ve actively litigated against NSA spying since 2005. Our flagship lawsuit against mass surveillance, Jewel v. NSA, is currently in discovery in the District Court, having survived multiple challenges by the government. The government even sought in October to indefinitely delay responding to demands from the court to turn over documentation of surveillance, but the court refused. Instead, the government faces a looming deadline to produce documents to the court: February 16, 2018. We’re also confronting NSA mass spying through use of the Freedom of Information Act, supporting the other cases against mass spying, and participating in the few criminal court cases where the government has admitted using evidence collected under Section 702.

We also continue to search for new cases and arguments to challenge NSA mass spying in court—stepping up to the legal challenge of finding people who have admissible evidence that they have been surveilled and can pass the hurdle of standing that has blocked so many before. 

We aim to bring mass surveillance to the Supreme Court. By showcasing the unconstitutionality of the NSA’s collect-it-all approach to tapping the Internet, we’ll seek to end the dragnet surveillance of millions of innocent people. We know that the wheels of justice turn slowly, especially when it comes to impact litigation against the NSA, but we’re in this for the long run. 

Second, we’ll continue to harden digital platforms to make them resistant to surveillance and increase the ability of everyone to be digitally secure. We will promote widespread encryption through EFF tools like Certbot and HTTPS Everywhere, and we’ll promote the adoption of security tools through education and outreach. We’ll stand up to ongoing FBI efforts to block or deter our access to strong encryption. Together, we can make it more difficult and more costly for the NSA’s spying eyes to ensnare innocent people. And we will help technology users increase their digital security against bad actors.

Finally, we will continue to work with our allies in Congress to expose and restrain NSA surveillance. There is much to do on Capitol Hill, long before the next reauthorization debate in 2023.

Our vision is for a secure digital world, free from government surveillance and censorship. You deserve to have a private conversation online, just as you can have one offline. You deserve the right to associate and organize with others, as well as to read and research, free of government snooping. While Congress failed the American people today, EFF will not. With the support of our more than 40,000 members, we are stronger and more ready than ever to keep up this fight.

Cindy Cohn   
Executive Director
Electronic Frontier Foundation
January 16, 2018

Public domain image from Trevor Paglen

EFF to Supreme Court: Protect the Privacy of Cross-Border Data

The Electronic Frontier Foundation urged the Supreme Court today to hold that Microsoft cannot be forced by the U.S. government to disclose the contents of users’ emails stored on the company’s computers in Dublin, Ireland.

The stakes for user privacy in the court’s decision are extremely high. Governments around the world may feel empowered to snoop on the countless emails, chats, and other online communications that cross international boundaries if the court sides with the government.

At the center of the case, the U.S. government is attempting to overturn a Second Circuit decision holding that police cannot use U.S. warrants to compel U.S. Internet companies to disclose users’ email and digital content stored outside the United States. The appellate court reasoned that this extraterritorial application of a U.S. warrant would exceed the process Congress created — the Electronic Communications Privacy Act (ECPA) — to protect people’s privacy while allowing law enforcement access to emails. The case is titled United States v. Microsoft, and is often called “the Microsoft Ireland case.” EFF joined the ACLU, Brennan Center, Restore the Fourth, and R Street Institute to file the amicus brief with the Supreme Court.

The U.S. government’s unilateral approach to obtaining Microsoft users’ emails would bypass the international procedures that it has previously agreed to. Specifically, the U.S. has signed treaties with 65 individual countries and the European Union, called Mutual Legal Assistance Treaties (MLATs), that enable the U.S. to apply to foreign governments where evidence of a crime is located, and ask that country to assist in collecting the evidence under its own privacy laws. The countries the United States has partnered with can similarly request that the U.S. Department of Justice help them collect evidence stored in the United States. Under MLATs, foreign countries must follow the privacy rules established by U.S. law, including the requirement under the Fourth Amendment that law enforcement obtain a warrant to search and seize content. These MLATs recognize the importance of other countries’ privacy and human rights laws. Ireland has advised the U.S. Supreme Court that it believes the MLAT process is the most appropriate means for the U.S. government to obtain the emails that Microsoft stores in Ireland.

To evade using MLATs, and get around the fact that U.S. warrants typically do not have international reach, the U.S. government is arguing that a Fourth Amendment search and seizure only occurs when Microsoft, within the United States, delivers emails to officers of the U.S. government. That is simply not the case. Rather, if Microsoft copies or moves data from Ireland to the United States on demand from the U.S. government, that is a search and seizure, and it occurs abroad. As our amicus brief states:

Furthermore, the Government’s argument that such collection and copying does not “expand[ ] [Microsoft’s] authority over those emails” (id.) ignores that it does expand the government’s authority over them. A government-directed exercise of dominion over an individual’s private communications, by itself, is a Fourth Amendment seizure.

EFF has long worked to ensure the greatest privacy protection for cross-border data. In the Microsoft Ireland case, we filed amicus briefs before the district court and the appellate court. We are also fighting for privacy protections at the international level in the Council of Europe, where a new treaty could allow direct foreign law enforcement access to data stored in other countries’ territories. And EFF is advocating against overbroad DOJ legislative proposals to access online content stored abroad.

We urge the Supreme Court to hold the government accountable for following the rules set by Congress, and by international treaty, when law enforcement agencies seek access to our private conversations stored outside the United States. The court is expected to decide this case during the spring 2018 term.

We thank our counsel Brett J. Williamson, Nathaniel Asher, David K. Lukmire, and Cara Gagliano of O’Melveny & Myers.

Related Cases: In re Warrant for Microsoft Email Stored in Dublin, Ireland

