Responsibility Deflected, the CLOUD Act Passes

UPDATE, March 23, 2018: President Donald Trump signed the $1.3 trillion government spending bill—which includes the CLOUD Act—into law Friday morning.

“People deserve the right to a better process.”

Those are the words of Jim McGovern, representative for Massachusetts and member of the House of Representatives Committee on Rules, when, after 8:00 PM EST on Wednesday, he and his colleagues were handed a 2,232-page bill to review and approve for a floor vote by the next morning.

In the final pages of the bill—meant only to appropriate future government spending—lawmakers snuck in a separate piece of legislation that made no mention of funds, salaries, or budget cuts. Instead, this final, tacked-on piece of legislation will erode privacy protections around the globe.

This bill is the CLOUD Act. It was never reviewed or marked up by any committee in either the House or the Senate. It never received a hearing. It was robbed of a stand-alone floor vote because Congressional leadership decided, behind closed doors, to attach this un-vetted, unrelated data bill to the $1.3 trillion government spending bill. Congress has a professional responsibility to listen to the American people’s concerns, to represent their constituents, and to debate the merits and concerns of this proposal amongst themselves, and this week, they failed.

On Thursday, the House approved the omnibus government spending bill, with the CLOUD Act attached, in a 256-167 vote. The Senate followed up late that night with a 65-32 vote in favor. All the bill requires now is the president’s signature.

Make no mistake—you spoke up. You emailed your representatives. You told them to protect privacy and to reject the CLOUD Act, including any efforts to attach it to must-pass spending bills. You did your part. It is Congressional leadership—negotiating behind closed doors—who failed.

Because of this failure, U.S. and foreign police will have new mechanisms to seize data across the globe. Because of this failure, your private emails, your online chats, your Facebook, Google, Flickr photos, your Snapchat videos, your private lives online, your moments shared digitally between only those you trust, will be open to foreign law enforcement without a warrant and with few restrictions on using and sharing your information. Because of this failure, U.S. laws will be bypassed on U.S. soil.

As we wrote before, the CLOUD Act is a far-reaching, privacy-upending piece of legislation that will:

  • Enable foreign police to collect and wiretap people's communications from U.S. companies, without obtaining a U.S. warrant.
  • Allow foreign nations to demand personal data stored in the United States, without prior review by a judge.
  • Allow the U.S. president to enter "executive agreements" that empower police in foreign nations that have weaker privacy laws than the United States to seize data in the United States while ignoring U.S. privacy laws.
  • Allow foreign police to collect someone's data without notifying them about it.
  • Empower U.S. police to grab any data, regardless of whether it is a U.S. person's or not, no matter where it is stored.

And, as we wrote before, this is how the CLOUD Act could work in practice:

London investigators want the private Slack messages of a Londoner they suspect of bank fraud. The London police could go directly to Slack, a U.S. company, to request and collect those messages. The London police would not necessarily need prior judicial review for this request. The London police would not be required to notify U.S. law enforcement about this request. The London police would not need a probable cause warrant for this collection.

Predictably, in this request, the London police might also collect Slack messages written by U.S. persons communicating with the Londoner suspected of bank fraud. Those messages could be read, stored, and potentially shared, all without the U.S. person knowing about it. Those messages, if shared with U.S. law enforcement, could be used to criminally charge the U.S. person in a U.S. court, even though a warrant was never issued.

This bill has large privacy implications both in the U.S. and abroad. It was never given the attention it deserved in Congress.

As Rep. McGovern said, the people deserve the right to a better process.

The New Frontier of E-Carceration: Trading Physical for Virtual Prisons

Criminal justice advocates have been working hard to abolish cash bail schemes and dismantle the prison industrial complex. And one of the many tools touted as an alternative to incarceration is electronic monitoring or “EM”: a form of digital incarceration, often using a wrist bracelet or ankle “shackle” that can monitor a subject’s location, blood alcohol level, or breath. But even as the use of this new incarceration technology expands, regulation and oversight over it—and the unprecedented amount of information it gathers—still lags behind.

There are many different kinds of electronic monitoring schemes:

  1. Active GPS tracking, where the transmitter monitors a person using satellites and reports location information in real time at set intervals.
  2. Passive GPS tracking, where the transmitter tracks a person's activity and stores location information for download the next day.
  3. Radio Frequency ("RF") is primarily used for “curfew monitoring.” A home monitoring unit is set to detect a bracelet within a specified range and then sends confirmation to a monitoring center.
  4. Secure Continuous Remote Alcohol Monitoring ("SCRAM"), which analyzes a person's perspiration to extrapolate blood alcohol content once every hour.
  5. Breathalyzer monitoring reviews and tests a subject’s breath at random to estimate BAC and typically has a camera.

Monitors are commonly a condition of pre-trial release, or post-conviction supervision, like probation or parole. They are sometimes a strategy to reduce jail and prison populations. Recently, EM’s applications have widened to include juveniles, the elderly, individuals accused or convicted of DUIs or domestic violence, immigrants awaiting legal proceedings, and adults in drug programs.

This increasingly wide use of EM by law enforcement remains relatively unchecked. That’s why EFF, along with over 50 other organizations, has endorsed a set of Guidelines for Respecting the Rights of Individuals on Electronic Monitoring. The guidelines are a multi-stakeholder effort led by the Center for Media Justice's Challenging E-carceration project to outline the legal and policy considerations that law enforcement’s use of EM raises for monitored individuals’ digital rights and civil liberties.

For example, a paramount concern is the risk of racial discrimination. People of color tend to be placed on EM far more often than their white counterparts. For example, Black people in Cook County, IL make up 24% of the population, yet represent 70% of people on EM. This ratio mirrors the similarly skewed racial disparity in physical incarceration.

Another concern is cost shifting. People on EM often pay user fees ranging from $3-$35/day along with $100-$200 in setup charges, shifting the costs of electronic incarceration from the government to the monitored and their families. Usually, this disproportionately affects poor communities of color who are already over-policed and over-represented within the criminal justice and immigration systems.

Then there are the consequences to individual privacy that threaten the rights not just of the monitored, but also of those who interact with them. When children, friends, or family members rely on individuals on EM for transportation or housing, they often suffer privacy intrusions from the same mechanisms that monitor their loved ones.

Few jurisdictions have regulations limiting access to location tracking data and its attendant metadata, or specifying how long such information should be kept and for what purpose. Private companies that contract to provide EM to law enforcement typically store location data on monitored individuals and may share or sell clients’ information for a profit. This jeopardizes the safety and civil rights not just of the monitored, but also of their families, friends, and roommates who live, work, or socialize with them.

Just one example of how location information stored over time can provide an intimate portrait of someone’s life, and can even be mined by machine learning to detect deviations from regular travel habits, is featured in this BI analytics marketing video.

So, what do we do about EM? We must demand strict constitutional safeguards against its misuse, especially because “GPS monitoring generates [such] a precise, comprehensive record of a person’s public movements that reflects a wealth of detail about her familial, political, professional, religious, and sexual associations,” as the U.S. Supreme Court recognized in U.S. v. Jones. A 2014 Pew Research Center study found that 82% of Americans consider the details of their physical location over time to be sensitive information, including 50% who consider it “very sensitive.” Thus, law enforcement should be required to get a warrant or other court order before using EM to track an individual’s location.

For criminal defense attorneys looking for more resources on fighting EM, review our one-page explainer and practical advice. And if you seek amicus support in your case, email us with the following information:

  1. Case name & jurisdiction
  2. Case timeline/pending deadlines
  3. Defense Attorney contact information
  4. Brief description of your EM issue 

Related Cases: US v. Jones

How Congress Censored the Internet

In Passing SESTA/FOSTA, Lawmakers Failed to Separate Their Good Intentions from Bad Law

Today was a dark day for the Internet.

The U.S. Senate just voted 97-2 to pass the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA, H.R. 1865), a bill that silences online speech by forcing Internet platforms to censor their users. As lobbyists and members of Congress applaud themselves for enacting a law tackling the problem of trafficking, let’s be clear: Congress just made trafficking victims less safe, not more.

The version of FOSTA that just passed the Senate combined an earlier version of FOSTA (what we call FOSTA 2.0) with the Stop Enabling Sex Traffickers Act (SESTA, S. 1693). The history of SESTA/FOSTA—a bad bill that turned into a worse bill and then was rushed through votes in both houses of Congress—is a story about Congress’ failure to see that its good intentions can result in bad law. It’s a story of Congress’ failure to listen to the constituents who’d be most affected by the laws it passed. It’s also the story of some players in the tech sector choosing to settle for compromises and half-wins that will put ordinary people in danger.

Silencing Internet Users Doesn’t Make Us Safer

SESTA/FOSTA undermines Section 230, the most important law protecting free speech online. Section 230 protects online platforms from liability for some types of speech by their users. Without Section 230, the Internet would look very different. It’s likely that many of today’s online platforms would never have formed or received the investment they needed to grow and scale—the risk of litigation would have simply been too high. Similarly, in absence of Section 230 protections, noncommercial platforms like Wikipedia and the Internet Archive likely wouldn’t have been founded given the high level of legal risk involved with hosting third-party content.

The bill is worded so broadly that it could even be used against platform owners that don’t know that their sites are being used for trafficking.

Importantly, Section 230 does not shield platforms from liability under federal criminal law. Section 230 also doesn’t shield platforms across-the-board from liability under civil law: courts have allowed civil claims against online platforms when a platform directly contributed to unlawful speech. Section 230 strikes a careful balance between enabling the pursuit of justice and promoting free speech and innovation online: platforms can be held responsible for their own actions, and can still host user-generated content without fear of broad legal liability.

SESTA/FOSTA upends that balance, opening platforms to new criminal and civil liability at the state and federal levels for their users’ sex trafficking activities. The platform liability created by new Section 230 carve outs applies retroactively—meaning the increased liability applies to trafficking that took place before the law passed. The Department of Justice has raised concerns [.pdf] about this violating the Constitution’s Ex Post Facto Clause, at least for the criminal provisions.

The bill also expands existing federal criminal law to target online platforms where sex trafficking content appears. The bill is worded so broadly that it could even be used against platform owners that don’t know that their sites are being used for trafficking.

Finally, SESTA/FOSTA expands federal prostitution law to cover those who use the Internet to “promote or facilitate prostitution.”

The Internet will become a less inclusive place, something that hurts all of us.

It’s easy to see the impact that this ramp-up in liability will have on online speech: facing the risk of ruinous litigation, online platforms will have little choice but to become much more restrictive in what sorts of discussion—and what sorts of users—they allow, censoring innocent people in the process.

What forms that erasure takes will vary from platform to platform. For some, it will mean increasingly restrictive terms of service—banning sexual content, for example, or advertisements for legal escort services. For others, it will mean over-reliance on automated filters to delete borderline posts. No matter what methods platforms use to mitigate their risk, one thing is certain: when platforms choose to err on the side of censorship, marginalized voices are censored disproportionately. The Internet will become a less inclusive place, something that hurts all of us.

Big Tech Companies Don’t Speak for Users

SESTA/FOSTA supporters boast that their bill has the support of the technology community, but it’s worth considering what they mean by “technology.” IBM and Oracle—companies whose business models don’t heavily rely on Section 230—were quick to jump onboard. Next came the Internet Association, a trade association representing the world’s largest Internet companies, companies that will certainly be able to survive SESTA while their smaller competitors struggle to comply with it.

Those tech companies simply don’t speak for the Internet users who will be silenced under the law. And tragically, the people likely to be censored the most are trafficking victims themselves.

SESTA/FOSTA Will Put Trafficking Victims in More Danger

Throughout the SESTA/FOSTA debate, the bills’ proponents provided little to no evidence that increased platform liability would do anything to reduce trafficking. On the other hand, the bills’ opponents have presented a great deal of evidence that shutting down platforms where sexual services are advertised exposes trafficking victims to more danger.

Freedom Network USA—the largest national network of organizations working to reduce trafficking in their communities—spoke out early to express grave concerns [.pdf] that removing sexual ads from the Internet would also remove the best chance trafficking victims had of being found and helped by organizations like theirs as well as law enforcement agencies.

Reforming [Section 230] to include the threat of civil litigation could deter responsible website administrators from trying to identify and report trafficking.

It is important to note that responsible website administration can make trafficking more visible—which can lead to increased identification. There are many cases of victims being identified online—and little doubt that without this platform, they would not have been identified. Internet sites provide a digital footprint that law enforcement can use to investigate trafficking into the sex trade, and to locate trafficking victims. When websites are shut down, the sex trade is pushed underground and sex trafficking victims are forced into even more dangerous circumstances.

Freedom Network was far from alone. Since SESTA was introduced, many experts have chimed in to point out the danger that SESTA would put all sex workers in, including those who are being trafficked. Sex workers themselves have spoken out too, explaining how online platforms have literally saved their lives. Why didn’t Congress bring those experts to its deliberations on SESTA/FOSTA over the past year?

While we can’t speculate on the agendas of the groups behind SESTA, we can study those same groups’ past advocacy work. Given that history, one could be forgiven for thinking that some of these groups see SESTA as a mere stepping stone to banning pornography from the Internet or blurring the legal distinctions between sex work and trafficking.

In all of Congress’ deliberations on SESTA, no one spoke to the experiences of the sex workers that the bill will push off of the Internet and onto the dangerous streets. It wasn’t surprising, then, when the House of Representatives presented its “alternative” bill, one that targeted those communities more directly.

“Compromise” Bill Raises New Civil Liberties Concerns

In December, the House Judiciary Committee unveiled its new revision of FOSTA. FOSTA 2.0 had the same inherent flaw that its predecessor had—attaching more liability to platforms for their users’ speech does nothing to fight the underlying criminal behavior of traffickers.

In a way, FOSTA 2.0 was an improvement: the bill was targeted only at platforms that intentionally facilitated prostitution, and so would affect a narrower swath of the Internet. But the damage it would do was much more blunt: it would expand federal prostitution law such that online platforms would have to take down any posts that could potentially be in support of any sex work, regardless of whether there’s an indication of force or coercion, or whether minors were involved.

FOSTA 2.0 didn’t stop there. It criminalized using the Internet to “promote or facilitate” prostitution. Activists who work to reduce harm in the sex work community—by providing health information, for example, or sharing lists of dangerous clients—were rightly worried that prosecutors would attempt to use this law to put their work in jeopardy.

Regardless, a few holdouts in the tech world believed that their best hope of stopping SESTA was to endorse a censorship bill that would do slightly less damage to the tech industry.

They should have known it was a trap.

SESTA/FOSTA: The Worst of Both Worlds

That brings us to last month, when a new bill combining SESTA and FOSTA was rushed through congressional procedure and overwhelmingly passed the House.

When the Department of Justice is the group urging Congress not to expand criminal law and Congress does it anyway, something is very wrong.

Thousands of you picked up your phone and called your senators, urging them to oppose the new Frankenstein bill. And you weren’t alone: EFF, the American Civil Liberties Union, the Center for Democracy and Technology, and many other experts pleaded with Congress to recognize the dangers to free speech and online communities that the bill presented.

Even the Department of Justice wrote a letter urging Congress not to go forward with the hybrid bill [.pdf]. The DOJ said that the expansion of federal criminal law in SESTA/FOSTA was simply unnecessary, and could possibly undermine criminal investigations. When the Department of Justice is the group urging Congress not to expand criminal law and Congress does it anyway, something is very wrong.

Assuming that the president signs it into law, SESTA/FOSTA is the most significant rollback to date of the protections for online speech in Section 230. We hope that it’s the last, but it may not be. Over the past year, we’ve seen more calls than ever to create new exceptions to Section 230.

In any case, we will continue to fight back against proposals that undermine our right to speak and gather online. We hope you’ll stand with us.

How To Change Your Facebook Settings To Opt Out of Platform API Sharing

You shouldn't have to do this. You shouldn't have to wade through complicated privacy settings in order to ensure that the companies with which you've entrusted your personal information are making reasonable, legal efforts to protect it. But Facebook has allowed third parties to violate user privacy on an unprecedented scale, and, while legislators and regulators scramble to understand the implications and put limits in place, users are left with the responsibility to make sure their profiles are properly configured.

Over the weekend, it became clear that Cambridge Analytica, a data analytics company, got access to more than 50 million Facebook users' data in 2014. The data was overwhelmingly collected, shared, and stored without user consent. The scale of this violation of user privacy reflects how Facebook's terms of service and API were structured at the time. Make no mistake: this was not a data breach. This was exactly how Facebook's infrastructure was designed to work.

In addition to raising questions about Facebook's role in the 2016 presidential election, this news is a reminder of the inevitable privacy risks that users face when their personal information is captured, analyzed, indefinitely stored, and shared by a constellation of data brokers, marketers, and social media companies.

Tech companies can and should do more to protect users, including giving users far more control over what data is collected and how that data is used. That starts with meaningful transparency and allowing truly independent researchers—with no bottom line or corporate interest—access to work with, black-box test, and audit their systems. Finally, users need to be able to leave when a platform isn’t serving them — and take their data with them when they do.

Of course, you could choose to leave Facebook entirely, but for many that is not a viable solution. For now, if you'd like to keep your data from going through Facebook's API, you can take control of your privacy settings. Keep in mind that this disables ALL platform apps (like Farmville, Twitter, or Instagram) and you will not be able to log into sites using your Facebook login.

Log into Facebook and visit the App Settings page (or go there manually via the Settings Menu > Apps).

From there, click the "Edit" button under "Apps, Websites and Plugins." Click "Disable Platform."

A modal will appear called “Turn Platform Off,” with a description of the Platform features. Click the “Disable Platform” button.

If disabling platform entirely is too much, there is another setting that can help: limiting the personal information accessible by apps that others use. By default, other people who can see your info can bring it with them when they use apps, and your info becomes available to those apps. You can limit this as follows.

From the same page, click "Edit" under "Apps Others Use." A modal will appear with checkboxes for the categories of information that others' apps can access, such as "Bio," "Birthday," and "If I'm online." Uncheck the types of information that you don't want others' apps to be able to access (for most people reading this post, that will mean unchecking every category), then click the "Save" button.

Advocating for Change: How Lucy Parsons Labs Defends Transparency in Chicago

Here at the Electronic Frontier Alliance, we’re lucky to have incredible member organizations engaging in advocacy on our issues across the U.S. One of those groups in Chicago, Lucy Parsons Labs (LPL), has done incredible work taking on a range of civil liberties issues. They’re a dedicated group of advocates volunteering to make their world (and the Windy City) a better, more equitable place.

We sat down with one of the founders of LPL, Freddy Martinez, to gain a better understanding of the Lab and how they use their collective powers for good. 

How would you describe Lucy Parsons Labs? How did the organization get started, and what need were you trying to fill?

The lab got started four years back when a few people doing digital security training in Chicago saw there was a need for a more technical group that could bridge the gap between advocacy and technology. We each had areas of interest and expertise that we were doing activism around, and it grew pretty organically from there. For example, lawmakers would try to pass a bill without fully understanding the implications that the piece of legislation would have, technologically or otherwise. We began to work together on these projects to educate lawmakers and inform the public on these issues as a friend group, and the organization grew out of that as we added or expanded projects. We do a lot of public records requests and work on police transparency, but our group has broad, varied interests. The common thread that runs through the work is that we have a lot of expertise in a lot of different advocacy areas, and we leverage that expertise to make the world better. It lets us sail in many different waters.

LPL participates in the Electronic Frontier Alliance (EFA), a network of grassroots digital rights groups around the country. Your work in Chicago runs the gamut from advocating for transparency in the criminal justice system and investigating civil asset forfeiture, to operating a SecureDrop system for whistleblowers and investigating the use of cell-site simulators by the Chicago Police Department. Given that, how does the EFA play into your work?

I feel that the more the organization grows, the more having groups around the country who are building capacity is key to making sure that these projects get done. There’s such a huge amount of work to be done, and having other partners who are interested in various subsections of our work and can help us achieve our goals is really valuable. EFA provides us access to a diverse array of experts, from academics and lawyers to grassroots activists. It gives us a lot of leverage, and lets us share our subject matter expertise in ways we wouldn’t be able to if we were going it alone.

Let’s talk surveillance. LPL has done incredible work via the open records process to expose the use of cell-site simulators (sometimes referred to as “Stingrays” or IMSI Catchers) by the Chicago Police Department. Can you tell us about how you started investigating, and why these kinds of surveillance need to be brought into the public conversation?

I actually heard of this equipment through news reporting—you would see major cities buying these devices, and then troubling patterns began to emerge. Prosecutors would begin dropping cases because they didn’t want to tell defense attorneys where they got the information or how. There were cases of parallel construction. After noticing this trend, I sent my first public records request to get info on whether the Chicago Police Department had bought any. Instead of following the law, they decided to ignore the request until a judge ordered them to release the records. They were ostensibly used for the war on drugs, but usually they are used overseas in the war on terror. They test these technologies on black and brown populations in war zones, then bring them back to surveil their citizens. It’s an abuse of power and an invasion of privacy. We need to be talking about this. We think that there’s a reason that this stuff is acquired in secret, because people would not be okay with their government doing this if they knew.

LPL has done tons of community work in the anti-surveillance realm as well. Why do you believe educating people about how they can protect themselves from surveillance is important?

I think that you need to give people the breathing room to participate in society safely. Surveillance is usually thought of as an eye in the sky watching over your every move, but it’s so much more pervasive than that. We think about these things in abstract ways, with very little understanding of how they can affect our daily lives. A way to frame the importance of, say, encryption, is to use the example of medical correspondence. If you’re talking to your doctor, you don’t want your messages to be seen by anyone else. It’s critical to have these discussions and decisions made in public so that people can make informed decisions about their lives and privacy. This is a broader responsibility we have as a society, and to each other.

Do you have any advice for other community-based advocacy groups based on your experience?

I have found that being organized is extremely important. We’re a small team of volunteers, so we have to keep things really well documented, especially when dealing with something like public records requests. You also have to, and I can’t stress this enough, enjoy the work and make sure you don’t burn out. It’s a labor of love—you need to be invested in these projects and taking care of yourself in order to do effective activism. Otherwise the work will suffer.

LPL has partnered with other organizations and community groups in the past. What are some ways that you’ve found success in coalition building? What advice would you give to other groups that would like to work more collaboratively with their peer groups?

LPL is also part of a larger group called the Chicago Data Collaborative, where we are working on sharing and analyzing data on the criminal justice system. One of the most important pieces of information to know before embarking on a multi-organization enterprise is that you will have to do a lot of capacity building in order to work together effectively. You’ll need to set aside a lot of time and effort to context build for those not in the know. You must be “in the room” (whether that’s digital or physical) for dedicated, direct collaboration. This is what makes or breaks a good partnership.

Anything else you’d like to add?

I have a bit of advice for people who’d like to get involved in grassroots activism and advocacy, but aren’t sure where to start: You’ll never know when you’re going to come across these projects. Being curious and following your gut will take you down weird rabbit holes. Get started somewhere and follow your gut. You’ll be surprised how far that will take you.

If you’re advocating for digital rights within your community, please explore the Electronic Frontier Alliance and consider joining.

This interview has been lightly edited for length and readability.

A Smattering of Stars in Argentina's First "Who Has Your Back?" ISP Report

It’s Argentina's turn to take a closer look at the practices of their local Internet Service Providers, and how they treat their customers’ personal data when the government comes knocking.

Argentina's ¿Quien Defiende Tus Datos? (Who Defends Your Data?) is a project of Asociación por los Derechos Civiles and the Electronic Frontier Foundation, and is part of a region-wide initiative by leading Iberoamerican digital rights groups to turn a spotlight on how the policies of Internet Service Providers either advance or hinder the privacy rights of users.

The report is based on EFF's annual Who Has Your Back? report, but adapted to local laws and realities. Last year Brazil’s Internet Lab, Colombia’s Karisma Foundation, Paraguay's TEDIC, and Chile’s Derechos Digitales published their own 2017 reports, and ETICAS Foundation released a similar study earlier this year, part of a series across Latin America and Spain.

The report set out to examine which Argentine ISPs best defend their customers. Which are transparent about their policies regarding requests for data? Do any challenge disproportionate demands for their users’ data? Which require a judicial order before handing over personal data? Do any of the companies notify their users when complying with judicial requests? ADC examined publicly posted information, including the privacy policies and codes of practice, from six of the biggest Argentine telecommunications access providers: Cablevisión (Fibertel), Telefónica (Speedy), Telecom (Arnet), Telecentro, IPLAN, and DirecTV (AT&T). Between them, these providers cover 90% of the fixed and broadband market.

Each company was given the opportunity to answer a questionnaire, to take part in a private interview, and to send any additional information it felt appropriate, all of which was incorporated into the final report. ADC’s rankings for Argentine ISPs are below; the full report, which includes details about each company, is available on ADC’s website.

Evaluation Criteria for ¿Quién Defiende tus Datos?

  1. Privacy Policy: whether its privacy policy is easy to understand, whether it tells users which data is being collected, how long these companies store their data, if they notify users when they change their privacy policies, if they publish a note regarding the right of access to personal data, and if they explain how the right of access to a person's data may be exercised.
  2. Transparency: whether they publish transparency reports that are accessible to the public, and how many requests have been received, compiled and rejected, including details about the type of requests, the government agencies that made the requests and the reasons provided by the authority.
  3. Notification: whether they provide any kind of notification to customers of government data demands, with bonus points if they provide that notification a priori.
  4. Judicial Order: whether they require the government to obtain a court order before handing over data, and if they judicially resist data requests that are excessive and do not comply with legal requirements.
  5. Law Enforcement Guidelines: whether they publish their guidelines for law enforcement requests.

Companies in Argentina are off to a good start but still have a way to go to fully protect their customers’ personal data and be transparent about who has access to it. ADC and EFF expect to release this report annually to incentivize companies to improve transparency and protect user data. This way, all Argentines will have access to information about how their personal data is used and how it is controlled by ISPs so they can make smarter consumer decisions. We hope next year’s report will shine with more stars.

Offline/Online Project Highlights How the Oppression Marginalized Communities Face in the Real World Follows Them Online

People in marginalized communities who are targets of persecution and violence—from the Rohingya in Burma to Native Americans in North Dakota—are using social media to tell their stories, but finding that their voices are being silenced online.

This is the tragic and unjust consequence of content moderation policies of companies like Facebook, which is deciding on a daily basis what can be and can’t be said and shown online. Platform censorship has ratcheted up in these times of political strife, ostensibly to combat hate speech and online harassment. Takedowns and closures of neo-Nazi and white supremacist sites have been a matter of intense debate. Less visible is the effect content moderation is having on vulnerable communities.

Flawed rules against hate speech have shut down online conversations about racism and harassment of people of color. Ambiguous “community standards” have prevented Black Lives Matter activists from showing the world the racist messages they receive. Rules against depictions of violence have removed reports about the Syrian war and accounts of human rights abuses of Myanmar's Rohingya. These voices, and the voices of aboriginal women in Australia, Dakota pipeline protestors and many others are being erased online. Their stories and images of mass arrests, military attacks, racism, and genocide are being flagged for takedown by Facebook. The powerless struggle to be heard in the first place; online censorship further marginalizes vulnerable communities. This is not OK.

In response, EFF and Visualizing Impact launched an awareness project today that highlights the online censorship of communities across the globe that are struggling or in crisis. Offline/Online is a series of visuals demonstrating that the inequities and oppression these communities face in the physical world are being replicated online. The visuals can be downloaded and shared on Twitter, Facebook, and Snapchat, or printed out for distribution.

In one, the displacement of nearly 700,000 Rohingya Muslims from Myanmar because of state violence is represented in a photo showing Rohingya children trying to board a small boat. Rohingya refugees, many of whom are women and children, are arriving in Bangladesh with wounds from gunshot and fire, according to the United Nations.

And online? Facebook is an essential means of communication in Myanmar. Activists there and in the West have documented the violence against the Rohingya online, only to have their Facebook posts removed and accounts suspended.

Inequity offline, censorship online.

The EFF/Visualizing Impact project exposes this pattern among Palestinians, aboriginal women in Australia, Native Americans, Dakota pipeline protestors, and black Americans. We believe this is just the tip of the iceberg. We are already far down the slippery slope from judicious moderation of online content to outright censorship. With two billion Facebook users worldwide, there are likely more vulnerable communities being subject to online censorship.

Our hope is that activists, concerned citizens, and online communities will post and share Inequity Offline/Censorship Online visuals (found here) many times, raising awareness about the impact of censorship on marginalized communities—a story that is underreported. Sharing the visuals is a step all of us can take to combat online censorship. It may help restore the speech and voices being erased online.

Geek Squad's Relationship with FBI Is Cozier Than We Thought

After the prosecution of a California doctor revealed the FBI’s ties to a Best Buy Geek Squad computer repair facility in Kentucky, new documents released to EFF show that the relationship goes back years. The records also confirm that the FBI has paid Geek Squad employees as informants.

EFF filed a Freedom of Information Act (FOIA) lawsuit last year to learn more about how the FBI uses Geek Squad employees to flag illegal material when people pay Best Buy to repair their computers. The relationship potentially circumvents computer owners’ Fourth Amendment rights.

The documents released to EFF show that Best Buy officials have enjoyed a particularly close relationship with the agency for at least 10 years. For example, an FBI memo from September 2008 details how Best Buy hosted a meeting of the agency’s “Cyber Working Group” at the company’s Kentucky repair facility.

The memo and a related email show that Geek Squad employees also gave FBI officials a tour of the facility before their meeting, and they make clear that the law enforcement agency’s Louisville Division “has maintained close liaison with the Geek Squad’s management in an effort to glean case initiations and to support the division’s Computer Intrusion and Cyber Crime programs.”

Another document records a $500 payment from the FBI to a confidential Geek Squad informant. This appears to be one of the same payments at issue in the prosecution of Mark Rettenmaier, the California doctor who was charged with possession of child pornography after Best Buy sent his computer to the Kentucky Geek Squad repair facility.

Other documents show that over the years of working with Geek Squad employees, FBI agents developed a process for investigating and prosecuting people who sent their devices to the Geek Squad for repairs. The documents detail a series of FBI investigations in which a Geek Squad employee would call the FBI’s Louisville field office after finding what they believed was child pornography.

The FBI agent would show up, review the images or video, and determine whether they believed the content was illegal. After that, they would seize the hard drive or computer and send it to another FBI field office near where the owner of the device lived. Agents at that local FBI office would then investigate further, and in some cases try to obtain a warrant to search the device.

Some of these reports indicate that the FBI treated Geek Squad employees as informants, identifying them as “CHS,” which is shorthand for confidential human sources. In other cases, the FBI identifies the initial calls as coming from Best Buy employees, raising questions as to whether certain employees had different relationships with the FBI.

The documents released to EFF concerning the investigation into Rettenmaier’s computers do not appear to have been made public in that prosecution, and they raise additional questions about the level of cooperation between the company and law enforcement.

For example, documents reflect that Geek Squad employees only alert the FBI when they happen to find illegal materials during a manual search of images on a device and that the FBI does not direct those employees to actively find illegal content.

But some evidence in the case appears to show Geek Squad employees did make an affirmative effort to identify illegal material. For example, the image found on Rettenmaier’s hard drive was in an unallocated space, which typically requires forensic software to find. Other evidence showed that Geek Squad employees were financially rewarded for finding child pornography. Such a bounty would likely encourage Geek Squad employees to actively sweep for suspicious content.

Although these documents provide new details about the FBI’s connection to Geek Squad and its Kentucky repair facility, the FBI has withheld a number of other documents in response to our FOIA suit. Worse, the FBI has refused to confirm or deny to EFF whether it has similar relationships with other computer repair facilities or businesses, despite our FOIA specifically requesting those records. The FBI has also failed to produce documents that would show whether the agency has any internal procedures or training materials that govern when agents seek to cultivate informants at computer repair facilities.

We plan to challenge the FBI’s stonewalling in court later this spring. In the meantime, you can read the documents produced so far here and here.

Related Cases: FBI Geek Squad Informants FOIA Suit

A Technical Deep Dive: Securing the Automation of ACME DNS Challenge Validation

Earlier this month, Let's Encrypt (the free, automated, open Certificate Authority EFF helped launch two years ago) passed a huge milestone: issuing over 50 million active certificates. And that number is just going to keep growing, because in a few weeks Let's Encrypt will also start issuing “wildcard” certificates—a feature many system administrators have been asking for.

What's A Wildcard Certificate?

In order to validate an HTTPS certificate, a user’s browser checks to make sure that the domain name of the website is actually listed in the certificate. For example, a certificate for example.com has to actually list example.com as a valid domain for that certificate. Certificates can also list multiple domains (e.g., example.com, www.example.com, news.example.com, and so on) if the owner just wants to use one certificate for all of her domains. A wildcard certificate is just a certificate that says “I'm valid for all of the subdomains in this domain” instead of explicitly listing them all off. In the certificate, this is indicated with a wildcard character, written as an asterisk: a wildcard certificate for example.com would say it is valid for *.example.com. That way, a system administrator can get a certificate for their entire domain, and use it on new subdomains they hadn't even thought of when they got the certificate.
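
To make the matching rule concrete, here is a toy sketch in Python (not how a real TLS library performs hostname verification, and the domains are only placeholders): the wildcard stands in for exactly one DNS label, so *.example.com covers www.example.com but not example.com itself or deeper names like a.b.example.com.

    # Toy illustration of wildcard matching: "*" matches exactly one label.
    def wildcard_matches(pattern: str, hostname: str) -> bool:
        p_labels = pattern.split(".")
        h_labels = hostname.split(".")
        if len(p_labels) != len(h_labels):
            return False
        return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

    for host in ["www.example.com", "blog.example.com", "example.com", "a.b.example.com"]:
        print(host, wildcard_matches("*.example.com", host))
    # Prints True for the first two hosts and False for the last two.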

In order to issue wildcard certificates, Let's Encrypt is going to require users to prove their control over a domain by using a challenge based on DNS, the domain name system that translates human-readable domain names (like example.com) into IP addresses (like 192.0.2.1). From the perspective of a Certificate Authority (CA) like Let's Encrypt, there's no better way to prove that you control a domain than by modifying its DNS records, as controlling the domain is the very essence of DNS.

But one of the key ideas behind Let's Encrypt is that getting a certificate should be an automatic process. In order to be automatic, though, the software that requests the certificate will also need to be able to modify the DNS records for that domain. In order to modify the DNS records, that software will also need to have access to the credentials for the DNS service (e.g. the login and password, or a cryptographic token), and those credentials will have to be stored wherever the automation takes place. In many cases, this means that if the machine handling the process gets compromised, so will the DNS credentials, and this is where the real danger lies. In the rest of this post, we'll take a deep dive into the components involved in that process, and what the options are for making it more secure.

How Does the DNS Challenge Work?

At a high level, the DNS challenge works like all the other automatic challenges that are part of the ACME protocol—the protocol that a Certificate Authority (CA) like Let's Encrypt and client software like Certbot use to communicate about what certificate a server is requesting, and how the server should prove ownership of the corresponding domain name. In the DNS challenge, the user requests a certificate from a CA by using ACME client software like Certbot that supports the DNS challenge type. When the client requests a certificate, the CA asks the client to prove ownership over the domain by adding a specific TXT record to its DNS zone. More specifically, the CA sends a unique random token to the ACME client, and whoever has control over the domain is expected to publish it as a TXT record in a predefined record named "_acme-challenge" under the actual domain the user is trying to prove ownership of. As an example, if you were trying to validate the wildcard domain *.example.com, the validation record would be named "_acme-challenge.example.com". When the token value has been added to the DNS zone, the client tells the CA to proceed with validating the challenge, after which the CA performs a DNS query against the authoritative servers for the domain. If the authoritative DNS servers reply with a DNS record that contains the correct challenge token, ownership over the domain is proven and the certificate issuance process can continue.
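
As an illustration of that final lookup step, here is a minimal sketch using the third-party dnspython package; the domain and token are hypothetical placeholders, and a real CA queries the domain's authoritative nameservers directly rather than whatever resolver the local machine happens to use.

    # Sketch: check whether the expected challenge token appears in the
    # "_acme-challenge" TXT record for a domain (requires dnspython).
    import dns.resolver

    domain = "example.com"                                            # hypothetical
    expected_token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"    # hypothetical

    answers = dns.resolver.resolve(f"_acme-challenge.{domain}", "TXT")
    published = {part.decode() for rdata in answers for part in rdata.strings}

    if expected_token in published:
        print("Token found: ownership proven, issuance can continue.")
    else:
        print("Token not found: validation fails.")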

DNS Controls Digital Identity

What makes a DNS zone compromise so dangerous is that DNS is what users’ browsers rely on to know what IP address they should contact when trying to reach your domain. This applies to every service that uses a resolvable name under your domain, from email to web services. When DNS is compromised, a malicious attacker can easily intercept all the connections directed toward your email or other protected service, terminate the TLS encryption (since they can now prove ownership over the domain and get their own valid certificates for it), read the plaintext data, and then re-encrypt the data and pass the connection along to your server. For most people, this would be very hard to detect.

Separate and Limited Privileges

Strictly speaking, in order for the ACME client to handle updates in an automated fashion, the client only needs to have access to credentials that can update the TXT records for "_acme-challenge" subdomains. Unfortunately, most DNS software and DNS service providers do not offer granular access controls that allow for limiting these privileges, or simply do not provide an API to handle automating this outside of the basic DNS zone updates or transfers. This leaves the possible automation methods either unusable or insecure.

A simple trick can help maneuver past these kinds of limitations: using the CNAME record. CNAME records essentially act as links to another DNS record. Let's Encrypt follows the chain of CNAME records and will resolve the challenge validation token from the last record in the chain.

Ways to Mitigate the Issue

Even using CNAME records, the underlying issue exists that the ACME client will still need access to credentials that allow it to modify some DNS record. There are different ways to mitigate this underlying issue, with varying levels of complexity and security implications in case of a compromise. In the following sections, this post will introduce some of these methods while trying to explain the possible impact if the credentials get compromised. With one exception, all of them make use of CNAME records.

Only Allow Updates to TXT Records

The first method is to create a set of credentials with privileges that only allow updating of TXT records. In the case of a compromise, this method limits the fallout to the attacker being able to issue certificates for all domains within the DNS zone (since they could use the DNS credentials to get their own certificates), as well as interrupting mail delivery. The impact on mail delivery stems from mail-specific TXT records, namely SPF, DKIM and its extension ADSP, and DMARC. A compromise of these would also make it easy to deliver phishing emails impersonating a sender from the compromised domain in question.

Use a "Throwaway" Validation Domain

The second method is to manually create CNAME records for the "_acme-challenge" subdomain and point them towards a validation domain that would reside in a zone controlled by a different set of credentials. For example, if you want to get a certificate to cover yourdomain.tld and www.yourdomain.tld, you'd have to create two CNAME records—"_acme-challenge.yourdomain.tld" and "_acme-challenge.www.yourdomain.tld"—and point both of them to an external domain for the validation.

The domain used for the challenge validation should be in an external DNS zone or in a subdelegate DNS zone that has its own set of management credentials. (A subdelegate DNS zone is defined using NS records and it effectively delegates the complete control over a part of the zone to an external authority.)
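
One way to sanity-check this setup, sketched below with the third-party dnspython package, is to confirm that each "_acme-challenge" name really is a CNAME into the throwaway validation zone before triggering validation; the domain names here are the hypothetical ones from the example above.

    # Sketch: verify that the challenge subdomains are CNAMEs into the
    # external validation zone (requires dnspython).
    import dns.resolver

    domains = ["yourdomain.tld", "www.yourdomain.tld"]   # hypothetical
    validation_zone = "validation.example.net."          # hypothetical external zone

    for domain in domains:
        name = f"_acme-challenge.{domain}"
        target = str(dns.resolver.resolve(name, "CNAME")[0].target)
        ok = target.endswith(validation_zone)
        print(f"{name} -> {target} ({'ok' if ok else 'NOT in the validation zone'})")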

The impact of compromise for this method is rather limited. Since the actual stored credentials are for an external DNS zone, an attacker who gets the credentials would only gain the ability to issue certificates for all the domains pointing to records in that zone.

However, figuring out which domains actually do point there is trivial: the attacker would just have to read Certificate Transparency logs and check if domains in those certificates have a magic subdomain pointing to the compromised DNS zone.

Limited DNS Zone Access

If your DNS software or provider allows for creating permissions tied to a subdomain, this could help you to mitigate the whole issue. Unfortunately, at the time of publication the only provider we have found that allows this is Microsoft Azure DNS. Dyn supposedly also has granular privileges, but we were not able to find a lower level of privileges in their service besides “Update records,” which still leaves the zone completely vulnerable.

Route53 and possibly others allow their users to create a subdelegate zone with its own user credentials, point NS records towards the new zone, and point the "_acme-challenge" validation subdomains to it using CNAME records. It’s a lot of work to do the privilege separation correctly using this method, as one would need to go through all of these steps for each domain they would like to use DNS challenges for.


ACME-DNS

As a disclaimer, the software discussed below was written by the author of this post; it is used here as an example of the functionality needed to handle the credentials required for DNS challenge automation in a secure fashion.

The final method is a piece of software called ACME-DNS, written to combat this exact issue, and it is able to mitigate the issue completely. One downside is that it adds one more thing to your infrastructure to maintain, as well as the requirement to have the DNS port (53) open to the public internet. ACME-DNS acts as a simple DNS server with a limited HTTP API. The API only allows updating the TXT records of automatically generated random subdomains; there are no methods to request lost credentials or to update or add other records. It provides two endpoints (a hedged usage sketch follows the list below):

  • /register – This endpoint generates a new subdomain for you to use, accompanied by a username and password. As an optional parameter, the register endpoint takes a list of CIDR ranges to whitelist updates from.
  • /update – This endpoint is used to update the actual challenge token to the server.
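
For a sense of how little an ACME client has to store, here is a hedged sketch of calling these two endpoints with Python's requests library. The endpoint paths come from the list above, but the JSON field names (fulldomain, subdomain, username, password) and the X-Api-User/X-Api-Key headers are assumptions based on the project's documentation at the time of writing; check the current ACME-DNS docs before relying on them.

    # Sketch: register with an ACME-DNS instance once, then publish a
    # challenge token at renewal time (requires the requests package).
    import requests

    ACME_DNS = "https://auth.example.org"   # hypothetical ACME-DNS instance

    # One-time registration: returns credentials plus the generated random
    # subdomain that the "_acme-challenge" CNAME should point to.
    reg = requests.post(f"{ACME_DNS}/register", timeout=10).json()
    print("Point your _acme-challenge CNAME at:", reg["fulldomain"])

    # At every issuance/renewal: publish the current challenge token.
    resp = requests.post(
        f"{ACME_DNS}/update",
        headers={"X-Api-User": reg["username"], "X-Api-Key": reg["password"]},
        json={"subdomain": reg["subdomain"], "txt": "<43-character challenge token>"},
        timeout=10,
    )
    resp.raise_for_status()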

In order to use ACME-DNS, you first have to create A/AAAA records for it, and then point NS records towards it to create a delegation node. After that, you simply create a new set of credentials via the /register endpoint, and point the CNAME record from the "_acme-challenge" validation subdomain of the originating zone towards the newly generated subdomain.

The only credentials saved locally would be the ones for ACME-DNS, and they are only good for updating the exact TXT records for the validation subdomains for the domains on the box. This effectively limits the impact of a possible compromise to the attacker being able to issue certificates for these domains. For more information about ACME-DNS, see the project's documentation.


To alleviate the issues with ACME DNS challenge validation, proposals like assisted-DNS have been discussed in the IETF’s ACME working group, but they currently remain unresolved. Since the only way to limit exposure from a compromise is to restrict the DNS zone credentials to changing only specific TXT records, the current possibilities for securely automating DNS validation are slim. The only sustainable option would be for DNS software and service providers to either implement methods to create more fine-grained zone credentials or provide a completely new type of credential for this exact use case.

The False Teeth of Chrome's Ad Filter

Today Google launched a new version of its Chrome browser with what they call an "ad filter"—which means that it sometimes blocks ads but is not an "ad blocker." EFF welcomes the elimination of the worst ad formats. But Google's approach here is a band-aid response to the crisis of trust in advertising that leaves massive user privacy issues unaddressed. 

Last year, a new industry organization, the Coalition for Better Ads, published user research investigating ad formats responsible for "bad ad experiences." The Coalition examined 55 ad formats, of which 12 were deemed unacceptable. These included various full page takeovers (prestitial, postitial, rollover), autoplay videos with sound, pop-ups of all types, and ad density of more than 35% on mobile. Google is supposed to check sites for the forbidden formats and give offenders 30 days to reform or have all their ads blocked in Chrome. Censured sites can purge the offending ads and request reexamination. 

The Coalition for Better Ads Lacks a Consumer Voice

The Coalition involves giants such as Google, Facebook, and Microsoft, ad trade organizations, as well as adtech companies and large advertisers. Criteo, a retargeter with a history of contested user privacy practices, is also involved, as is content marketer Taboola. Consumer and digital rights groups are not represented in the Coalition.

This industry membership explains the limited horizon of the group, which ignores the non-format factors that annoy and drive users to install content blockers. While people are alienated by aggressive ad formats, the problem has other dimensions. Whether it’s the use of ads as a vector for malware, the consumption of mobile data plans by bloated ads, or the monitoring of user behavior through tracking technologies, users have a lot of reasons to take action and defend themselves.

But these elements are ignored. Privacy, in particular, figured neither in the tests commissioned by the Coalition, nor in their three published reports that form the basis for the new standards. This is no surprise given that participating companies include the four biggest tracking companies: Google, Facebook, Twitter, and AppNexus. 

Stopping the "Biggest Boycott in History"

Some commentators have interpreted ad blocking as the "biggest boycott in history" against the abusive and intrusive nature of online advertising. Now the Coalition aims to slow the adoption of blockers by enacting minimal reforms. Pagefair, an adtech company that monitors adblocker use, estimates 600 million active users of blockers. Some see no ads at all, but most users of the two largest blockers, AdBlock and Adblock Plus, see ads "whitelisted" under the Acceptable Ads program. These companies leverage their position as gatekeepers to the user's eyeballs, obliging Google to buy back access to the "blocked" part of their user base through payments under Acceptable Ads. This is expensive (a German newspaper claims a figure as high as 25 million euros) and is viewed with disapproval by many advertisers and publishers. 

Industry actors now understand that adblocking’s momentum is rooted in the industry’s own failures, and the Coalition is a belated response to this. While nominally an exercise in self-regulation, the enforcement of the standards through Chrome is a powerful stick. By eliminating the most obnoxious ads, they hope to slow the growth of independent blockers.

What Difference Will It Make?

Coverage of Chrome's new feature has focused on the impact on publishers, and on doubts about the Internet’s biggest advertising company enforcing ad standards through its dominant browser. Google has sought to mollify publishers by stating that only 1% of sites tested have been found non-compliant, and has heralded the changed behavior of major publishers like the LA Times and Forbes as evidence of success. But if so few sites fall below the Coalition's bar, it seems unlikely to be enough to dissuade users from installing a blocker. Eyeo, the company behind Adblock Plus, has a lot to lose should this strategy be successful. Eyeo argues that Chrome will only "filter" 17% of the 55 ad formats tested, whereas 94% are blocked by Adblock Plus.

User Protection or Monopoly Power?

The marginalization of egregious ad formats is positive, but should we be worried by this display of power by Google? In the past, browser companies such as Opera and Mozilla took the lead in combating nuisances such as pop-ups, which was widely applauded. Those browsers were not active in advertising themselves. The situation is different with Google, the dominant player in the ad and browser markets.

Google exploiting its browser dominance to shape the conditions of the advertising market raises some concerns. It is notable that the ads Google places on videos in YouTube ("instream pre-roll") were not user-tested and are exempted from the prohibition on "auto-play ads with sound." This risk of a conflict of interest distinguishes the Coalition for Better Ads from, for example, Chrome's monitoring of sites associated with malware and related user protection notifications.

There is also the risk that Google may change position with regard to third-party extensions that give users more powerful options. Recent history justifies such concern: Disconnect and Ad Nauseam have been excluded from the Chrome Store for alleged violations of the Store’s rules. (Ironically, Adblock Plus has never experienced this problem.)

Chrome Falls Behind on User Privacy 

This move from Google will reduce the frequency with which users run into the most annoying ads. Regardless, it fails to address the larger problem of tracking and privacy violations. Indeed, many of the Coalition’s members were active opponents of Do Not Track at the W3C, which would have offered privacy-conscious users an easy opt-out. The resulting impression is that the ad filter is really about the industry trying to solve its adblocking problem, not about addressing users' concerns.

Chrome and Microsoft Edge are now the last major browsers not to offer integrated tracking protection. Firefox introduced this feature last November in Quantum, enabled by default in "Private Browsing" mode with the option to enable it universally. Meanwhile, Apple's Safari browser has Intelligent Tracking Prevention, Opera ships with an ad/tracker blocker for users to activate, and Brave has user privacy at the center of its design. It is a shame that Chrome's user security and safety team, widely admired in the industry, is empowered only to offer protection against outside attackers, but not against commercial surveillance conducted by Google itself and other advertisers. If you are using Chrome (1), you need EFF's Privacy Badger or uBlock Origin to fill this gap.

(1) This article does not address other problematic aspects of Google services. When users sign into Gmail, for example, their activity across other Google products is logged. Worse yet, when users are signed into Chrome their full browser history is stored by Google and may be used for ad targeting. This account data can also be linked to Doubleclick's cookies. The storage of browser history is part of Sync (enabling users access to their data across devices), which can also be disabled. If users desire to use Sync but exclude the data from use for ad targeting by Google, this can be selected under ‘Web And App Activity’ in Activity controls. There is an additional opt-out from Ad Personalization in Privacy Settings.

