House Fails to Protect Americans from Unconstitutional NSA Surveillance

UPDATE, January 12, 2018: The Senate could vote Tuesday on a disastrous NSA surveillance extension bill that violates the Fourth Amendment. Click the link at the bottom of the page to email your Senator today and tell them to oppose bill S. 139.

The House of Representatives cast a deeply disappointing vote today to extend NSA spying powers for the next six years by a 256-164 margin. In a related vote, the House also failed to adopt meaningful reforms on how the government sweeps up large swaths of data that predictably include Americans’ communications.                                                                 

Because of these votes, broad NSA surveillance of the Internet will likely continue, and the government will still have access to Americans’ emails, chat logs, and browsing history without a warrant. Because of these votes, this surveillance will continue to operate in a dark corner, routinely violating the Fourth Amendment and other core constitutional protections.      

This is a disappointment to EFF and all our supporters who, for weeks, have spoken out to defend privacy. And this is a disappointment for the dozens of Congress members who have tried to rein in NSA surveillance, asking that the intelligence community merely follow the Constitution.

Today’s House vote concerned S. 139, a bill to extend Section 702 of the Foreign Intelligence Surveillance Act (FISA), a powerful surveillance authority the NSA relies on to sweep up countless Americans’ electronic communications. EFF vehemently opposed S. 139 for its failure to enact true reform of Section 702.

As passed by the House today, the bill:

  • Endorses nearly all warrantless searches of databases containing Americans’ communications collected under Section 702.
  • Provides a narrow and seemingly useless warrant requirement that applies only to searches in some later-stage criminal investigations, a circumstance which the FBI itself has said almost never happens.
  • Allows for the restarting of “about” collection, an invasive type of surveillance that the NSA ended last year after being criticized by the Foreign Intelligence Surveillance Court for privacy violations.
  • Sunsets in six years, delaying Congress’ best opportunity to debate the limits of NSA surveillance.

You can read more about the bill here.                             

Sadly, the House’s approval of S. 139 was its second failure today. The first was the House’s failure, by a 183-233 vote, to pass an amendment that would have replaced the text of S. 139 with the text of the USA Rights Act, a bill that EFF is proud to support. You can read about that bill here.

The amendment to replace the text of S. 139 with the USA Rights Act was introduced by Reps. Justin Amash (R-MI) and Zoe Lofgren (D-CA) and included more than 40 cosponsors from both sides of the aisle. Its defeat came from both Republicans and Democrats.

S. 139 now heads to the Senate, which we expect will vote by January 19. The Senate has already considered stronger bills to rein in NSA surveillance, and we call on the Senate to reject this terrible bill coming out of the House.

We thank every supporter who lent their voice to defend the Constitution. And we thank every legislator who championed civil liberties in this months-long fight. The debate around surveillance reform has evolved—and will continue to evolve—for years. We thank those who have come to understand that privacy does not come at the price of security. Indeed, we can have both.

Thank you to the scores of representatives who sponsored and co-sponsored the USA Rights Act amendment, or voiced support on the House floor today, including Reps. Amash, Lofgren, Jerrold Nadler, Ted Poe, Jared Polis, Mark Meadows, Tulsi Gabbard, Jim Sensenbrenner, Walter Jones Jr., Thomas Massie, Andy Biggs, Warren Davidson, Mark Sanford, Steve Pearce, Scott Perry, Sheila Jackson Lee, Alex Mooney, Paul Gosar, David Schweikert, Louie Gohmert, Ted Yoho, Joe Barton, Dave Brat, Keith Ellison, Lloyd Doggett, Rod Blum, Tom Garrett Jr., Morgan Griffith, Jim Jordan, Earl Blumenauer, Ro Khanna, Beto O’Rourke, Todd Rokita, Hank Johnson, Blake Farenthold, Mark Pocan, Dana Rohrabacher, Raúl Grijalva, Raúl Labrador, Peter Welch, Tom McClintock, Salud Carbajal, Ted Lieu, Bobby Scott, Pramila Jayapal, and Jody Hice.

Email your Senator today and tell them to uphold your constitutional rights by rejecting S. 139.

Take Action

Tell Your Senator to Reject S. 139

Groups Line Up For Meaningful NSA Surveillance Reform

Multiple nonprofit organizations, policy think tanks, and one company have recently joined ranks to limit broad NSA surveillance. Though our groups work for many causes—freedom of the press, shared software development, universal access to knowledge, equal justice for all—our voices are responding to the same threat: the possible expansion of Section 702 of the FISA Amendments Act.

On January 5, the House Rules Committee introduced S. 139. The bill—which you can read here—is the most recent attempt to expand Section 702, a law that the NSA uses to justify the collection of Americans’ electronic communications during foreign intelligence surveillance. The new proposal borrows some of the worst ideas from prior bills meant to reauthorize Section 702, while adding entirely new bad ideas, too.

Meaningless Warrant Requirements

The new proposal to expand Section 702 fails to protect Americans whose electronic communications are predictably swept up during broad NSA surveillance. Today, the NSA uses Section 702 to target non-U.S. persons not living in the United States, collecting emails both “to” and “from” an individual. Predictably, those emails include messages sent by U.S. persons. The government stores those messages in several databases that—because of a loophole—can then be searched and read by government agents who do not first obtain a warrant, even when those communications are written by Americans.

These searches are called “backdoor” searches because they skirt Americans’ Fourth Amendment rights to a warrant requirement.

The new proposal would require a warrant for such backdoor searches in only the narrowest of circumstances.

According to the bill, FBI agents would only have to obtain search warrants “in connection with a predicated criminal investigation opened by the Federal Bureau of Investigation that does not relate to the national security of the United States.”

That means an FBI agent would only need to get a warrant once she has found enough information to launch a formal criminal investigation. Until that point, if an FBI agent wishes to search through Section 702-collected data that belongs to Americans, she can do so freely without a warrant.

The bill’s narrow warrant requirement runs the Fourth Amendment through a funhouse mirror, flipping its intentions and providing protections only after a search has been made.

“About” Collection

“About” collection is an invasive type of NSA surveillance that the agency ended last year, after years of criticism from the Foreign Intelligence Surveillance Court, which provides judicial oversight on Section 702. This type of collection allows the NSA to tap the Internet’s backbone and collect communications that are simply “about” a targeted individual. The messages do not have to be “to” or “from” the individual.

The new proposal to expand Section 702 regrettably includes a path for the Attorney General and the Director of National Intelligence to restart “about” collection. It is a model that we saw in an earlier Section 702 reauthorization bill in 2017. EFF vehemently opposed that bill, which you can read about here.

Working Together

Today, EFF sent a letter to House of Representatives leadership, lambasting any bill that would extend Section 702 without including robust warrant requirements for backdoor searches. You can read our letter here.

EFF also wrote a letter—joined by Aspiration Tech, Freedom of the Press Foundation, and Internet Archive—to House of Representatives Minority Leader Nancy Pelosi, demanding the same. You can read that letter here.

GitHub, the communal coding company, joined the effort too, sending a letter of its own to Minority Leader Pelosi’s office. Read GitHub’s letter here.

And policy think tanks across America, including the Brennan Center for Justice and the Center for American Progress, have written in opposition to S. 139.

For weeks, surveillance apologists have tried to ram NSA surveillance expansion bills through Congress. They are not letting up.

We will need your help this week more than ever. To start, you can call Leader Pelosi and let her know: any bill to extend Section 702 must include robust warrant requirements for American communications. 

Call today.

Supreme Court Won’t Hear Key Surveillance Case

The Supreme Court announced today that it will not review a lower court’s ruling in United States v. Mohamud, which upheld warrantless surveillance of an American citizen under Section 702 of the Foreign Intelligence Surveillance Act. EFF had urged the Court to take up Mohamud because this surveillance violates core Fourth Amendment protections. The Supreme Court’s refusal to get involved here is disappointing.

Using Section 702, the government warrantlessly collects billions of communications, including those belonging to a large but unknown number of Americans. The Ninth Circuit Court of Appeals upheld this practice only by creating an unprecedented exception to the Fourth Amendment. This exception allows the government to collect Americans’ communications without a warrant when it targets foreigners outside the United States, a practice known as “incidental collection.”

We wish the Supreme Court had stepped in to fix this misguided ruling, but its demurral shouldn’t be taken to mean that Section 702 surveillance is totally fine. Some of the most controversial aspects of these programs have never been reviewed by a public court, let alone the Supreme Court. That includes “backdoor searches,” the practice of searching databases for Americans’ incidentally collected communications. Even in deciding Mohamud, the Ninth Circuit refused to address the constitutionality of backdoor searches.

Thorough judicial review of Section 702 surveillance remains one of EFF’s key priorities. In addition, as Congress nears a vote on the statute’s reauthorization, we’re pushing for legislative reforms to eliminate backdoor searches and other unconstitutional practices.

How to Assess a Vendor's Data Security

Perhaps you’re an office manager tasked with setting up a new email system for your nonprofit, or maybe you’re a legal secretary for a small firm and you’ve been asked to choose an app for scanning sensitive documents: you might be wondering how you can even begin to assess a tool as “safe enough to use.” This post will help you think about how to approach the problem and select the right vendor.

As every organization has unique circumstances and needs, we can’t provide definitive software recommendations or provider endorsements. However, we can offer some advice for assessing a software vendor and for gauging their claims of protecting the security and privacy of your clients and your employees.

If you are signing up for a service where you will be storing potentially sensitive employee or client data, or if you are considering using a mobile or desktop application which will be handling client or employee data, you should make sure that the company behind the product or service has taken meaningful steps to secure that data from misuse or theft, and that they won’t give the data to other parties—including governments or law enforcement—without your knowledge.

If you are the de facto IT person for a small organization—but aren’t sure how to evaluate software before adopting it—here are some questions you can ask vendors before choosing to buy their software. (For the purposes of this post, we will be focusing on concerns relating to the confidentiality of data, as opposed to the integrity and availability of data.)

If the company can’t or won’t answer these questions, they are asking you to trust them based on very little evidence: this is not a good sign.

Here are some general things to keep in mind before investing in software applications for your organization.

When you’re researching vendors, consider the following:

  • Have there been past security issues with or criticisms of the tool?
    • If so, how quickly have they responded to criticisms about their tool? How quickly have they patched or updated the software to fix vulnerabilities? Generally, companies should provide updates to vulnerable software as quickly as possible, though sometimes actually getting the updated software to users can be difficult.
      • Note that criticisms and vulnerabilities are not necessarily a bad sign for the company, as even the most carefully built software can have vulnerabilities. What’s more important is that the company takes security concerns seriously and fixes them as quickly as possible.
      • Of course, companies selling products and enthusiasts advertising their latest software can be misled, be misleading, or even outright lie. A product that was originally secure might develop serious flaws later.
  • Do you have a plan to stay well-informed on the latest news about the tools that you use?
    • Setting up a Google alert for “[example product] data breach flaw vulnerability” is one way to find out about problems with a product that you use, though it probably won’t catch every problem.
    • You can also follow tech news websites or social media to keep up with information security news. You can check the “Security News” section of the Security Education Companion, which curates EFF Deeplinks posts relevant to software vulnerabilities, as well as other considerations for people teaching digital security to others.
  • Is this vendor honest about the limitations of their product?
    • If a vendor makes claims like “NSA-Proof” or “Military Grade Encryption” without stating what the security limitations of the product are, this can be a sign that the vendor is overconfident in the security of their product. A vendor should be able to clearly state the situations that their security model doesn’t defend against.
  • Does the company provide a guarantee about the availability of your data?
    • This is sometimes called a “Service Level Agreement” (SLA).
    • How likely is it that this company is going to stick around? Does it seem like they have sustainable business practices?
    • If the service disappears, is there a way to access your data and move it to another service provider or app? Or will it be gone forever?
    • Is there any chance that they will ban you from using their app or service, and thus also lock you out from accessing your data? Think about whether there are any limits on how the service can be used.

Questions to ask the vendor:

Note that you may not be able to hit all of the following points—however, asking these questions will give you a better sense of what to expect from the service.

  • Does the vendor have a privacy policy on their website? Do they share or sell data to any third parties?
    • If you have the means to chat with a lawyer while reviewing the privacy policy, you can ask about:
      • Notification: Do they promise to notify us of any legal demand before handing over any of our data, or data about us (with no exceptions)?
      • Viewing: Do they promise not to look at our data themselves, except when they absolutely need to?
      • Sharing: Do they require anyone who they share the data with to abide by the same privacy policy and notification terms?
      • Restriction: Are they only using the data for the purpose for which it was provided?
  • Will the vendor disclose any client data to their partners or other third parties in the normal course of business? If so, are those conditions clearly stated? What are the privacy practices of those other entities?
  • Does the vendor follow current best practices in responding to government requests in your jurisdiction?
    • Do they require a warrant before handing over user content?
    • Do they publish regular transparency reports?
    • Do they publish law enforcement guides?
  • Do they have a dedicated security team? If so, what is their incident response plan? Do they have any specifics about responding to breaches of data security?
  • Have they had a recent security audit? If there were any security issues found, how quickly were they fixed?
  • How often do they get security audits? Will they share the results, or at least share an executive summary?
  • What measures do they take to secure private data from theft or misuse?
  • Have they had a data breach? (This is not necessarily a bad thing, especially if they have a plan for preventing breaches in the future. What really matters is what was breached: for example, was it a contact list from a webform, or health information files?)
    • If they had a data breach in the past, what measures have they taken to prevent a data breach in the future?
  • How does the company notify customers about data breaches?
  • Does the vendor give advance notice when it changes its data practices?
  • Does the vendor encrypt data in transit? Do they default to secure connections? (For example, does a website redirect an unencrypted HTTP site to an encrypted HTTPS site? A quick way to spot-check this yourself is sketched just after this list.)
  • What is the vendor’s disaster recovery plan and backup scheme?
  • What internal controls exist for vendor’s staff accessing logs, client data and other sensitive information?
  • Does this service allow two-factor authentication on login?
    • If not, why not? How soon do they plan to implement it?
  • Do they push regular software updates?
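
One item above, the HTTP-to-HTTPS redirect, is easy to spot-check yourself. Below is a minimal sketch in Python, assuming the third-party requests library is installed; “” stands in for the vendor’s real domain, and a quick probe like this is no substitute for a proper audit.

    import requests  # third-party HTTP library: pip install requests

    def check_https_redirect(domain: str) -> None:
        """Rough check: does the plain-HTTP version of a site redirect to HTTPS?"""
        # Start from the unencrypted URL; requests follows redirects by default.
        response = requests.get(f"http://{domain}/", timeout=10)
        if response.url.startswith("https://"):
            print(f"{domain}: HTTP redirects to HTTPS ({response.url})")
        else:
            print(f"{domain}: still served over plain HTTP -- worth asking the vendor why")

    check_https_redirect("")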

While many companies don’t yet do this, it is still good to ask:

  • Do they encrypt stored data? (This is also called “encrypted at rest.” For example, when it’s “in the cloud”/on their computers, is it encrypted? A toy illustration of the concept follows this list.)
  • Do they have a bug bounty program? If they do not have a bug bounty program in place, how do they respond to vulnerability reports? (If they are hostile to security researchers, this is a bad sign.)
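
For readers unfamiliar with the “encrypted at rest” idea above, here is a toy illustration of the concept in Python, using the third-party cryptography package. It shows only the idea; it says nothing about how any particular vendor implements storage encryption.

    from cryptography.fernet import Fernet  # third-party package: pip install cryptography

    # "Encrypted at rest" means the bytes sitting on the vendor's disks are
    # ciphertext, unreadable without the key.
    key = Fernet.generate_key()   # real systems keep keys in a key-management service
    box = Fernet(key)

    stored = box.encrypt(b"client record: example data")
    print(stored)               # what a thief copying the disk would see
    print(box.decrypt(stored))  # readable only with the key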

If the service is free...

It is often said that “if the software is free, then you are the product”—this is true of any company that has targeted advertising as a business model. This is even true of the free products that nonprofits use. For this reason, free services and apps should be treated with extra caution. If you are pursuing a free service, in addition to asking the questions above, you will want to consider the following additional points.

  • How does the vendor make money? Do they make money by selling access to—or products based on—your private data?
  • Will they respond to customer service requests?
  • How likely are they to invest in security infrastructure?

If your organization has legally-mandated requirements for protecting data...

  • If your organization has a unique legal circumstance (e.g. needing to abide by attorney-client privilege, HIPAA requirements for those in the medical profession, COPPA and FERPA for working with K-12 students), ask:
    • Is the client data being stored and transmitted in accordance with the legally mandated standards of your field?
    • How often do they re-audit that they are in compliance with these standards?
    • Are the audit results publicly available?
  • If you use education technology or if you work with youth under 18 years old, consider following up with this series of questions for K-12 software vendors: check out EFF’s white paper on student privacy and recommendations for school stakeholders.

These questions do not on their own guarantee that the vendor or product will be perfectly private or secure, but that’s not a promise any vendor or software can make (and if they did, it would be a red flag). However, the answers to these questions should at least give you some idea of whether the vendor takes security and privacy seriously, and can therefore help you make an informed decision about whether to use their product. For more information about considerations for smaller organizations evaluating tools, check out Information Ecology’s Security Questions for Providers.

Guest Author: Jonah Sheridan - Information Ecology

New CBP Border Device Search Policy Still Permits Unconstitutional Searches

U.S. Customs and Border Protection (CBP) issued a new policy on border searches of electronic devices that's full of loopholes and vague language and that continues to allow agents to violate travelers’ constitutional rights. Although the new policy contains a few improvements over rules first published nine years ago, overall it doesn’t go nearly far enough to protect the privacy of innocent travelers or to recognize how exceptionally intrusive electronic device searches are.

Nothing announced in the policy changes the fact that these device searches are unconstitutional, and EFF will continue to fight for travelers’ rights in our border search lawsuit.

Below is a legal analysis of some of the key features of the new policy.

The New Policy Purports to Require Reasonable Suspicion for Forensic Searches, But Contains a Huge Loophole and Has Other Problems

CBP’s previous policy permitted agents to search a traveler’s electronic devices at the border without having to show that they suspect that person of any wrongdoing. The new policy creates a distinction between two types of searches, “basic” and “advanced.” Basic searches are when agents manually search a device, tapping or mousing around it to open applications or files. Advanced searches are when agents use other devices or software to conduct forensic analysis of the contents of a device.

The updated policy states that basic searches can continue to be conducted without suspicion, while advanced searches require border agents to have “reasonable suspicion of activity in violation of the laws enforced or administered by CBP.” [5.1.4]

This new policy dichotomy appears to be inspired by the U.S. Court of Appeals for the Ninth Circuit’s 2013 case U.S. v. Cotterman, which required reasonable suspicion for forensic searches. CBP’s new policy defines advanced searches as those where a border agent “connects external equipment, through a wired or wireless connection, to an electronic device not merely to gain access to the device, but to review, copy, and/or analyze its contents.”

The Cotterman ruling applies only in the western states within the Ninth Circuit’s jurisdiction, whereas this new policy is nationwide. It’s notable, however, that it took CBP five years to address Cotterman in a public policy document.

There are at least four problems with this new rule.

First, this new rule has one huge loophole—border agents don’t need to have reasonable suspicion to conduct an advanced device search when “there is a national security concern.” This exception will surely swallow the rule, as “national security” can be construed exceedingly broadly and CBP has provided few standards for agents to follow. The new policy references individuals on terrorist watch lists, but then mentions unspecified “other articulable factors as appropriate.”

Second, as we argue in our lawsuit against CBP and its sister agencies (now called Alasaad v. Nielsen), the Constitution requires border agents to obtain a probable cause warrant before searching electronic devices given the unprecedented and significant privacy interests travelers have in their digital data. A mere reasonable suspicion standard for electronic device searches at the border, with no court oversight of those searches, is insufficient under the Fourth Amendment to protect personal privacy. Thus, the new policy is wrong to state that it goes “above and beyond prevailing constitutional and legal requirements.” [4]

Third, it is inappropriate to have a legal rule hinge on the flimsy distinction between “manual/basic” and “forensic/advanced” searches. As we’ve argued previously, while forensic searches can obtain deleted files, “manual” searches can be effectively just as intrusive as “forensic” searches given that the government obtains essentially the same information regardless of what search method is used: all the emails, text messages, contact lists, photos, videos, notes, calendar entries, to-do lists, and browsing histories found on mobile devices. And all this data collectively can reveal highly personal and sensitive information about travelers—their political beliefs, religious affiliations, health conditions, financial status, sex lives, and family details.

Fourth, this new rule broadly asserts that border agents need only “reasonable suspicion of activity in violation of the laws enforced or administered by CBP” before conducting an advanced search. We argue that the Constitution requires that agents’ suspicions be tied to data on the device—in other words, border agents must have a basis to believe that the device itself contains evidence of a violation of an immigration or customs law, not a general belief that the traveler has violated an immigration or customs law.

The New Policy Explicitly (and Wrongly) Requires Travelers to Unlock Their Devices at the Border

The new policy basically states that travelers must unlock or decrypt their electronic devices and/or provide their device passwords to border agents. Specifically: “Travelers are obligated to present electronic devices and the information contained therein in a condition that allows inspection of the device and its contents.” [5.3.1]

This is simply wrong—as we explained in our border guide (March 2017), travelers have a right to refuse to unlock, decrypt, or provide passwords to border agents. However, there may be consequences, such as travel delay, device confiscation, or even denial of entry for non-U.S. persons.

The New Policy Confirms Border Agents Cannot Search Cloud Content, But Details Betray CBP’s Stonewalling of EFF's FOIA Request

The new policy finally confirms that CBP agents must avoid accessing data stored in the cloud when they conduct device searches by placing devices in airplane mode or otherwise disabling network connectivity. [5.1.2] In April 2017, the agency said that border agents could only access data that is stored locally on the device. EFF filed a Freedom of Information Act (FOIA) request to get a copy of that policy and to learn precisely how agents avoided accessing data stored remotely.

CBP initially stonewalled our efforts to get answers via our FOIA request, redacting the portions of the policy that explained how border agents avoided searching cloud content. But after we successfully appealed and got more information released, and CBP Acting Commissioner Kevin McAleenan made additional public statements, we were able to learn that border agents were disabling network connectivity on the devices.

Frustratingly, CBP continued to claim that the specific methods border agents used to disable network connectivity—which we suspected was primarily toggling on airplane mode—were secret law enforcement techniques. The redacted document states:

To avoid retrieving or accessing information stored remotely and not otherwise present on the device, where available, steps such as [REDACTED] must be taken prior to search.

Prior to conducting the search of an electronic device, an officer will [REDACTED].

Those details should never have been redacted under FOIA. CBP apparently now agrees. Section 5.1.2 of the new policy states:

To avoid retrieving or accessing information stored remotely and not otherwise present on the device, Officers will either request that the traveler disable connectivity to any network (e.g., by placing the device in airplane mode), or, where warranted by national security, law enforcement, officer safety, or other operational considerations, Officers will themselves disable network connectivity.

It thus appears that the new policy contains much of the same information that CBP redacted in response to our FOIA request. The fact that such information is now public in CBP’s updated policy makes the agency’s initial stonewalling all the more unreasonable. 

Border Agents Will Now Handle Attorney-Client Privileged Information Differently

The new policy provides more robust procedures for data that is protected by the attorney-client privilege (the concept that communications between attorneys and their clients are secret) or that is attorney work product (materials prepared by or for lawyers, or for litigation). A “filter team” will be used to segregate protected material. []

Unfortunately, no new protections are provided for other types of sensitive information, such as confidential source or work product information carried by journalists, or medical records.

Conspicuously Absent: Any Updates to ICE’s Border Device Search Policy

While we welcome the improvements in the new policy, it’s important to note that it only applies to CBP. U.S. Immigration and Customs Enforcement (ICE), which includes agents from Homeland Security Investigations (HSI), has not issued a comparable new policy. And oftentimes it is ICE/HSI agents, not CBP agents, who conduct border searches, so any enhanced privacy protections found in the new policy are wholly inapplicable to searches by these agents.

CBP Must Update Policy in Three Years

Finally, the new policy must be reviewed again by CBP in three years. This is important, given that much has changed in the nine years since the original policy was published in 2009, yet CBP never updated its policy to reflect changes in the law that occurred during that time.

The loopholes and failures of CBP’s new policy for border searches of electronic devices demonstrate that the government continues to flout Fourth Amendment rights at the border. We look forward to putting these flawed policies before a judge in our lawsuit Alasaad v. Nielsen.

Related Cases: Alasaad v. Nielsen

Tipping the Scales on HTTPS: 2017 in Review

The movement to encrypt the web reached milestone after milestone in 2017. The web is in the middle of a massive change from non-secure HTTP to the more secure, encrypted HTTPS protocol. All web servers use one of these two protocols to get web pages from the server to your browser. HTTP has serious problems that make it vulnerable to eavesdropping and content hijacking. By adding Transport Layer Security (or TLS, a prior version of which was known as Secure Sockets Layer or SSL), HTTPS fixes most of these problems. That’s why EFF, and many like-minded supporters, have been pushing for websites to adopt HTTPS by default.

In February, the scales tipped. For the first time, approximately half of Internet traffic was protected by HTTPS. Now, as 2017 comes to a close, an average of 66% of page loads on Firefox are encrypted, and Chrome shows even higher numbers.

At the beginning of the year, Let’s Encrypt had issued about 28 million certificates. In June, it surpassed 100 million certificates. Now, Let’s Encrypt’s total issuance volume has exceeded 177 million certificates. Certificate Authorities (CAs) like Let’s Encrypt issue signed, digital certificates to website owners that help web users and their browsers independently verify the association between a particular HTTPS site and a cryptographic key. Let's Encrypt stands out because it offers these certificates for free. And, with EFF’s Certbot, they are easier than ever for webmasters and website administrators to get.
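
For a concrete sense of what such a certificate binds together, the short Python sketch below (standard library only) performs a TLS handshake with a site and prints the fields tying a hostname to an issuer and a validity window; “” is a placeholder.

    import socket
    import ssl

    def show_certificate(hostname: str, port: int = 443) -> None:
        """Perform a TLS handshake and print basic fields of the server's certificate."""
        context = ssl.create_default_context()  # verifies against the system CA store
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        print("subject:", cert["subject"])
        print("issuer: ", cert["issuer"])   # for many sites today: Let's Encrypt
        print("valid:  ", cert["notBefore"], "to", cert["notAfter"])

    show_certificate("")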

Throughout the entire year, projects like Secure the News and Pulse have been tracking HTTPS adoption among news sites and government sites, respectively.

Browsers have been pushing the movement to encrypt the web further, too. Early this year, Chrome and Firefox started showing users “Not secure” warnings when HTTP websites asked them to submit password or credit card information. In October, Chrome expanded the warning to cover all input fields, as well as all pages viewed in Incognito mode. Chrome eventually plans to show a “Not secure” warning for all HTTP pages.

One of the biggest CAs, Symantec, was threatened with removal of trust by Firefox and Chrome. Symantec had long been held up as an example of a CA that was “too big to fail.” Removing trust directly would break thousands of important websites overnight. However, browsers found many problems with Symantec’s issuance practices, and the browsers collectively decided to make the leap, using a staged distrust mechanism that would minimize impact to websites and people using the Internet. Symantec subsequently sold their CA business to fellow CA DigiCert for nearly a billion dollars, with the expectation that DigiCert’s infrastructure and processes will issue certificates with fewer problems. Smaller CAs WoSign and StartCom were removed from trust by Chrome and Firefox last year.

The next big step in encrypting the web is ensuring that most websites default to HTTPS without ever sending people to the HTTP version of their site. The technology to do this is called HTTP Strict Transport Security (HSTS), and is being more widely adopted. Notably, the registrar for the .gov TLD announced that all new .gov domains would be set up with HSTS automatically. A related and more powerful setting, HTTP Public Key Pinning (HPKP), was targeted for removal by Chrome. The Chrome developers believe that HPKP is too hard for site owners to use correctly, and too risky when used incorrectly. We believe that HPKP was a powerful, if flawed, part of the HTTPS ecosystem, and would rather see it reformed than removed entirely.
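
A site’s HSTS deployment is visible in an ordinary response header, so it is simple to check. A minimal sketch, again in Python with the third-party requests library and a placeholder domain:

    import requests

    # HSTS arrives as a response header telling browsers to insist on HTTPS
    # for every future visit to this domain.
    response = requests.get("", timeout=10)
    hsts = response.headers.get("Strict-Transport-Security")
    if hsts:
        print("HSTS enabled:", hsts)  # e.g. "max-age=31536000; includeSubDomains"
    else:
        print("No HSTS header: a first visit may still go over plain HTTP")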

The Certification Authority Authorization (CAA) standard became mandatory for all CAs to implement this year. CAA allows site owners to specify in DNS which CAs are allowed to issue certificates for their site, and may reduce misissuance events. Let's Encrypt led the way on this by enforcing CAA from its first launch, and EFF is glad to see this protection extended across the broader CA ecosystem.
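
Site owners (and curious visitors) can read a domain’s CAA policy straight out of DNS. A sketch using the third-party dnspython package (version 2.x), with a placeholder domain:

    import dns.resolver  # third-party package: pip install dnspython

    # A CAA record names the certificate authorities permitted to issue
    # certificates for this domain.
    try:
        for record in dns.resolver.resolve("", "CAA"):
            print(record)  # e.g. 0 issue ""
    except dns.resolver.NoAnswer:
        print("No CAA record: any publicly trusted CA may issue for this domain")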

There’s plenty to look forward to in 2018. In a significant improvement to the TLS ecosystem, for example, Chrome plans to require Certificate Transparency starting next April. As browsers and users alike pressure websites for ubiquitous HTTPS, and as the process of getting a certificate gets easier and more intuitive for web masters, we expect 2018 to be another banner year for HTTPS growth and improvement.

We particularly thank Feisty Duck for the Bulletproof TLS Newsletter, which provides updates on many of these topics.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.



Communities from Coast to Coast Fight for Control Over Police Surveillance: 2017 in Review

Americans in 2017 lived under a threat of constant surveillance, both online and offline. While the battle to curtail unaccountable and unconstitutional NSA surveillance continued this year with only limited opportunities appearing in Congress, the struggle to secure community control over surveillance by local police has made dramatic and expanding strides across the country at the local level.

In July, Seattle passed a law making it the nation’s second jurisdiction to require law enforcement agencies to seek community approval before acquiring surveillance technology. Santa Clara County in California, which encompasses most of Silicon Valley, pioneered this reform in spring 2016 before similar proposals later spread across the country.

Two other jurisdictions in the San Francisco Bay Area—the cities of Oakland and Berkeley—have conducted multiple public hearings on proposed reforms to require community control. Both cities are nearing decision points for local legislators who in 2018 will consider whether to empower themselves and their constituents, or whether instead to allow secrecy and unaccountability to continue unfettered.

Other communities across California have also mobilized. In addition to Oakland and Berkeley, EFF has supported proposed reforms in Palo Alto and before the Bay Area Rapid Transit Board, and also addressed communities seeking similar measures in Davis, Humboldt County (where a local group in the Electronic Frontier Alliance hosted two public forums in March and another in December), and Santa Cruz (where local activists began a long running local dialog in 2016).

Reflecting this interest from across the state, the California State Senate approved a measure, S.B. 21, which would have applied the transparency and community control principles of the Santa Clara County ordinance to hundreds of law enforcement agencies across the state. While the measure was successful before the state Senate, and also cleared two committees in the State Assembly, it died without a vote in the state Assembly’s appropriations committee.

While S.B. 21 was not enacted in 2017, we anticipate similar measures advancing in communities across California in 2018. In many other states, municipal bodies have already begun considering analogous policies.

In New York City, over a dozen council members have supported the Public Oversight of Surveillance Technology (POST) Act, which would require transparency before the New York Police Department acquires new surveillance technology. This is an important step forward, though the POST Act lacks the reform elements that, in Santa Clara County and Seattle, have placed communities in control over police surveillance. The support of local policymakers may help bring into the public debate underlying facts about the proposed reform that appear to have escaped the figures who oppose it, including Mayor Bill de Blasio.

In Cambridge, Massachusetts, policymakers began a conversation last year that continued throughout 2017. This October, a coalition of local allies hosted a public forum about a proposed ordinance that the City Council will reportedly consider in 2018. They included Digital Fourth (a member of the EFA), the Pirate Party, and students at the Berkman Klein Center for Internet & Society at Harvard University, one of whom wrote that “[w]ithout appropriate controls, technologies intended for one purpose can be twisted for another.”

In the Midwest, Missouri has emerged as a potentially crucial state in the nationwide battle over surveillance technology. Years after grassroots opposition to police violence vaulted the town of Ferguson to international recognition, St. Louis city policymakers introduced B.B. 66, a measure modeled closely on Santa Clara County’s.

While the Missouri state legislature has yet to consider a similar proposal, it did consider—without yet adopting—another proposed reform to limit law enforcement surveillance. In particular, S.B. 84 would have limited the parameters under which state and local police could deploy cell-site simulators, which use cell phone networks to track a user’s location or monitor data or voice transmissions. This is just one of many invasive surveillance platforms available to law enforcement.

Nearby states have also taken notice of cell-site simulators. Illinois has already enacted a strong law constraining the use of those particular tools, while Nebraska considered a bill that would have prohibited police from using cell-site simulators at all. This established support for limiting one surveillance tool across the region suggests potential traction for process reforms, like Seattle’s and Santa Clara County’s, that would apply to all platforms. 

The fight against unaccountable secret government surveillance will continue across the United States in 2018. While Congress has yet to enact legislation this year protecting the American people from NSA surveillance, local and state legislatures are heeding the call to conduct effective oversight and to empower the communities they represent.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.



Seven Times Journalists Were Censored: 2017 in Review

Social media platforms have developed into incredibly useful resources for professional and citizen journalists, and have allowed people to learn about and read stories that may never have been published in traditional media. Sharing on just one of a few large platforms like Facebook, Twitter, and YouTube may mean the difference between a story being read by a few hundred versus tens of thousands of people.

Unfortunately, these same platforms have taken on the role of censor. They have created moderation policies intended to promote civil speech on their platforms, but simply put, they are not very good at applying them. These moderation policies are applied inconsistently, often without an appeal process, sometimes relying on artificial intelligence to flag content, and usually without transparency into the decision-making process. This results in the censorship and blocking of content of all types.

Globally, these content takedown processes often ignore the important evidentiary and journalistic roles content can play in countries where sharing certain information has consequences far beyond those in the U.S. We recommend any intermediary takedown practice include due process and be transparent, as recommended in our Manila Principles. And, as these examples demonstrate, social media platforms often make censorship decisions without due process, without transparency, and with end results that would make most people scratch their heads and wonder. 

We’re regularly documenting censorship and content takedowns like these on, a platform to document the who, what, and why of content takedowns on social media sites. is a project of the Electronic Frontier Foundation (EFF) and Visualizing Impact.

While there are hundreds, and possibly thousands of examples, here are seven of the most egregious instances of social media platforms censoring journalism in 2017.

1. Human Rights Abuses in Syria and Myanmar Removed from YouTube and Facebook

Social media platforms can contain video or photographic evidence that can be used to build human rights abuse cases, especially in situations where the videos or photos aren’t safe on a hard drive due to potential loss or retaliation, or in instances where larger organizations have been blocked. But first-hand accounts like these are at constant risk on platforms like YouTube and Facebook. YouTube in particular has implemented artificial intelligence systems to identify and remove violent content that may be extremist propaganda or disturbing to viewers, and according to a report in the Intercept, removed documentation of the civil war in Syria. Facebook meanwhile removed photos and images of abuses by the Myanmar government against the Rohingya ethnic minority. 

2. A Buzzfeed Journalist’s Account is Locked on Twitter for a Seven Year-Old Tweet 

In November, Katie Notopoulos, a journalist for Buzzfeed, was locked out of Twitter after a seven-year-old tweet was reported by many people all at once. She was “mass-reported” for a 2011 tweet that read “Kill All White People,” and her account remained locked until the offending tweet was removed. Twitter’s inconsistent content policies allow for this sort of targeted harassment, while making it difficult to know what is and what is not “acceptable” on the platform.

3. Ukrainian News Site Liga is Banned from Facebook

In December, Facebook banned all links and all publications from Liga, an independent Ukrainian news website. Facebook has since restored the links and posts, and is completing an internal investigation. According to Liga, Facebook told them they were banned because of "nudity." A Facebook representative told us that they were blocked because they had "triggered a malicious ad rule." Because of murky moderation policies, organizations can be banned and then given confusing answers about why it happened and what they can do about it. A single platform with this sort of lack of transparency should not be able to flip a switch and stop a majority of the traffic to an entire domain without offering a concrete explanation to affected users.

4. At Request of Indian Government, Twitter Suspends Accounts and Deletes Tweets Sympathetic to Kashmiri Independence

In August, the Indian government asked Twitter to suspend over two dozen Twitter accounts and remove over 100 tweets—some belonging to journalists and activists—that talked about the conflict in Kashmir, or showed sympathy for Kashmiri independence movements. The Indian government claimed the tweets violated Section 69A of India's Information Technology Act, which allows the government to block online content when it believes the content threatens the security, sovereignty, integrity, or defense of the country. 

The Indian government reported the tweets and Twitter accounts, and Twitter contacted the users explaining they would be censored. There were no individual explanations given for why these tweets or accounts were chosen, beyond highlighting the conflict in Kashmir. 

5. Panama Papers Co-Author Blocked from Facebook for Sharing Documents Critical of Maltese Government

Pulitzer prize-winning journalist Matthew Caruana Galizia was locked out of his Facebook account after sharing four posts that Facebook deleted for violating the social network’s community standards. The four posts contained allegations against Malta’s prime minister, his chief of staff, and his minister of energy. The posts included images of documents from the 11.5 million documents in the Panama Papers leak, a collection put together by the International Consortium of Investigative Journalists, of which he is a member. 

It’s unclear what community standard Facebook applied to delete the photos and lock the account, although it seems that it was due to the materials containing private information about individuals. Facebook has since announced that material that would otherwise violate its standards would be allowed if it was found to be “newsworthy, significant, or important to the public interest.” However, the expectation that Facebook moderators should decide what is newsworthy or important is part of the problem: the platform itself, through an undisclosed process, continues to be the gatekeeper for journalistic content. 

6. San Diego CityBeat’s Article on Sexual Harassment Removed from Facebook 

Alex Zaragoza, a writer for San Diego CityBeat, had links to her article removed from Facebook because, according to them, it was an “attack.” The article, entitled “Dear dudes, you’re all trash,” critiqued men for their surprise and obliviousness in the light of multiple, high-profile sexual harassment scandals.

Presumably, the post ran afoul of Facebook’s policy against “hate speech,” which includes attacks against a group on the basis of gender. But as ProPublica noted this summer, those standards aren’t applied evenly: “White men” are a protected group, for example, but “black children” aren’t. 

If Facebook is going to continue to encourage publishers to publish their stories on the platform first, it needs to consider the effect its rules have on journalistic content. They’ve made efforts in the past to modify their standards for historically significant content. For example, they decided after much controversy to allow users to share images of the iconic Vietnam war photo of the ‘Napalm Girl’, recognizing “the history and global importance of this image in documenting a particular moment in time.” They should perhaps consider doing this for contemporary newsworthy content (especially content that expresses valuable critique and dissent from minority voices) that would otherwise run afoul of their rules. 

7. Snapchat and Medium Censor Qatari Media At Request of Saudi Arabia

The Kingdom of Saudi Arabia is one of the world’s most prolific censors. American companies—including Facebook and Google—have at times in the past voluntarily complied with content restriction demands from Saudi Arabia, though we know little about their context. 

In June, Medium complied with requests from the government to restrict access to content from two publications: Qatar-backed Al Araby Al Jadeed (“The New Arab”) and The New Khaliji News. In the interest of transparency, the company sent both requests to Lumen, a database which has collected and analyzed millions of takedown requests since 2001.

In September, Snap disappointed free expression advocates by joining the list of companies willing to team up with Saudi Arabia against Qatar and its media outlets. The social media giant pulled the Al Jazeera Discover Publisher Channel from Saudi Arabia. A company spokesperson told Reuters: “We make an effort to comply with local laws in the countries where we operate.”

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.



The Worst Law in Technology Strikes Again: 2017 in Review

The latest on the Computer Fraud and Abuse Act? It’s still terrible. And this year, the detrimental impacts of the notoriously vague and outdated criminal computer crime statute showed themselves loud and clear. The statute lies at the heart of the Equifax breach, which might have been averted if our laws didn’t criminalize security research. And it’s at the center of a court case pending in the Ninth Circuit Court of Appeals, hiQ v. LinkedIn, which threatens a hallmark of today’s Internet: free and open access to publicly available information.

At EFF, we’ve spent 2017 working to make sure that courts and policy makers understand the role the CFAA has played in undermining security research, and that the Ninth Circuit rejects LinkedIn’s attempt to transform a criminal law meant to target serious computer break-ins into a tool for enforcing corporate computer use policies. We’ve also continued our work to protect programmers and developers engaged in cutting-edge exploration of technology via our Coders’ Rights Project—coders who often find themselves grappling with the messiness that is the CFAA. As this fight carries us into 2018, we stand ready to do all we can to rein in the worst law in technology.

Equifax: The CFAA Chills Another Security Researcher

The CFAA makes it illegal to engage in “unauthorized access” to a computer connected to the Internet, but the statute doesn’t tell us what “authorization” or “without authorization” means. This vague language might have seemed innocuous to some back in 1986 when the statute was passed, but in today’s networked world, where we all regularly connect to and use computers owned by others, courts cannot even agree on what the law covers. And as a result, this pre-Web law is causing serious problems.

One of the biggest problems: the law is notorious for chilling the work of security researchers.

Most of the time, we never hear about the research that could have prevented a security nightmare. But with Equifax’s data breach, we did. As if the news of the catastrophic breach wasn’t bad enough, we learned in October—thanks to reporting by Motherboard—that a security researcher had warned Equifax “[m]onths before its catastrophic data breach . . . that it was vulnerable to the kind of attack that later compromised the personal data of more than 145 million Americans[.]” According to Equifax’s own timeline, the company didn’t patch the vulnerability for six months—and “only after the massive breach that made headlines had already taken place[.]”

The security researcher who discovered the vulnerability in Equifax’s system back in 2016 should have been empowered to bring their findings to someone else's attention after Equifax ignored them. If they had, the breach may have been avoided. Instead, they faced the risk of a CFAA lawsuit and potentially decades in federal prison.

In an era of massive data breaches that impact almost half of the U.S. population as well as people around the globe, a law that ostracizes security researchers is foolish—and it undermines the security of all of us. A security research exemption is necessary to ensure that the security research community can do its work to keep us all safe and secure without fear of prosecution. We’ve been calling for these reforms for years, and they are long overdue.

hiQ v. LinkedIn: Abuse of the CFAA to Block Access to Publicly Available Information

One thing that’s consistently gotten in the way of CFAA reform: corporate interests. And 2017 was no different in this respect. This year, LinkedIn has been pushing to expand the CFAA’s already overly broad scope, so that it can use the statute to maintain its edge over a competing commercial service, hiQ Labs. We blogged about the details of the dispute earlier this year. The social media giant wants to use the CFAA to enforce its corporate policy against using automated scripts—i.e., scraping—to access publicly available information on the open Internet. But that would mean potentially criminalizing automated tools that we all rely on every day. The web crawlers that power Google Search, DuckDuckGo, and the Internet Archive, for instance, are all automated tools that collect (or scrape) publicly available information from across the Web. LinkedIn paints all “bots” as bad, but they are a common and necessary part of the Internet. Indeed, “good bots” were responsible for 23 percent of global Web traffic in 2016. Using them to access publicly available information on the open Internet should not be punishable as a federal felony.
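
To make concrete what “scraping” means here, below is a minimal, hypothetical crawler written with Python’s standard library. It fetches one public page and collects the links on it, which is the essence of what search-engine bots do; the URL is a placeholder. Nothing in a script like this breaks into a computer: it requests the same public pages a browser would, only automatically.

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        """Collect href attributes from <a> tags -- the core of any web crawler."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    # Request one publicly available page, just as a browser would.
    html = urlopen("").read().decode("utf-8", errors="replace")

    parser = LinkCollector()
    parser.feed(html)
    print(f"Found {len(parser.links)} links; first few: {parser.links[:5]}")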

Congress passed the CFAA to target serious computer break-ins. It did not intend to hand private companies a tool for enforcing their computer use policies. Using automated scripts to access publicly available data does not involve breaking into any computer, and neither does violating a website’s terms of use. Neither should be CFAA offenses.

LinkedIn’s expansive interpretation of the CFAA would exacerbate the law’s chilling effects—not only for the security research community, but also for journalists, discrimination researchers, and others who use automated tools to support their socially valuable work. Similar lawsuits are already starting to pop up across the country, including one by the airline Ryanair alleging that Expedia’s fare scraping violated the CFAA.

Luckily, a court in San Francisco cried foul, questioning LinkedIn’s use of the CFAA to block access to public data and finding that the “broad interpretation of the CFAA invoked by LinkedIn, if adopted, could profoundly impact open access to the Internet, a result that Congress could not have intended when it enacted the CFAA over three decades ago.”

The case is now on appeal, and EFF, DuckDuckGo, and the Internet Archive have urged the Ninth Circuit Court of Appeals to uphold the lower court's finding and reject LinkedIn’s shortsighted request to transform the CFAA into a tool for policing the use of publicly available data on the open Internet. And we’re hopeful it will. During a Ninth Circuit oral argument in a different case in July, Judge Susan Graber pushed back [at around 33:40] on Oracle’s argument that automated scraping was a computer crime.

LinkedIn says it wants to protect the privacy of user data. But public data is not private, so why not just put the data behind its pre-existing username and password barrier? It seems that LinkedIn wants to take advantage of the benefits of the open Internet while at the same time abusing the CFAA to avoid the Web’s “open trespass norms.” The CFAA is an old, blunt instrument, and trying to use it to solve a modern, complicated dispute between two companies will undermine open access to information on the Internet for everyone. As we said in our amicus brief:

The power to limit access to publicly available information on the Internet under color of the law should be dictated by carefully considered rules that balance the various competing policy interests. These rules should not allow the handful of companies that collect massive amounts of user data to reap the benefits of making that information publicly available online—i.e., more Internet traffic and thus more data and more eyes for advertisers—while at the same time limiting use of that public information via the force of criminal law.

The Ninth Circuit will hear oral argument on the LinkedIn case in March 2018, and we’ll continue to fight LinkedIn’s expansive interpretation of the CFAA into the New Year.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.



Related Cases: hiQ v. LinkedIn

Court Challenges to NSA Surveillance: 2017 in Review

One of the government’s most powerful surveillance tools is scheduled to sunset in less than three weeks, and, for months, EFF has fought multiple legislative attempts to either extend or expand the NSA’s spying powers—warning the public, Representatives, and Senators about bills that threaten Americans’ privacy. But the frenetic, deadline-driven environment on Capitol Hill belies the slow, years-long progress that EFF has made elsewhere: in the courts.

2017 was a year of slow, procedural breakthroughs.

Here is an update on the lawsuits that EFF and other organizations have brought against broad NSA surveillance powers.

Jewel v. NSA

EFF began 2017 with significant leverage in our signature lawsuit against NSA surveillance, Jewel v. NSA. The year prior, U.S. District Court Judge Jeffrey White in Oakland, California, ordered the U.S. government to comply with EFF’s “discovery” requests—requests for the evidence a party needs as a lawsuit advances toward trial. In a typical lawsuit, this process takes months. In Jewel v. NSA, simply allowing the process to begin took eight years.

This year, EFF waited expectantly for the U.S. government to provide materials that could prove our plaintiff was subject to NSA surveillance through the agency’s practice of tapping into the Internet’s backbone to collect traffic. But expectations were tempered. The U.S. government’s lawyers missed the discovery deadline, asked for an extension, and were given a new, tentative deadline by the judge: August 9, 2017.

The U.S. government’s lawyers missed that deadline, and asked for an extension, approved by the judge: October 9, 2017.

The U.S. government’s lawyers missed that deadline, and asked for another extension, this time indefinitely.                                                          

Producing the materials, the government attorneys claimed, was simply too difficult to do on a timely basis.

“[T]he volume of documents and electronic data that the government defendants must review for potentially responsive information is massive,” the attorneys wrote.

EFF strongly opposed the government’s request for an indefinite extension, and suggested a new deadline in January to comply with the court’s previous orders. The judge agreed and put an end to the delay. The deadline is now January 22, 2018.

The basic premise of our questions is simple: we want information that explains whether the plaintiffs’ data was collected. 

EFF hopes the government can follow the judge’s orders this time.

Mohamed Osman Mohamud v. United States

EFF filed an amicus brief this year asking the Supreme Court to overturn a lower court’s ruling that allowed government agents to bypass the Fourth Amendment when searching through the electronic communications of U.S. persons.

The amicus was filed after a decision in Mohamud v. United States, a lawsuit that concerns the electronic communications of American citizen Mohamed Mohamud. In 2010, Mohamud was arrested for allegedly plotting to use a car bomb during a Christmas tree lighting ceremony in his home state of Oregon. It was only after his conviction that Mohamud learned the government had relied on evidence collected under Section 702 of the FISA Amendments Act for his prosecution.

Section 702 authorizes surveillance of non-U.S. persons located outside the United States. Mohamud, a U.S. citizen living in the United States, fits neither of those categories. After learning that the evidence gathered against him was collected under Section 702, Mohamud challenged the use of this evidence, arguing that Section 702 is unconstitutional.

The U.S. Court of Appeals for the Ninth Circuit, which heard Mohamud’s challenge, disagreed. In a disappointing opinion that undermines constitutional rights, the court ruled that Americans whose communications are incidentally collected under Section 702 have no Fourth Amendment rights when those communications are searched and read by government agents.

Together with Center for Democracy & Technology and New America’s Open Technology Institute, EFF supported Mohamud’s request that the U.S. Supreme Court reconsider the appellate court’s opinion.

“We urge the Supreme Court to review this case and Section 702, which subjects Americans to warrantless surveillance on an unknown scale,” said EFF Staff Attorney Andrew Crocker. “We have long advocated for reining in NSA mass surveillance, and the ‘incidental’ collection of Americans’ private communications under Section 702 should be held unconstitutional once and for all.”

United States v. Agron Hasbajrami

EFF also filed an amicus brief in the case of U.S. v. Agron Hasbajrami, a lawsuit with striking similarities to U.S. v. Mohamud.

In 2011, Agron Hasbajrami was arrested at JFK Airport before a flight to Pakistan for allegedly providing material support to terrorists. In 2013, Hasbajrami pleaded guilty to the charges.

But then something familiar happened. Much like Mohamud, Hasbajrami learned after his plea that the evidence used to charge him had been collected under Section 702. And, just like Mohamud, Hasbajrami is a U.S. person living inside the United States. He is a resident of Brooklyn, New York.

Hasbajrami was allowed to withdraw his guilty plea, and his lawyers moved to suppress the Section 702 evidence. The judge denied the motion, and Hasbajrami pleaded guilty a second time before his trial, which had been set for July 2015, began. The case then moved to the Second Circuit Court of Appeals.

EFF and the ACLU together urged the Second Circuit Court of Appeals to make the right decision. The appellate court has an opportunity to protect the constitutional rights of all Americans, defending their privacy and securing them against warrantless searches. We urged the court not to repeat the misguided decision in U.S. v. Mohamud.

Wikimedia Foundation v. NSA

The Wikimedia Foundation scored an enormous victory this year when an appeals court allowed the nonprofit’s challenge to NSA surveillance to move forward, reversing an earlier decision that threw the lawsuit out.

Represented by the ACLU, Wikimedia sued the NSA in 2015 over the agency’s “upstream” program, the same program that EFF is suing the NSA over in Jewel v. NSA. Wikimedia argued that the program violates both the First and Fourth Amendments.

Originally filed in the U.S. District Court for the District of Maryland, Wikimedia’s lawsuit was thrown out because the court ruled that Wikimedia could not prove it had been harmed by NSA surveillance. The requirement that a plaintiff show they were actually wronged by the conduct they allege is called “standing,” and the court ruled that Wikimedia—and multiple other plaintiffs—lacked it.

But upon appellate review, the Fourth Circuit Court of Appeals held in May 2017 that Wikimedia does have standing. However, the appellate court denied standing to the other plaintiffs in the lawsuit, including Human Rights Watch, The Nation Magazine, The Rutherford Institute, and Amnesty International USA, among others.

This win on a narrow, threshold issue—standing—is an enormous step forward in the continuing fight against NSA surveillance.

What Next? 

The judicial system can be slow and, at times, frustrating. And while victories on matters like discovery and standing may seem merely procedural, they are the first footholds on the way to future successes.

EFF will continue its challenges against NSA surveillance in the courts, and we are proud to stand by our partners who do the same.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.



Related Cases: Wikimedia v. NSA; Jewel v. NSA

