Why We Can’t Give You A Recommendation

No single messaging app can perfectly meet everyone’s security and communication needs, so we can’t make a recommendation without considering the details of a particular person’s or group’s situation. Straightforward answers are rarely correct for everyone—and if they’re correct now, they might not be correct in the future.

At the time of writing, if we were locked in a room and told we could only leave if we gave a simple, direct answer to the question of what messenger the average person should use, the answer we at EFF would reluctantly give is, “Probably Signal or WhatsApp.” Both employ the well-regarded Signal protocol for end-to-end encryption. Signal stands out for collecting minimal metadata on users, meaning it has little to nothing to hand over if law enforcement requests user information. WhatsApp’s strength is that it is easy to use, making secure messaging more accessible for people of varying skill levels and interests.
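
For readers who want a concrete sense of what end-to-end encryption means, below is a minimal, illustrative sketch using the PyNaCl library. It is not the Signal protocol, which layers forward secrecy, deniability, and asynchronous key agreement on top of this basic idea, and the names and message are invented for the example. The point is simply that only the recipient’s private key can recover the plaintext, so the service relaying the message never sees it.

```python
# Minimal sketch of end-to-end encryption with PyNaCl (libsodium bindings).
# Illustration only: this is not the Signal protocol, which adds forward
# secrecy and other properties on top of this basic public-key model.
from nacl.public import PrivateKey, Box

# Each party generates a key pair; private keys never leave their devices.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts to Bob's public key. The server relaying this ciphertext
# cannot read it, because decryption requires Bob's private key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at the usual place at noon")

# Bob decrypts using his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at the usual place at noon"
```

Note that anything that happens to the plaintext after decryption, such as an unencrypted cloud backup, sits outside this protection entirely, which is why the backup settings discussed below matter so much.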

No single messaging app can perfectly meet everyone’s security and communication needs.

However, once let out of the room, we would go on to describe the significant trade-offs. While Signal offers strong security features, its reliability can be inconsistent. Using it in preference to a more mainstream tool might attract unwanted attention and scrutiny, and pointing high-risk users exclusively to Signal could make that problem worse. And although WhatsApp’s user-friendly features produce a smooth user experience, they can also undermine encryption; settings prompts like automatic cloud backups, for example, can store unencrypted message content with a third party and effectively defeat the purpose of end-to-end encryption.

Any of these pros or cons can change suddenly or even imperceptibly. WhatsApp could change its policies around sharing user data with its parent company Facebook, as it did in 2016. Signal could be compelled, through secret legal process, to log users’ metadata without notifying them. A newly discovered flaw in the design of either messenger could render its protections useless in the future. An unpublicized flaw might mean that none of those protections work right now.

More generally, security features are not the only variables that matter in choosing a secure messenger. An app with great security features is worthless if none of your friends and contacts use it, and the most popular and widely used apps can vary significantly by country and community. Poor quality of service or having to pay for an app can also make a messenger unsuitable for some people. Device selection plays a role as well; for an iPhone user who communicates mostly with other iPhone users, for example, iMessage may be a great option (since iMessages between iPhones are end-to-end encrypted by default).

Security features are not the only variables that matter in choosing a secure messenger.

The question of who or what someone is worried about also influences which messenger is right for them. End-to-end encryption is great for preventing companies and governments from accessing your messages. But for many people, companies and governments are not the biggest threat, and therefore end-to-end encryption might not be the biggest priority. For example, if someone is worried about a spouse, parent, or employer with physical access to their device, the ability to send ephemeral, “disappearing” messages might be their deciding factor in choosing a messenger.

Even a confident recommendation for one person will most likely include more than one messenger. It’s not unusual to use a number of different tools for different contexts, such as work, family, different groups of friends, or activism and community organizing.

Based on all of these factors and more, any recommendation is much more like a reasonable guess than an indisputable fact. A messenger recommendation must acknowledge all of these factors—and, most importantly, the ways they change over time. It’s hard enough to do that for a specific individual, and nearly impossible to do it for a general audience.


This post is part of a series on secure messaging.
Find the full series here.

Secure Messaging? More Like A Secure Mess.

There is no such thing as a perfect or one-size-fits-all messaging app. For users, a messenger that is reasonable for one person could be dangerous for another. And for developers, there is no single correct way to balance security features, usability, and the countless other variables that go into making a high-quality, secure communications tool.

Over the next week, we’ll be posting a series of articles to explain what makes different aspects of secure messaging so complex:

Tuesday - Why We Can’t Give You A Recommendation
Wednesday - Thinking About What You Need In A Messenger
Thursday - Building A Secure Messenger
Friday - Beyond Implementation: Policy Considerations for Messengers

Back in 2014, we released a Secure Messaging Scorecard that attempted to objectively evaluate messaging apps based on a number of criteria. After several years of feedback and a lengthy user study, however, we realized that the “scorecard” format dangerously oversimplified the complex question of how various messengers stack up from a security perspective. With this in mind, we archived the original scorecard, warned people to not rely on it, and went back to the drawing board.

Along with the significant valid criticisms of the original scorecard, EFF has heard supporters’ requests for an updated secure messaging guide. After multiple internal attempts to draft and test a consumer-facing guide, we concluded that we could not clearly describe the security features of many popular messaging apps in a consistent and complete way while accounting for the varied situations and security concerns of our audience.

So we have decided to take a step back and share what we have learned from this process: in sum, that secure messaging is hard to get right—and it’s even harder to tell if someone else has gotten it right. Every day this week, we’ll dive into all the ways we see this playing out, from the complexity of making and interpreting personal recommendations to the lack of consensus on technical and policy standards.

For users, we hope this series will help in developing an understanding of secure messaging that is deeper than a simple recommendation. This approach is more frustrating and time-consuming than a one-and-done list of tools to use or avoid, but we think it is worth it.

For developers, product managers, academics, and other professionals working on secure messaging, we hope this series will clarify EFF’s current thinking on secure messaging and invite further conversation.

This series is not our final word on what matters in secure messaging. EFF will stay active in this space: we will continue reporting on security news, holding the companies behind messaging apps accountable, maintaining our Surveillance Self-Defense guides, and developing resources for trainers.

Here, we want to offer our contribution, based on months of investigation, to an ongoing conversation among users, technologists, and others who care about messaging security. We hope this conversation will continue to evolve as the secure messaging landscape changes.

Users interested in secure messaging can also check out EFF’s Surveillance Self-Defense guide. The SSD provides instructions on how to download, configure, and use several messaging apps, as well as more information on how to decide on the right one for you.

Responsibility Deflected, the CLOUD Act Passes

UPDATE, March 23, 2018: President Donald Trump signed the $1.3 trillion government spending bill—which includes the CLOUD Act—into law Friday morning.

“People deserve the right to a better process.”

Those are the words of Jim McGovern, representative for Massachusetts and member of the House of Representatives Committee on Rules, when, after 8:00 PM EST on Wednesday, he and his colleagues were handed a 2,232-page bill to review and approve for a floor vote by the next morning.

In the final pages of the bill—meant only to appropriate future government spending—lawmakers snuck in a separate piece of legislation that made no mention of funds, salaries, or budget cuts. Instead, this final, tacked-on piece of legislation will erode privacy protections around the globe.

This bill is the CLOUD Act. It was never reviewed or marked up by any committee in either the House or the Senate. It never received a hearing. It was robbed of a stand-alone floor vote because Congressional leadership decided, behind closed doors, to attach this un-vetted, unrelated data bill to the $1.3 trillion government spending bill. Congress has a professional responsibility to listen to the American people’s concerns, to represent their constituents, and to debate the merits and flaws of this proposal among themselves—and this week, they failed.

On Thursday, the House approved the omnibus government spending bill, with the CLOUD Act attached, in a 256-167 vote. The Senate followed up late that night with a 65-32 vote in favor. All the bill requires now is the president’s signature.

Make no mistake—you spoke up. You emailed your representatives. You told them to protect privacy and to reject the CLOUD Act, including any efforts to attach it to must-pass spending bills. You did your part. It is Congressional leadership—negotiating behind closed doors—who failed.

Because of this failure, U.S. and foreign police will have new mechanisms to seize data across the globe. Because of this failure, your private emails, your online chats, your Facebook, Google, Flickr photos, your Snapchat videos, your private lives online, your moments shared digitally between only those you trust, will be open to foreign law enforcement without a warrant and with few restrictions on using and sharing your information. Because of this failure, U.S. laws will be bypassed on U.S. soil.

As we wrote before, the CLOUD Act is a far-reaching, privacy-upending piece of legislation that will:

  • Enable foreign police to collect and wiretap people's communications from U.S. companies, without obtaining a U.S. warrant.
  • Allow foreign nations to demand personal data stored in the United States, without prior review by a judge.
  • Allow the U.S. president to enter "executive agreements" that empower police in foreign nations that have weaker privacy laws than the United States to seize data in the United States while ignoring U.S. privacy laws.
  • Allow foreign police to collect someone's data without notifying them about it.
  • Empower U.S. police to grab any data, regardless of whether it belongs to a U.S. person, no matter where it is stored.

And, as we wrote before, this is how the CLOUD Act could work in practice:

London investigators want the private Slack messages of a Londoner they suspect of bank fraud. The London police could go directly to Slack, a U.S. company, to request and collect those messages. The London police would not necessarily need prior judicial review for this request. The London police would not be required to notify U.S. law enforcement about this request. The London police would not need a probable cause warrant for this collection.

Predictably, in this request, the London police might also collect Slack messages written by U.S. persons communicating with the Londoner suspected of bank fraud. Those messages could be read, stored, and potentially shared, all without the U.S. person knowing about it. Those messages, if shared with U.S. law enforcement, could be used to criminally charge the U.S. person in a U.S. court, even though a warrant was never issued.

This bill has large privacy implications both in the U.S. and abroad. It was never given the attention it deserved in Congress.

As Rep. McGovern said, the people deserve the right to a better process.

The New Frontier of E-Carceration: Trading Physical for Virtual Prisons

Criminal justice advocates have been working hard to abolish cash bail schemes and dismantle the prison industrial complex. And one of the many tools touted as an alternative to incarceration is electronic monitoring or “EM”: a form of digital incarceration, often using a wrist bracelet or ankle “shackle” that can monitor a subject’s location, blood alcohol level, or breath. But even as the use of this new incarceration technology expands, regulation and oversight of it—and of the unprecedented amount of information it gathers—still lag behind.

There are many different kinds of electronic monitoring schemes:

  1. Active GPS tracking, where the transmitter monitors a person's location using satellites and reports it in real time at set intervals.
  2. Passive GPS tracking, where the transmitter tracks a person's activity and stores location information for download the next day.
  3. Radio frequency ("RF") monitoring, which is primarily used for curfew monitoring: a home monitoring unit is set to detect the bracelet within a specified range and sends confirmation to a monitoring center.
  4. Secure Continuous Remote Alcohol Monitoring ("SCRAM"), which analyzes a person's perspiration to extrapolate blood alcohol content (BAC) once every hour.
  5. Breathalyzer monitoring, which tests a subject’s breath at random to estimate BAC and typically includes a camera.

Monitors are commonly a condition of pre-trial release, or post-conviction supervision, like probation or parole. They are sometimes a strategy to reduce jail and prison populations. Recently, EM’s applications have widened to include juveniles, the elderly, individuals accused or convicted of DUIs or domestic violence, immigrants awaiting legal proceedings, and adults in drug programs.

This increasingly wide use of EM by law enforcement remains relatively unchecked. That’s why EFF, along with over 50 other organizations, has endorsed a set of Guidelines for Respecting the Rights of Individuals on Electronic Monitoring. The guidelines are a multi-stakeholder effort led by the Center for Media Justice's Challenging E-carceration project to outline the legal and policy considerations that law enforcement’s use of EM raises for monitored individuals’ digital rights and civil liberties.

For example, a paramount concern is the risk of racial discrimination. People of color tend to be placed on EM far more often than their white counterparts. In Cook County, IL, for instance, Black people make up 24% of the population yet represent 70% of people on EM. This ratio mirrors the similarly skewed racial disparity in physical incarceration.

Another concern is cost shifting. People on EM often pay user fees ranging from $3 to $35 per day, along with $100 to $200 in setup charges, shifting the costs of electronic incarceration from the government to the monitored and their families. This disproportionately affects poor communities of color, who are already over-policed and over-represented within the criminal justice and immigration systems.

Then there are the consequences to individual privacy that threaten the rights not just of the monitored, but also of those who interact with them. When children, friends, or family members rely on individuals on EM for transportation or housing, they often suffer privacy intrusions from the same mechanisms that monitor their loved ones.

Few jurisdictions have regulations limiting access to location tracking data and its attendant metadata, or specifying how long such information should be kept and for what purpose. Private companies that contract to provide EM to law enforcement typically store location data on monitored individuals and may share or sell clients’ information for a profit. This jeopardizes the safety and civil rights not just of the monitored, but also of their families, friends, and roommates who live, work, or socialize with them.

Just one example of how location information stored over time can provide an intimate portrait of someone’s life—and can even be mined by machine learning to detect deviations in regular travel habits—is featured in this BI analytics marketing video.
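
To make the privacy stakes concrete, here is a hypothetical sketch of the kind of inference a vendor’s analytics could automate from nothing more than nightly location fixes. The coordinates, threshold, and “usual location” are invented for illustration; the point is how little data it takes to flag a deviation in someone’s routine.

```python
import math

def distance_km(a, b):
    """Approximate great-circle (haversine) distance between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

# Hypothetical nightly GPS fixes (date, latitude, longitude) from an ankle monitor.
nightly_fixes = [
    ("2018-03-01", 41.8781, -87.6298),
    ("2018-03-02", 41.8781, -87.6300),
    ("2018-03-03", 41.9484, -87.6553),  # a night spent somewhere else
]
usual_location = (41.8781, -87.6298)

# Flag any night spent more than 1 km from the usual location.
for date, lat, lon in nightly_fixes:
    if distance_km(usual_location, (lat, lon)) > 1.0:
        print(f"{date}: deviation from routine detected")
```

A real analytics product would combine far more data points—daytime movements, visit frequencies, co-location with others—which is exactly why access to, retention of, and sharing of this data need clear limits.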

So, what do we do about EM? We must demand strict constitutional safeguards against its misuse, especially because, as the U.S. Supreme Court recognized in U.S. v. Jones, “GPS monitoring generates a precise, comprehensive record of a person’s public movements that reflects a wealth of detail about her familial, political, professional, religious, and sexual associations.” Pew Research Center studies from 2014 found that 82% of Americans consider the details of their physical location over time to be sensitive information, including 50% who consider it “very sensitive.” Thus, law enforcement should be required to get a warrant or other court order before using EM to track an individual’s location.

For criminal defense attorneys looking for more resources on fighting EM, review our one-pager explainer and practical advice. And if you seek amicus support in your case, email stephanie@eff.org with the following information:

  1. Case name & jurisdiction
  2. Case timeline/pending deadlines
  3. Defense Attorney contact information
  4. Brief description of your EM issue 

Related Cases: US v. Jones

How Congress Censored the Internet

In Passing SESTA/FOSTA, Lawmakers Failed to Separate Their Good Intentions from Bad Law

Today was a dark day for the Internet.

The U.S. Senate just voted 97-2 to pass the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA, H.R. 1865), a bill that silences online speech by forcing Internet platforms to censor their users. As lobbyists and members of Congress applaud themselves for enacting a law tackling the problem of trafficking, let’s be clear: Congress just made trafficking victims less safe, not more.

The version of FOSTA that just passed the Senate combined an earlier version of FOSTA (what we call FOSTA 2.0) with the Stop Enabling Sex Traffickers Act (SESTA, S. 1693). The history of SESTA/FOSTA—a bad bill that turned into a worse bill and then was rushed through votes in both houses of Congress—is a story about Congress’ failure to see that its good intentions can result in bad law. It’s a story of Congress’ failure to listen to the constituents who’d be most affected by the laws it passed. It’s also the story of some players in the tech sector choosing to settle for compromises and half-wins that will put ordinary people in danger.

Silencing Internet Users Doesn’t Make Us Safer

SESTA/FOSTA undermines Section 230, the most important law protecting free speech online. Section 230 protects online platforms from liability for some types of speech by their users. Without Section 230, the Internet would look very different. It’s likely that many of today’s online platforms would never have formed or received the investment they needed to grow and scale—the risk of litigation would have simply been too high. Similarly, in the absence of Section 230 protections, noncommercial platforms like Wikipedia and the Internet Archive likely wouldn’t have been founded, given the high level of legal risk involved with hosting third-party content.

The bill is worded so broadly that it could even be used against platform owners that don’t know that their sites are being used for trafficking.

Importantly, Section 230 does not shield platforms from liability under federal criminal law. Section 230 also doesn’t shield platforms across-the-board from liability under civil law: courts have allowed civil claims against online platforms when a platform directly contributed to unlawful speech. Section 230 strikes a careful balance between enabling the pursuit of justice and promoting free speech and innovation online: platforms can be held responsible for their own actions, and can still host user-generated content without fear of broad legal liability.

SESTA/FOSTA upends that balance, opening platforms to new criminal and civil liability at the state and federal levels for their users’ sex trafficking activities. The platform liability created by new Section 230 carve outs applies retroactively—meaning the increased liability applies to trafficking that took place before the law passed. The Department of Justice has raised concerns [.pdf] about this violating the Constitution’s Ex Post Facto Clause, at least for the criminal provisions.

The bill also expands existing federal criminal law to target online platforms where sex trafficking content appears. The bill is worded so broadly that it could even be used against platform owners that don’t know that their sites are being used for trafficking.

Finally, SESTA/FOSTA expands federal prostitution law to cover those who use the Internet to “promote or facilitate prostitution.”

The Internet will become a less inclusive place, something that hurts all of us.

It’s easy to see the impact that this ramp-up in liability will have on online speech: facing the risk of ruinous litigation, online platforms will have little choice but to become much more restrictive in what sorts of discussion—and what sorts of users—they allow, censoring innocent people in the process.

What forms that erasure takes will vary from platform to platform. For some, it will mean increasingly restrictive terms of service—banning sexual content, for example, or advertisements for legal escort services. For others, it will mean over-reliance on automated filters to delete borderline posts. No matter what methods platforms use to mitigate their risk, one thing is certain: when platforms choose to err on the side of censorship, marginalized voices are censored disproportionately. The Internet will become a less inclusive place, something that hurts all of us.

Big Tech Companies Don’t Speak for Users

SESTA/FOSTA supporters boast that their bill has the support of the technology community, but it’s worth considering what they mean by “technology.” IBM and Oracle—companies whose business models don’t heavily rely on Section 230—were quick to jump onboard. Next came the Internet Association, a trade association representing the world’s largest Internet companies, companies that will certainly be able to survive SESTA while their smaller competitors struggle to comply with it.

Those tech companies simply don’t speak for the Internet users who will be silenced under the law. And tragically, the people likely to be censored the most are trafficking victims themselves.

SESTA/FOSTA Will Put Trafficking Victims in More Danger

Throughout the SESTA/FOSTA debate, the bills’ proponents provided little to no evidence that increased platform liability would do anything to reduce trafficking. On the other hand, the bills’ opponents have presented a great deal of evidence that shutting down platforms where sexual services are advertised exposes trafficking victims to more danger.

Freedom Network USA—the largest national network of organizations working to reduce trafficking in their communities—spoke out early to express grave concerns [.pdf] that removing sexual ads from the Internet would also remove the best chance trafficking victims had of being found and helped by organizations like theirs as well as law enforcement agencies.

Reforming [Section 230] to include the threat of civil litigation could deter responsible website administrators from trying to identify and report trafficking.

It is important to note that responsible website administration can make trafficking more visible—which can lead to increased identification. There are many cases of victims being identified online—and little doubt that without this platform, they would not have been identified. Internet sites provide a digital footprint that law enforcement can use to investigate trafficking into the sex trade, and to locate trafficking victims. When websites are shut down, the sex trade is pushed underground and sex trafficking victims are forced into even more dangerous circumstances.

Freedom Network was far from alone. Since SESTA was introduced, many experts have chimed in to point out the danger that SESTA would put all sex workers in, including those who are being trafficked. Sex workers themselves have spoken out too, explaining how online platforms have literally saved their lives. Why didn’t Congress bring those experts to its deliberations on SESTA/FOSTA over the past year?

While we can’t speculate on the agendas of the groups behind SESTA, we can study those same groups’ past advocacy work. Given that history, one could be forgiven for thinking that some of these groups see SESTA as a mere stepping stone to banning pornography from the Internet or blurring the legal distinctions between sex work and trafficking.

In all of Congress’ deliberations on SESTA, no one spoke to the experiences of the sex workers that the bill will push off of the Internet and onto the dangerous streets. It wasn’t surprising, then, when the House of Representatives presented its “alternative” bill, one that targeted those communities more directly.

“Compromise” Bill Raises New Civil Liberties Concerns

In December, the House Judiciary Committee unveiled its new revision of FOSTA. FOSTA 2.0 had the same inherent flaw that its predecessor had—attaching more liability to platforms for their users’ speech does nothing to fight the underlying criminal behavior of traffickers.

In a way, FOSTA 2.0 was an improvement: the bill was targeted only at platforms that intentionally facilitated prostitution, and so would affect a narrower swath of the Internet. But the damage it would do was much more blunt: it would expand federal prostitution law such that online platforms would have to take down any posts that could potentially be in support of any sex work, regardless of whether there’s an indication of force or coercion, or whether minors were involved.

FOSTA 2.0 didn’t stop there. It criminalized using the Internet to “promote or facilitate” prostitution. Activists who work to reduce harm in the sex work community—by providing health information, for example, or sharing lists of dangerous clients—were rightly worried that prosecutors would attempt to use this law to put their work in jeopardy.

Regardless, a few holdouts in the tech world believed that their best hope of stopping SESTA was to endorse a censorship bill that would do slightly less damage to the tech industry.

They should have known it was a trap.

SESTA/FOSTA: The Worst of Both Worlds

That brings us to last month, when a new bill combining SESTA and FOSTA was rushed through congressional procedure and overwhelmingly passed the House.

When the Department of Justice is the group urging Congress not to expand criminal law and Congress does it anyway, something is very wrong.

Thousands of you picked up your phone and called your senators, urging them to oppose the new Frankenstein bill. And you weren’t alone: EFF, the American Civil Liberties Union, the Center for Democracy and Technology, and many other experts pleaded with Congress to recognize the dangers to free speech and online communities that the bill presented.

Even the Department of Justice wrote a letter urging Congress not to go forward with the hybrid bill [.pdf]. The DOJ said that the expansion of federal criminal law in SESTA/FOSTA was simply unnecessary, and could possibly undermine criminal investigations. When the Department of Justice is the group urging Congress not to expand criminal law and Congress does it anyway, something is very wrong.

Assuming that the president signs it into law, SESTA/FOSTA is the most significant rollback to date of the protections for online speech in Section 230. We hope that it’s the last, but it may not be. Over the past year, we’ve seen more calls than ever to create new exceptions to Section 230.

In any case, we will continue to fight back against proposals that undermine our right to speak and gather online. We hope you’ll stand with us.

How To Change Your Facebook Settings To Opt Out of Platform API Sharing

You shouldn't have to do this. You shouldn't have to wade through complicated privacy settings in order to ensure that the companies with which you've entrusted your personal information are making reasonable, legal efforts to protect it. But Facebook has allowed third parties to violate user privacy on an unprecedented scale, and, while legislators and regulators scramble to understand the implications and put limits in place, users are left with the responsibility to make sure their profiles are properly configured.

Over the weekend, it became clear that Cambridge Analytica, a data analytics company, got access to more than 50 million Facebook users' data in 2014. The data was overwhelmingly collected, shared, and stored without user consent. The scale of this violation of user privacy reflects how Facebook's terms of service and API were structured at the time. Make no mistake: this was not a data breach. This was exactly how Facebook's infrastructure was designed to work.

In addition to raising questions about Facebook's role in the 2016 presidential election, this news is a reminder of the inevitable privacy risks that users face when their personal information is captured, analyzed, indefinitely stored, and shared by a constellation of data brokers, marketers, and social media companies.

Tech companies can and should do more to protect users, including giving users far more control over what data is collected and how that data is used. That starts with meaningful transparency and allowing truly independent researchers—with no bottom line or corporate interest—access to work with, black-box test, and audit their systems. Finally, users need to be able to leave when a platform isn’t serving them—and take their data with them when they do.

Of course, you could choose to leave Facebook entirely, but for many that is not a viable solution. For now, if you'd like to keep your data from going through Facebook's API, you can take control of your privacy settings. Keep in mind that this disables ALL platform apps (like Farmville, Twitter, or Instagram) and you will not be able to log into sites using your Facebook login.

Log into Facebook and visit the App Settings page (or go there manually via the Settings Menu > Apps).

From there, click the "Edit" button under "Apps, Websites and Plugins." Click "Disable Platform."

A modal will appear called “Turn Platform Off,” with a description of the Platform features. Click the “Disable Platform” button.

If disabling platform entirely is too much, there is another setting that can help: limiting the personal information accessible by apps that others use. By default, other people who can see your info can bring it with them when they use apps, and your info becomes available to those apps. You can limit this as follows.

From the same page, click "Edit" under "Apps Others Use." A modal will appear with many checkboxes, including "Bio," "Birthday," "If I'm online," and so on. Uncheck the types of information that you don't want others' apps to be able to access—for most people reading this post, that will mean unchecking every category—and then click the "Save" button.

Advocating for Change: How Lucy Parsons Labs Defends Transparency in Chicago

Here at the Electronic Frontier Alliance, we’re lucky to have incredible member organizations engaging in advocacy on our issues across the U.S. One of those groups in Chicago, Lucy Parsons Labs (LPL), has done incredible work taking on a range of civil liberties issues. They’re a dedicated group of advocates volunteering to make their world (and the Windy City) a better, more equitable place.

We sat down with one of the founders of LPL, Freddy Martinez, to gain a better understanding of the Lab and how they use their collective powers for good. 

How would you describe Lucy Parsons Labs? How did the organization get started, and what need were you trying to fill?

The lab got started four years back, when a few people doing digital security training in Chicago saw there was a need for a more technical group that could bridge the gap between advocacy and technology. We each had areas of interest and expertise that we were doing activism around, and it grew pretty organically from there. For example, lawmakers would try to pass a bill without fully understanding the implications that the piece of legislation would have, technologically or otherwise. As a group of friends, we began working together on these projects to educate lawmakers and inform the public, and the organization grew out of that as we added or expanded projects. We do a lot of public records requests and work on police transparency, but our group has broad, varied interests. The common thread that runs through the work is that we have deep expertise in a lot of different advocacy areas, and we leverage that expertise to make the world better. It lets us sail in many different waters.

LPL participates in the Electronic Frontier Alliance (EFA), a network of grassroots digital rights groups around the country. Your work in Chicago runs the gamut from advocating for transparency in the criminal justice system and investigating civil asset forfeiture, to operating a SecureDrop system for whistleblowers and investigating the use of cell-site simulators by the Chicago Police Department. Given that, how does the EFA play into your work?

I feel that as the organization grows, having groups around the country that are building capacity becomes key to making sure that these projects get done. There’s such a huge amount of work to be done, and having other partners who are interested in various subsections of our work and can help us achieve our goals is really valuable. EFA provides us access to a diverse array of experts, from academics and lawyers to grassroots activists. It gives us a lot of leverage, and lets us share our subject matter expertise in ways we wouldn’t be able to if we were going it alone.

Let’s talk surveillance. LPL has done incredible work via the open records process to expose the use of cell-site simulators (sometimes referred to as “Stingrays” or IMSI Catchers) by the Chicago Police Department. Can you tell us about how you started investigating, and why these kinds of surveillance need to be brought into the public conversation?

I actually heard of this equipment through news reporting—you would see major cities buying these devices, and then troubling patterns began to emerge. Prosecutors would begin dropping cases because they didn’t want to tell defense attorneys where they got the information or how. There were cases of parallel construction. After noticing this trend, I sent my first public records request to find out whether the Chicago Police Department had bought any. Instead of following the law, they decided to ignore the request until a judge ordered them to release the records. The devices were ostensibly acquired for the war on drugs, but this kind of technology is usually used overseas in the war on terror. They test these technologies on black and brown populations in war zones, then bring them back to surveil their own citizens. It’s an abuse of power and an invasion of privacy. We need to be talking about this. We think there’s a reason this stuff is acquired in secret: people would not be okay with their government doing this if they knew.

LPL has done tons of community work in the anti-surveillance realm as well. Why do you believe educating people about how they can protect themselves from surveillance is important?

I think that you need to give people the breathing room to participate in society safely. Surveillance is usually thought of as an eye in the sky watching over your every move, but it’s so much more pervasive than that. We think about these things in abstract ways, with very little understanding of how they can affect our daily lives. A way to frame the importance of, say, encryption, is to use the example of medical correspondence. If you’re talking to your doctor, you don’t want your messages to be seen by anyone else. It’s critical to have these discussions and decisions made in public so that people can make informed decisions about their lives and privacy. This is a broader responsibility we have as a society, and to each other.

Do you have any advice for other community-based advocacy groups based on your experience?

I have found that being organized is extremely important. We’re a small team of volunteers, so we have to keep things really well documented, especially when dealing with something like public records requests. You also have to, and I can’t stress this enough, enjoy the work and make sure you don’t burn out. It’s a labor of love—you need to be invested in these projects and taking care of yourself in order to do effective activism. Otherwise the work will suffer.

LPL has partnered with other organizations and community groups in the past. What are some ways that you’ve found success in coalition building? What advice would you give to other groups that would like to work more collaboratively with their peer groups?

LPL is also part of a larger group called the Chicago Data Collaborative, where we work on sharing and analyzing data about the criminal justice system. One of the most important things to know before embarking on a multi-organization enterprise is that you will have to do a lot of capacity building in order to work together effectively. You’ll need to set aside a lot of time and effort to build context for those not in the know. You must be “in the room” (whether that’s digital or physical) for dedicated, direct collaboration. This is what makes or breaks a good partnership.

Anything else you’d like to add?

I have a bit of advice for people who’d like to get involved in grassroots activism and advocacy but aren’t sure where to start: you never know when you’re going to come across these projects. Being curious and following your gut will take you down weird rabbit holes. Get started somewhere, and you’ll be surprised how far that will take you.

If you’re advocating for digital rights within your community, please explore the Electronic Frontier Alliance and consider joining.

This interview has been lightly edited for length and readability.

A Smattering of Stars in Argentina's First "Who Has Your Back?" ISP Report

It’s Argentina's turn to take a closer look at the practices of its local Internet Service Providers, and at how they treat their customers’ personal data when the government comes knocking.

Argentina's ¿Quién Defiende Tus Datos? (Who Defends Your Data?) is a project of the Asociación por los Derechos Civiles (ADC) and the Electronic Frontier Foundation, and is part of a region-wide initiative by leading Ibero-American digital rights groups to turn a spotlight on how the policies of Internet Service Providers either advance or hinder the privacy rights of users.

The report is based on EFF's annual Who Has Your Back? report, but adapted to local laws and realities. Last year Brazil’s Internet Lab, Colombia’s Karisma Foundation, Paraguay's TEDIC, and Chile’s Derechos Digitales published their own 2017 reports, and ETICAS Foundation released a similar study earlier this year, part of a series across Latin America and Spain.

The report set out to examine which Argentine ISPs best defend their customers. Which are transparent about their policies for responding to government requests for data? Do any challenge disproportionate demands for their users’ data? Which require a judicial order before handing over personal data? Do any notify their users when complying with judicial requests? ADC examined publicly posted information, including privacy policies and codes of practice, from six of the biggest Argentine telecommunications access providers: Cablevisión (Fibertel), Telefónica (Speedy), Telecom (Arnet), Telecentro, IPLAN, and DirecTV (AT&T). Between them, these providers cover 90% of the fixed and broadband market.

Each company was given the opportunity to answer a questionnaire, take part in a private interview, and send any additional information it considered appropriate, all of which was incorporated into the final report. ADC’s rankings for Argentine ISPs are below; the full report, which includes details about each company, is available at: https://adcdigital.org.ar/qdtd

Evaluation Criteria for ¿Quién Defiende Tus Datos?

  1. Privacy Policy: whether the company's privacy policy is easy to understand, whether it tells users which data are collected and how long they are stored, whether users are notified of changes to the policy, whether it includes a notice about the right of access to personal data, and whether it explains how that right may be exercised.
  2. Transparency: whether the company publishes transparency reports that are accessible to the public, including how many requests have been received, complied with, and rejected, with details about the type of requests, the government agencies that made them, and the reasons provided by the authority.
  3. Notification: whether the company provides any kind of notification to customers of government data demands, with bonus points if it notifies them before handing over the data.
  4. Judicial Order: whether the company requires the government to obtain a court order before handing over data, and whether it resists in court data requests that are excessive or do not comply with legal requirements.
  5. Law Enforcement Guidelines: whether the company publishes its guidelines for responding to law enforcement requests.

Companies in Argentina are off to a good start but still have a way to go to fully protect their customers’ personal data and be transparent about who has access to it. ADC and EFF expect to release this report annually to incentivize companies to improve transparency and protect user data. This way, all Argentines will have access to information about how their personal data is used and how it is controlled by ISPs so they can make smarter consumer decisions. We hope next year’s report will shine with more stars.

Offline/Online Project Highlights How the Oppression Marginalized Communities Face in the Real World Follows Them Online

People in marginalized communities who are targets of persecution and violence—from the Rohingya in Burma to Native Americans in North Dakota—are using social media to tell their stories, but finding that their voices are being silenced online.

This is the tragic and unjust consequence of content moderation policies of companies like Facebook, which is deciding on a daily basis what can be and can’t be said and shown online. Platform censorship has ratcheted up in these times of political strife, ostensibly to combat hate speech and online harassment. Takedowns and closures of neo-Nazi and white supremacist sites have been a matter of intense debate. Less visible is the effect content moderation is having on vulnerable communities.

Flawed rules against hate speech have shut down online conversations about racism and harassment of people of color. Ambiguous “community standards” have prevented Black Lives Matter activists from showing the world the racist messages they receive. Rules against depictions of violence have removed reports about the Syrian war and accounts of human rights abuses of Myanmar's Rohingya. These voices, and the voices of aboriginal women in Australia, Dakota pipeline protestors and many others are being erased online. Their stories and images of mass arrests, military attacks, racism, and genocide are being flagged for takedown by Facebook. The powerless struggle to be heard in the first place; online censorship further marginalizes vulnerable communities. This is not OK.

In response, EFF and Visualizing Impact launched an awareness project today that highlights the online censorship of communities across the globe that are struggling or in crisis. Offline/Online is a series of visuals demonstrating that the inequities and oppression these communities face in the physical world are being replicated online. The visuals can be downloaded and shared on Twitter, Facebook, and Snapchat, or printed out for distribution.

In one, the displacement of nearly 700,000 Rohingya Muslims from Myanmar because of state violence is represented in a photo showing Rohingya children trying to board a small boat. Rohingya refugees, many of whom are women and children, are arriving in Bangladesh with wounds from gunshot and fire, according to the United Nations.

And online? Facebook is an essential means of communication in Myanmar. Activists there and in the West have documented the violence against the Rohingya online, only to have their Facebook posts removed and accounts suspended.

Inequity offline, censorship online.

The EFF/Visualizing Impact project exposes this pattern among Palestinians, aboriginal women in Australia, Native Americans, Dakota pipeline protestors, and black Americans. We believe this is just the tip of the iceberg. We are already far down the slippery slope from judicious moderation of online content to outright censorship. With two billion Facebook users worldwide, there are likely more vulnerable communities being subject to online censorship.

Our hope is that activists, concerned citizens, and online communities will post and share Inequity Offline/Censorship Online visuals (found here) many times, raising awareness about the impact of censorship on marginalized communities—a story that is underreported. Sharing the visuals is a step all of us can take to combat online censorship. It may help restore the speech and voices being erased online.

Geek Squad's Relationship with FBI Is Cozier Than We Thought

After the prosecution of a California doctor revealed the FBI’s ties to a Best Buy Geek Squad computer repair facility in Kentucky, new documents released to EFF show that the relationship goes back years. The records also confirm that the FBI has paid Geek Squad employees as informants.

EFF filed a Freedom of Information Act (FOIA) lawsuit last year to learn more about how the FBI uses Geek Squad employees to flag illegal material when people pay Best Buy to repair their computers. The relationship potentially circumvents computer owners’ Fourth Amendment rights.

The documents released to EFF show that Best Buy officials have enjoyed a particularly close relationship with the agency for at least 10 years. For example, an FBI memo from September 2008 details how Best Buy hosted a meeting of the agency’s “Cyber Working Group” at the company’s Kentucky repair facility.

The memo and a related email show that Geek Squad employees also gave FBI officials a tour of the facility before their meeting, and make clear that the law enforcement agency’s Louisville Division “has maintained close liaison with the Geek Squad’s management in an effort to glean case initiations and to support the division’s Computer Intrusion and Cyber Crime programs.”

Another document records a $500 payment from the FBI to a confidential Geek Squad informant. This appears to be one of the same payments at issue in the prosecution of Mark Rettenmaier, the California doctor who was charged with possession of child pornography after Best Buy sent his computer to the Kentucky Geek Squad repair facility.

Other documents show that over the years of working with Geek Squad employees, FBI agents developed a process for investigating and prosecuting people who sent their devices to the Geek Squad for repairs. The documents detail a series of FBI investigations in which a Geek Squad employee would call the FBI’s Louisville field office after finding what they believed was child pornography.

The FBI agent would show up, review the images or video, and determine whether the material appeared to be illegal content. After that, the agent would seize the hard drive or computer and send it to another FBI field office near where the owner of the device lived. Agents at that local FBI office would then investigate further, and in some cases try to obtain a warrant to search the device.

Some of these reports indicate that the FBI treated Geek Squad employees as informants, identifying them as “CHS,” which is shorthand for confidential human sources. In other cases, the FBI identifies the initial calls as coming from Best Buy employees, raising questions as to whether certain employees had different relationships with the FBI.

The documents released to EFF concerning the investigation into Rettenmaier’s computers do not appear to have been made public in that prosecution, and they raise additional questions about the level of cooperation between the company and law enforcement.

For example, the documents state that Geek Squad employees alert the FBI only when they happen to find illegal material during a manual search of images on a device, and that the FBI does not direct those employees to actively look for illegal content.

But some evidence in the case appears to show that Geek Squad employees did make an affirmative effort to identify illegal material. For example, the image found on Rettenmaier’s hard drive was in unallocated space, which typically requires forensic software to find. Other evidence showed that Geek Squad employees were financially rewarded for finding child pornography. Such a bounty would likely encourage Geek Squad employees to actively sweep for suspicious content.
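
To see why material in unallocated space is not something a technician simply stumbles across while browsing folders, here is a simplified, hypothetical sketch of the kind of signature scan (“file carving”) that forensic tools run against a raw disk image. Real forensic software handles fragmentation, filesystem metadata, and many file types; the file name here is invented for the example.

```python
# Simplified sketch of "file carving": scanning raw disk bytes for JPEG
# signatures, the way forensic tools recover images from unallocated space.
# Illustration only; real tools are far more sophisticated.
JPEG_START = b"\xff\xd8\xff"
JPEG_END = b"\xff\xd9"

def carve_jpegs(raw_image_path):
    """Return byte blobs that look like JPEG files, found anywhere on the disk image."""
    with open(raw_image_path, "rb") as f:
        data = f.read()
    found = []
    start = data.find(JPEG_START)
    while start != -1:
        end = data.find(JPEG_END, start)
        if end == -1:
            break
        found.append(data[start:end + 2])  # include the end-of-image marker
        start = data.find(JPEG_START, end + 2)
    return found

# Hypothetical usage: images = carve_jpegs("drive.img")
```

The point is that recovering such files requires deliberately scanning the entire raw disk, which sits uneasily with the claim that employees only happen upon illegal material during a manual search of images.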

Although these documents provide new details about the FBI’s connection to Geek Squad and its Kentucky repair facility, the FBI has withheld a number of other documents in response to our FOIA suit. Worse, the FBI has refused to confirm or deny to EFF whether it has similar relationships with other computer repair facilities or businesses, despite our FOIA specifically requesting those records. The FBI has also failed to produce documents that would show whether the agency has any internal procedures or training materials that govern when agents seek to cultivate informants at computer repair facilities.

We plan to challenge the FBI’s stonewalling in court later this spring. In the meantime, you can read the documents produced so far here and here.

Related Cases: FBI Geek Squad Informants FOIA Suit
