Building A Secure Messenger

Given different people’s and communities’ security needs, it’s hard to reach consensus on what a “secure” messenger must provide. In this post, we discuss various options for developers to consider when working towards the goal of improving a messenger’s security. A messenger that’s perfectly secure for every single person is unlikely to exist, but there are still concrete steps developers can take in that direction.

Messengers in the real world reflect a series of compromises by their creators. Technologists often think of those compromises in terms of what encryption algorithms or protocols are chosen. But the choices that undermine security in practice often lie far away from the encryption engine.

Encryption is the Easy Part

The most basic building block towards a secure messenger is end-to-end encryption. End-to-end encryption means that a messenger must encrypt messages in a way that nobody besides the intended recipient(s)—not messaging service providers, government authorities, or third-party hackers—can read them.

The choices that undermine security in practice often lie far away from the encryption engine.

The actual encryption is not the hard part. Most tools use very similar crypto primitives (e.g. AES, SHA2/3, and ECDH). The differences in algorithm choice rarely matter. Apps have evolved to use very similar encryption protocols (e.g. Signal's Double Ratchet). We expect that any application making a good-faith effort to provide this functionality will have published a documented security design.
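For illustration, here is a toy sketch of the symmetric-key ratchet idea at the heart of protocols like the Double Ratchet: each message key is derived one-way from a chain key, so no single message key exposes the others. This is only one component of the real protocol, which also mixes fresh ECDH output into the chain; the labels and "shared secret" below are invented for the example.

```python
import hmac
import hashlib

def ratchet_step(chain_key: bytes):
    """Derive the next chain key and a one-time message key from the current chain key."""
    next_chain = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    message_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return next_chain, message_key

# Toy root key; in a real protocol this comes from an authenticated handshake.
chain_key = hashlib.sha256(b"toy shared secret from the initial handshake").digest()
message_keys = []
for _ in range(3):
    chain_key, mk = ratchet_step(chain_key)
    message_keys.append(mk)

assert len(set(message_keys)) == 3  # every message gets a fresh key
```

Because each step is a one-way function, an attacker who steals the current chain key still cannot compute the message keys that came before it.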

Beyond encryption, what remains are product trade-offs and implementation difficulties, which is where the hottest debate over secure messaging lies.

Next: Important Details That Are Hard to Perfect

Every secure messaging app has to worry about code quality, user experience, and service availability. These are hard to perfect, but neglecting them entirely will render an application unusable.

When it comes to encrypted messaging apps, there’s a big difference between the theoretical security they provide and the practical security they provide. That’s because while the math behind an encryption algorithm may be flawless, a programmer may still make mistakes when translating that math into actual code. As a result, messengers vary in their implementation quality—i.e. how well the code was written, and how likely the code is to have bugs that could introduce security vulnerabilities.

Every secure messaging app has to worry about code quality, user experience, and service availability.

Code quality is particularly hard to assess, even for professional engineers. Whether the server and client codebases are open or closed source, they should be regularly audited by security specialists. When a codebase is open source, it can be reviewed by anyone, but that doesn’t mean it has been, so being open source is not in itself a guarantee of security. Nor is it a guarantee of insecurity: modern cryptography doesn’t require the encryption algorithm to be kept secret to function.

User experience quality can also vary, specifically with regard to the user’s ability to successfully send and receive encrypted messages. The goalpost here is that the user always sends messages encrypted such that only the intended recipients can read them. The user experience should be designed around reaching that goal.

While this goal seems straightforward, it’s difficult to achieve in practice. For example, say Alice sends a message to Bob. Before Bob can read it, he gets a new phone and has to update his encryption keys. Should Alice’s phone re-send the message to Bob’s new phone automatically, or wait for Alice to approve Bob’s new encryption keys before resending the message? What if Bob’s sister shoulder-surfed his password and signed into his account on her phone? If Alice’s phone re-sends the message on her behalf, she risks accidentally sending it to Bob’s sister instead of Bob.

Different applications might have different priorities about message delivery versus sticking to encryption guarantees, but we expect applications to make thoughtful choices and give advanced users the option to choose for themselves.

Like seatbelts and two-factor authentication, the biggest failure mode of secure messengers is not using them at all. If a tool fails to reliably deliver messages in congested or hostile network conditions, users may be forced to fall back to less secure channels. Building a reliable service takes consistent, applied engineering effort that smaller services might not have the resources for, but it’s essential to ensuring the user’s security.

Adding Security Features

Past these basic tenets of implementation and user experience, the discussion becomes thornier. Security benefits can get left behind when they’re deemed to not be worth the cost to implement or when they’re judged as detrimental to ease of use. We recommend considering some features in particular to create a more robust secure messenger.

There's a big difference between the theoretical and practical security messengers provide.

Modern messengers store conversation histories in the cloud. If a secure messenger stores the conversation history unencrypted in the cloud (or encrypted under information that the service provider can access), then the messenger might as well not be end-to-end encrypted. Messengers can choose to encrypt backups under a key kept on the user’s device or a password that only the user knows, or they can choose not to encrypt backups at all. If backups aren’t encrypted, they should be off by default, and the user should have an opportunity to understand the implications of turning them on.
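One way a client can keep the provider out of backups is to derive the backup encryption key from a password the service never sees, using a slow key-derivation function such as PBKDF2. A minimal sketch (the iteration count and strings are illustrative, not a specific app's parameters):

```python
import hashlib
import os

def derive_backup_key(password: str, salt: bytes) -> bytes:
    # A high iteration count makes offline password guessing expensive
    # for the provider or anyone who obtains the encrypted backup.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

salt = os.urandom(16)  # stored alongside the backup; need not be secret
key = derive_backup_key("correct horse battery staple", salt)

# The user can re-derive the same key on a new device...
assert derive_backup_key("correct horse battery staple", salt) == key
# ...but a wrong guess yields a different key entirely.
assert derive_backup_key("wrong guess", salt) != key
```

The trade-off is exactly the one described above: if the user forgets the password, the provider cannot help them recover the backup.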

A secure application should have secure auto-updating mechanisms to quickly mitigate security problems. An out-of-date messenger with known security flaws is potentially vulnerable to more potent attacks than an up-to-date unencrypted messenger.

Perhaps surprisingly, a messenger being marketed as secure can undermine the goals of security. If having the application on a user’s phone is a marker that the user is trying to stay secure, that could make the person’s situation more dangerous. Say an outside party discovers that a person has a “secure” app on their phone. That app could be used as evidence that the person is engaging in an activity that the outside party doesn’t approve of, and invite retribution. The ideal secure messenger is primarily a messenger of sufficiently high popularity that its use is not suspicious.

An app may also choose to provide reliable indicators of compromise that are recognizable to an end-user, including in the event of machine-in-the-middle attacks or a compromised service provider. The application should also allow users to verify all their communications are encrypted to the correct person (i.e. fingerprint verification).

Finally, we recommend allowing users to choose an alias, instead of requiring that their only identifier be a phone number. For vulnerable users, a major identifier like a phone number could be private information; they shouldn’t have to give it out to get the benefits that a secure messenger provides.

Features to Look Forward To

In this section, we discuss the stretch goals that no major app has managed to implement yet, but that less popular apps or academic prototypes might have.

We’d love to see academics and experimental platforms begin working on delivering these in a scalable manner, but we don’t expect a major application to worry about these just yet.

While protecting the contents of messages is important, so too is protecting the metadata of who is talking to whom and when. When messages go through a central server, this is hard to mask. Hiding the network metadata is a feature we’d like to see grow past the experimental phase. Until then, we expect to see services retain only the metadata absolutely necessary to make the service function, and for the minimum possible time.

There's no consensus on what the best combination of features is, and there may never be.

Most messengers let users discover which of their existing contacts are already using the service, but they do so in a way that reveals the entire contents of a contact list to the service. This means that to find out which of your friends are using a service, you have to tell the service provider every person whose contact info you’ve saved to your phone, and you have no guarantee that they’re not saving this data to figure out who’s friends with whom—even for people who don’t use the service. Some messengers are already taking steps to make this information leakage a thing of the past.
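To see why merely hashing contacts before upload doesn't fix this leakage, note that the space of plausible phone numbers is small enough to brute-force. A toy demonstration (the number and prefix are made up; real mitigations involve techniques like private set intersection or hardware-attested contact discovery):

```python
import hashlib
import itertools

def h(phone: str) -> str:
    return hashlib.sha256(phone.encode()).hexdigest()

# The client uploads a "hashed" contact instead of the raw number.
uploaded = h("+15551234567")

# But the provider can simply try every number in a known range.
prefix = "+1555123"
recovered = next(
    prefix + "".join(digits)
    for digits in itertools.product("0123456789", repeat=4)
    if h(prefix + "".join(digits)) == uploaded
)
assert recovered == "+15551234567"  # the hash hid nothing
```

Hashing only protects secrets drawn from a space too large to enumerate; ten-digit phone numbers are trivially enumerable.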

Pushing reliable security updates is of prime importance. But automatically accepting new versions of an application means that users might inadvertently download a backdoored update onto their device. Using reproducible builds and binary transparency, users can at least ensure that the same update gets pushed to every user, making targeted attacks infeasible. Then, there’s a better chance that a backdoor will get noticed.
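The verification step itself is simple once a trusted, append-only log of build digests exists; the hard parts are making builds reproducible and governing the log. A sketch, with a plain dictionary standing in for a real transparency log (the app name, version, and build bytes are placeholders):

```python
import hashlib

# Hypothetical transparency log: anyone can check that the digest they
# received matches the one published for everyone.
published_log = {
    ("messenger", "2.4.1"): hashlib.sha256(b"official build bytes").hexdigest(),
}

def verify_update(name: str, version: str, binary: bytes) -> bool:
    """Accept an update only if its digest matches the publicly logged one."""
    return hashlib.sha256(binary).hexdigest() == published_log.get((name, version))

assert verify_update("messenger", "2.4.1", b"official build bytes")
assert not verify_update("messenger", "2.4.1", b"backdoored build bytes")
```

A targeted attack would require publishing a different digest for one victim, which the public log makes detectable.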

When a messenger allows group messaging, advanced security properties like future secrecy are lost. New protocols aim to fix these holes and give group messaging the security properties that users deserve.

In the secure messaging community, there's no consensus on what the best combination of features is, and there may never be. So while there will never be one perfectly secure messenger to rule them all, technical questions and conversations like the ones described above can move us towards better messengers providing more types of security.

This post is part of a series on secure messaging.
Find the full series here.

Thinking About What You Need In A Secure Messenger

All the features that determine the security of a messaging app can be confusing and hard to keep track of. Beyond the technical jargon, the most important question is: What do you need out of a messenger? Why are you looking for more security in your communications in the first place?

The goal of this post is not to assess which messenger provides the best “security” features by certain technical standards, but to help you think about precisely the kind of security you need.

Here are some examples of questions to guide you through potential concerns and line them up with certain secure messaging features. These questions are by no means comprehensive, but they can help get you into the mindset of evaluating messengers in terms of your specific needs.

Are you worried about your messages being intercepted by governments or service providers?

Are you worried about people in your physical environment reading your messages?

Do you want to avoid giving out your phone number?

How risky would a mistake be? Do you need a “foolproof” encrypted messenger?

Are you more concerned about the “Puddle Test” or the “Hammer Test”?

Do you need features to help you verify the identity of the person you’re talking to?

We can’t capture every person’s concerns or every secure messaging feature with a handful of questions. Other important issues might include corporate ownership, country-specific considerations, or background information on a company’s security decisions.

The more clearly you understand what you want and need out of a messenger, the easier it will be to navigate the wealth of extensive, conflicting, and sometimes outdated information out there. When recommendations conflict, you can use these kinds of questions to decide what direction is right for you. And when conditions change, they can help you decide whether it’s time to change your strategy and find new secure apps or tools.


Are you worried about your messages being intercepted by governments or service providers?

End-to-end encryption ensures that a message is turned into a secret message by its original sender (the first “end”), and decoded only by its final recipient (the second “end”). This means that no one can “listen in” and eavesdrop on your messages in the middle, including the messaging service provider itself. Somewhat counter-intuitively, just because your messages appear in an app on your phone does not mean that the company behind the app can read them. This is a core characteristic of good encryption: even the people who design and deploy it cannot themselves break it.

Do not confuse end-to-end encryption with transport-layer encryption (also known as “network encryption”). While end-to-end encryption protects your messages all the way from your device to your recipient’s device, transport-layer encryption only protects them as they travel from your device to the app’s servers and from the app’s servers to your recipient’s device. In the middle, your messaging service provider can see unencrypted copies of your messages—and, in the case of legal requests, has them available to hand over to law enforcement.

One way to think about the difference between end-to-end and transport-layer encryption is the concept of trust. Transport-layer encryption requires you to trust a lot of different parties with the contents of your messages: the app or service you are using, the government of the country where the service is incorporated, and the government of the country where its servers sit. You shouldn’t have to trust corporations or governments with your messages in order to communicate, and with end-to-end encryption, you don’t have to. As a matter of basic privacy hygiene, it is generally better to go with services that support end-to-end encryption whenever possible.
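The difference in who holds the keys can be made concrete with a toy model. Here a one-time pad stands in for real ciphers purely to show key possession: with transport-layer encryption the server holds the hop keys and can read the message in the middle; with end-to-end encryption only the endpoints hold the key.

```python
import secrets

def otp_xor(key: bytes, data: bytes) -> bytes:
    # A one-time pad stands in for a real cipher; XOR is its own inverse.
    return bytes(k ^ b for k, b in zip(key, data))

msg = b"meet at noon"

# Transport-layer only: each hop is protected, but the server holds the
# hop key and decrypts the message in the middle.
hop_key = secrets.token_bytes(len(msg))
seen_by_server = otp_xor(hop_key, otp_xor(hop_key, msg))
assert seen_by_server == msg  # the provider sees the plaintext

# End-to-end: only Alice and Bob hold the key; the server merely relays
# ciphertext it cannot read.
e2e_key = secrets.token_bytes(len(msg))
ciphertext = otp_xor(e2e_key, msg)
assert ciphertext != msg
assert otp_xor(e2e_key, ciphertext) == msg
```

Real messengers layer both: transport encryption protects metadata in transit, while end-to-end encryption keeps the content away from the provider.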

Are you worried about people in your physical environment reading your messages?

If you are concerned that someone in your physical environment—maybe a spouse, teacher, parent, or employer—might try to take your device and read your messages off the screen directly, ephemeral or “disappearing” messages might be an important feature for you. This generally means you are able to set messages to automatically disappear after a certain amount of time, leaving less content on your device for others to see.

It’s important to remember, though, that just because messages disappear on your device doesn’t mean they disappear everywhere. Your recipient could always take a screenshot of the message before it disappears. And if the app doesn’t use end-to-end encryption (see above), the app provider might also have a copy of your message.

(Outside of messenger choice, you can also make your device more physically secure by enabling full-disk encryption with a password.)

Do you want to avoid giving out your phone number?

Using your phone number as your messenger “username” can be convenient. It’s simple to remember, and makes it easy to find friends using the same service. However, a phone number is often a personally identifying piece of information, and you might not want to give it out to professional contacts, new acquaintances, or other people you don’t necessarily trust.

This can be a concern for women worried about harassment in particular. Activists and others involved in subversive work can also have a problem with this, as it can be dangerous to link the same phone number to both the messenger one uses for activism and the messenger one uses for communicating with friends and family.

Messengers that allow aliases can help. This usually means letting you choose a “username” or identifier that is not your phone number. Some apps also let you create multiple aliases. Even if a messenger requires your phone number to sign up, it may still allow you to use a non-phone number alias as your public-facing username.

How risky would a mistake be? Do you need a “foolproof” encrypted messenger?

Depending on your situation, it’s likely that the last thing you want is to send information unencrypted that you meant to send encrypted. If this is important to you, messengers that encrypt by default or only support encrypted communication are worth looking into.

When a messenger does not encrypt by default and instead offers a special “secret” encrypted mode, users may make mistakes and send unencrypted messages without realizing it. This can also happen because of service issues; when connectivity poses a problem, some apps may provide an unencrypted “fallback” option for messages rather than wait until an encrypted message can be sent.

Are you more concerned about the “Puddle Test” or the “Hammer Test”?

Are you more worried about the possibility of losing your messages forever, or about someone else being able to read them? The “Puddle Test” reflects the first concern, and the “Hammer Test” reflects the second.

Messaging developers sometimes talk about the “Puddle Test”: If you accidentally dropped your phone in a puddle and ruined it, would your messages be lost forever? Would you be able to recover them? Conversely, there’s the “Hammer Test”: If you and a contact intentionally took a hammer to your phones or otherwise tried to delete all your messages, would they really be deleted? Would someone else be able to recover them?

There is a tension between these two potential situations: accidentally losing your messages, and intentionally deleting them. Is it more important to you that your messages be easy to recover if you accidentally lose them, or difficult to recover if you intentionally delete them?

If the hypothetical “Hammer Test” reflects your concerns, you may want to learn about a security property called forward secrecy. If an app is forward-secret, then you could delete all your messages and hand someone else your phone and they would not be able to recover them. Even if they had been surveilling you externally and managed to compromise the encryption keys protecting your messages, they still would not be able to read your past messages.
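Forward secrecy can be illustrated with a one-way key chain: old keys are deleted after use, and because the step function cannot be inverted, today's key reveals nothing about yesterday's. A minimal sketch (the "session root" is a placeholder; real protocols derive it from a handshake):

```python
import hashlib

def step(key: bytes) -> bytes:
    # One-way: SHA-256 can't feasibly be run backwards to recover the input.
    return hashlib.sha256(key).digest()

first_key = hashlib.sha256(b"toy session root").digest()
key = first_key
for _ in range(5):
    key = step(key)  # advance the chain; a real app deletes the old key here

# Someone who seizes the current `key` can only move the chain forward...
assert step(step(step(step(step(first_key))))) == key
# ...but nothing computable from `key` yields `first_key`, so messages
# encrypted under earlier, deleted keys stay unreadable.
```

This is why, in a forward-secret app, genuinely deleting old keys means even a full device compromise later cannot resurrect past conversations.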

Cloud backups of your messages can throw a wrench in the “Hammer Test” described above. Backups help you pass the “Puddle Test,” but make it much harder to intentionally “hammer” your old messages out of existence. Apps that back up your messages unencrypted store a plaintext copy of them outside your device. An unencrypted copy like this can defeat the purpose of forward secrecy, and can stop your deleted messages from really being deleted. For people who are more worried about the “Puddle Test,” this can be a desirable feature. For others, it can be a serious danger.

Do you need features to help you verify the identity of the person you’re talking to?

Most people can be reasonably sure that the contact they are messaging with is who they think it is. For targeted people in high-risk situations, however, it can be critical to be absolutely certain that no one else is viewing or intercepting your conversation. Therefore, this question is for those most high-risk users.

Apps with contact verification can help you be certain that no one outside the intended recipient(s) is viewing your conversation. This feature lets you confirm your recipient’s unique cryptographic “fingerprint” and thus their identity. Usually this takes the form of an in-real-life check; you might scan QR codes on each other’s phones, or you might call or talk to your friend to make sure that the fingerprint code you have for them matches the one they have for you.
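Under the hood, a fingerprint is typically just a short digest of the long-term public key, formatted so two humans can compare it aloud or scan it as a QR code. A simplified sketch (real apps use protocol-specific safety-number formats; the key bytes here are placeholders):

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    # Display a short, human-comparable digest of the key material.
    digest = hashlib.sha256(public_key).hexdigest()
    return " ".join(digest[i:i + 4] for i in range(0, 24, 4))

alice_view_of_bob = fingerprint(b"bob-public-key-bytes")
bob_view_of_self = fingerprint(b"bob-public-key-bytes")
assert alice_view_of_bob == bob_view_of_self  # codes match: same key on both ends

mitm = fingerprint(b"attacker-public-key-bytes")
assert mitm != alice_view_of_bob  # a swapped-in key produces a different code
```

If an attacker substitutes their own key in the middle, the codes the two users see will not match, which is exactly what the in-person check detects.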

When one of your contacts’ fingerprints changes, that is an indicator that something about their cryptographic identity has changed. Someone else might have tricked your app into accepting their cryptographic keys instead—or it might also just mean that they got a new phone. Apps can deal with this in two ways: key change notifications, which alert you to the change while not interfering with messages, or key change confirmations, which require you to acknowledge the change before any messages are sent. The latter generally offers a higher level of protection for vulnerable users who cannot risk misfired messages.
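The two policies can be sketched as a small decision function; the mode names and signature here are invented purely for illustration:

```python
def on_key_change(mode: str, pending_messages: list, acknowledged: bool) -> list:
    """Return the messages that may be (re)sent after a contact's key changes."""
    if mode == "notify":
        # Alert the user but keep delivering: convenient, but a message may
        # reach an attacker's key before the user reads the warning.
        return pending_messages
    if mode == "confirm":
        # Block until the user verifies the new key: safer for targeted
        # users, at the cost of delayed delivery.
        return pending_messages if acknowledged else []
    raise ValueError(f"unknown mode: {mode}")

assert on_key_change("notify", ["hi"], acknowledged=False) == ["hi"]
assert on_key_change("confirm", ["hi"], acknowledged=False) == []
assert on_key_change("confirm", ["hi"], acknowledged=True) == ["hi"]
```

The design question is which default serves the app's users best, and whether high-risk users can switch to the stricter mode.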


Why We Can’t Give You A Recommendation

No single messaging app can perfectly meet everyone’s security and communication needs, so we can’t make a recommendation without considering the details of a particular person’s or group’s situation. Straightforward answers are rarely correct for everyone—and if they’re correct now, they might not be correct in the future.

At time of writing, if we were locked in a room and told we could only leave if we gave a simple, direct answer to the question of what messenger the average person should use, the answer we at EFF would reluctantly give is, “Probably Signal or WhatsApp.” Both employ the well-regarded Signal protocol for end-to-end encryption. Signal stands out for collecting minimal metadata on users, meaning it has little to nothing to hand over if law enforcement requests user information. WhatsApp’s strength is that it is easy to use, making secure messaging more accessible for people of varying skill levels and interests.

No single messaging app can perfectly meet everyone’s security and communication needs.

However, once let out of the room, we would go on to describe the significant trade-offs. While Signal offers strong security features, its reliability can be inconsistent. Using it in preference to a more mainstream tool might attract unwanted attention and scrutiny, and pointing high-risk users exclusively to Signal could make that problem worse. And although WhatsApp’s user-friendly features produce a smooth user experience, they can also undermine encryption; settings prompts like automatic cloud backups, for example, can store unencrypted message content with a third party and effectively defeat the purpose of end-to-end encryption.

Any of these pros or cons can change suddenly or even imperceptibly. WhatsApp could change its policies around sharing user data with its parent company Facebook, as it did in 2016. Signal could be compelled, through secret legal process, to log users’ metadata without notifying them. A newly discovered flaw in the design of either messenger could render all of their protections useless in the future. An unpublicized flaw might mean that none of those protections work right now.

More generally, security features are not the only variables that matter in choosing a secure messenger. An app with great security features is worthless if none of your friends and contacts use it, and the most popular and widely used apps can vary significantly by country and community. Poor quality of service or having to pay for an app can also make a messenger unsuitable for some people. And device selection also plays a role; for an iPhone user who communicates mostly with other iPhone users, for example, iMessage may be a great option (since iMessages between iPhones are end-to-end encrypted by default).

Security features are not the only variables that matter in choosing a secure messenger.

The question of who or what someone is worried about also influences which messenger is right for them. End-to-end encryption is great for preventing companies and governments from accessing your messages. But for many people, companies and governments are not the biggest threat, and therefore end-to-end encryption might not be the biggest priority. For example, if someone is worried about a spouse, parent, or employer with physical access to their device, the ability to send ephemeral, “disappearing” messages might be their deciding factor in choosing a messenger.

Most likely, even a confident recommendation to one person might include more than one messenger. It’s not unusual to use a number of different tools for different contexts, such as work, family, different groups of friends, or activism and community organizing.

Based on all of these factors and more, any recommendation is much more like a reasonable guess than an indisputable fact. A messenger recommendation must acknowledge all of these factors—and, most importantly, the ways they change over time. It’s hard enough to do that for a specific individual, and nearly impossible to do it for a general audience.


Secure Messaging? More Like A Secure Mess.

There is no such thing as a perfect or one-size-fits-all messaging app. For users, a messenger that is reasonable for one person could be dangerous for another. And for developers, there is no single correct way to balance security features, usability, and the countless other variables that go into making a high-quality, secure communications tool.

Over the next week, we’ll be posting a series of articles to explain what makes different aspects of secure messaging so complex:

Tuesday - Why We Can’t Give You A Recommendation
Wednesday - Thinking About What You Need In A Messenger
Thursday - Building A Secure Messenger
Friday - Beyond Implementation: Policy Considerations for Messengers

Back in 2014, we released a Secure Messaging Scorecard that attempted to objectively evaluate messaging apps based on a number of criteria. After several years of feedback and a lengthy user study, however, we realized that the “scorecard” format dangerously oversimplified the complex question of how various messengers stack up from a security perspective. With this in mind, we archived the original scorecard, warned people to not rely on it, and went back to the drawing board.

Along with the significant valid criticisms of the original scorecard, EFF has heard supporters’ requests for an updated secure messaging guide. After multiple internal attempts to draft and test a consumer-facing guide, we concluded it wasn’t possible for us to clearly and completely describe the security features of many popular messaging apps in a consistent way while accounting for the varied situations and security concerns of our audience.

So we have decided to take a step back and share what we have learned from this process: in sum, that secure messaging is hard to get right—and it’s even harder to tell if someone else has gotten it right. Every day this week, we’ll dive into all the ways we see this playing out, from the complexity of making and interpreting personal recommendations to the lack of consensus on technical and policy standards.

For users, we hope this series will help in developing an understanding of secure messaging that is deeper than a simple recommendation. This can be more frustrating and takes more time than giving a one-and-done list of tools to use or avoid, but we think it is worth it.

For developers, product managers, academics, and other professionals working on secure messaging, we hope this series will clarify EFF’s current thinking on secure messaging and invite further conversation.

This series is not our final word on what matters in secure messaging. EFF will stay active in this space: we will continue reporting on security news, holding the companies behind messaging apps accountable, maintaining surveillance self-defense guides, and developing resources for trainers.

Here, we want to offer our contribution, based on months of investigation, to an ongoing conversation among users, technologists, and others who care about messaging security. We hope this conversation will continue to evolve as the secure messaging landscape changes.

Users interested in secure messaging can also check out EFF’s Surveillance Self-Defense guide. The SSD provides instructions on how to download, configure, and use several messaging apps, as well as more information on how to decide on the right one for you.

Responsibility Deflected, the CLOUD Act Passes

UPDATE, March 23, 2018: President Donald Trump signed the $1.3 trillion government spending bill—which includes the CLOUD Act—into law Friday morning.

“People deserve the right to a better process.”

Those are the words of Jim McGovern, representative for Massachusetts and member of the House of Representatives Committee on Rules, when, after 8:00 PM EST on Wednesday, he and his colleagues were handed a 2,232-page bill to review and approve for a floor vote by the next morning.

In the final pages of the bill—meant only to appropriate future government spending—lawmakers snuck in a separate piece of legislation that made no mention of funds, salaries, or budget cuts. Instead, this final, tacked-on piece of legislation will erode privacy protections around the globe.

This bill is the CLOUD Act. It was never reviewed or marked up by any committee in either the House or the Senate. It never received a hearing. It was robbed of a stand-alone floor vote because Congressional leadership decided, behind closed doors, to attach this un-vetted, unrelated data bill to the $1.3 trillion government spending bill. Congress has a professional responsibility to listen to the American people’s concerns, to represent their constituents, and to debate the merits and concerns of this proposal amongst themselves, and this week, they failed.

On Thursday, the House approved the omnibus government spending bill, with the CLOUD Act attached, in a 256-167 vote. The Senate followed up late that night with a 65-32 vote in favor. All the bill requires now is the president’s signature.

Make no mistake—you spoke up. You emailed your representatives. You told them to protect privacy and to reject the CLOUD Act, including any efforts to attach it to must-pass spending bills. You did your part. It is Congressional leadership—negotiating behind closed doors—who failed.

Because of this failure, U.S. and foreign police will have new mechanisms to seize data across the globe. Because of this failure, your private emails, your online chats, your Facebook, Google, Flickr photos, your Snapchat videos, your private lives online, your moments shared digitally between only those you trust, will be open to foreign law enforcement without a warrant and with few restrictions on using and sharing your information. Because of this failure, U.S. laws will be bypassed on U.S. soil.

As we wrote before, the CLOUD Act is a far-reaching, privacy-upending piece of legislation that will:

  • Enable foreign police to collect and wiretap people's communications from U.S. companies, without obtaining a U.S. warrant.
  • Allow foreign nations to demand personal data stored in the United States, without prior review by a judge.
  • Allow the U.S. president to enter "executive agreements" that empower police in foreign nations that have weaker privacy laws than the United States to seize data in the United States while ignoring U.S. privacy laws.
  • Allow foreign police to collect someone's data without notifying them about it.
  • Empower U.S. police to grab any data, regardless of whether it belongs to a U.S. person, no matter where it is stored.

And, as we wrote before, this is how the CLOUD Act could work in practice:

London investigators want the private Slack messages of a Londoner they suspect of bank fraud. The London police could go directly to Slack, a U.S. company, to request and collect those messages. The London police would not necessarily need prior judicial review for this request. The London police would not be required to notify U.S. law enforcement about this request. The London police would not need a probable cause warrant for this collection.

Predictably, in this request, the London police might also collect Slack messages written by U.S. persons communicating with the Londoner suspected of bank fraud. Those messages could be read, stored, and potentially shared, all without the U.S. person knowing about it. Those messages, if shared with U.S. law enforcement, could be used to criminally charge the U.S. person in a U.S. court, even though a warrant was never issued.

This bill has large privacy implications both in the U.S. and abroad. It was never given the attention it deserved in Congress.

As Rep. McGovern said, the people deserve the right to a better process.

The New Frontier of E-Carceration: Trading Physical for Virtual Prisons

Criminal justice advocates have been working hard to abolish cash bail schemes and dismantle the prison industrial complex. And one of the many tools touted as an alternative to incarceration is electronic monitoring or “EM”: a form of digital incarceration, often using a wrist bracelet or ankle “shackle” that can monitor a subject’s location, blood alcohol level, or breath. But even as the use of this new incarceration technology expands, regulation and oversight over it—and the unprecedented amount of information it gathers—still lags behind.

There are many different kinds of electronic monitoring schemes:

  1. Active GPS tracking, where the transmitter monitors a person's location via satellite and reports it in real time at set intervals.
  2. Passive GPS tracking, where the transmitter records a person's movements and stores location information for download later, typically the next day.
  3. Radio Frequency ("RF") monitoring, primarily used for “curfew monitoring”: a home monitoring unit detects the bracelet within a specified range and sends confirmation to a monitoring center.
  4. Secure Continuous Remote Alcohol Monitoring ("SCRAM"), which analyzes a person's perspiration to estimate blood alcohol content once every hour.
  5. Breathalyzer monitoring, which tests a subject’s breath at random intervals to estimate BAC, and typically includes a camera.
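The GPS schemes above differ mainly in when location data leaves the device. As a rough illustration (hypothetical class and field names, not any vendor's actual firmware), an active tracker transmits each fix immediately, while a passive one buffers fixes for a later batch download:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class GpsFix:
    timestamp: int   # seconds since epoch
    lat: float
    lon: float

class ActiveTracker:
    """Reports each fix to the monitoring center as soon as it is taken."""
    def __init__(self, report: Callable[[GpsFix], None]):
        self.report = report  # callback standing in for the network uplink

    def record(self, fix: GpsFix) -> None:
        self.report(fix)      # real-time transmission

class PassiveTracker:
    """Stores fixes locally; the whole batch is downloaded later."""
    def __init__(self):
        self.stored: List[GpsFix] = []

    def record(self, fix: GpsFix) -> None:
        self.stored.append(fix)

    def download(self) -> List[GpsFix]:
        batch, self.stored = self.stored, []
        return batch

# Demo: the same three fixes under the two reporting models.
sent: List[GpsFix] = []
active = ActiveTracker(report=sent.append)
passive = PassiveTracker()
for t in (0, 60, 120):
    fix = GpsFix(timestamp=t, lat=41.88, lon=-87.63)
    active.record(fix)
    passive.record(fix)

print(len(sent))                 # prints 3: already transmitted in real time
print(len(passive.download()))   # prints 3: one batch, retrieved on download
```

The privacy stakes follow directly from this design difference: an active tracker's data is continuously available to the monitoring center, while a passive tracker's record is just as complete, only delayed.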

Monitors are commonly a condition of pre-trial release, or post-conviction supervision, like probation or parole. They are sometimes a strategy to reduce jail and prison populations. Recently, EM’s applications have widened to include juveniles, the elderly, individuals accused or convicted of DUIs or domestic violence, immigrants awaiting legal proceedings, and adults in drug programs.

This increasingly wide use of EM by law enforcement remains relatively unchecked. That’s why EFF, along with over 50 other organizations, has endorsed a set of Guidelines for Respecting the Rights of Individuals on Electronic Monitoring. The guidelines are a multi-stakeholder effort led by the Center for Media Justice's Challenging E-carceration project to outline the legal and policy considerations that law enforcement’s use of EM raises for monitored individuals’ digital rights and civil liberties.

A paramount concern is the risk of racial discrimination: people of color tend to be placed on EM far more often than their white counterparts. Black people in Cook County, IL, for example, make up 24% of the population, yet represent 70% of people on EM. This ratio mirrors the similarly skewed racial disparity in physical incarceration.

Another concern is cost shifting. People on EM often pay user fees ranging from $3 to $35 per day, along with $100 to $200 in setup charges, shifting the costs of electronic incarceration from the government to the monitored and their families. This disproportionately affects poor communities of color who are already over-policed and over-represented within the criminal justice and immigration systems.

Then there are the consequences to individual privacy that threaten the rights not just of the monitored, but also of those who interact with them. When children, friends, or family members rely on individuals on EM for transportation or housing, they often suffer privacy intrusions from the same mechanisms that monitor their loved ones.

Few jurisdictions have regulations limiting access to location tracking data and its attendant metadata, or specifying how long such information should be kept and for what purpose. Private companies that contract to provide EM to law enforcement typically store location data on monitored individuals and may share or sell clients’ information for a profit. This jeopardizes the safety and civil rights not just of the monitored, but also of their families, friends, and roommates who live, work, or socialize with them.

One example of how location information stored over time can provide an intimate portrait of someone’s life, and even be mined by machine-learning inference to detect deviations in regular travel habits, is featured in this BI-analytics marketing video.

So, what do we do about EM? We must demand strict constitutional safeguards against its misuse, especially because, as the U.S. Supreme Court recognized in U.S. v. Jones, “GPS monitoring generates [such] a precise, comprehensive record of a person’s public movements that reflects a wealth of detail about her familial, political, professional, religious, and sexual associations.” A 2014 Pew Research Center study found that 82% of Americans consider the details of their physical location over time to be sensitive information, and 50% consider it “very sensitive.” Thus, law enforcement should be required to get a warrant or other court order before using EM to track an individual’s location.

For criminal defense attorneys looking for more resources on fighting EM, review our one-pager explainer and practical advice. And if you seek amicus support in your case, email us with the following information:

  1. Case name & jurisdiction
  2. Case timeline/pending deadlines
  3. Defense Attorney contact information
  4. Brief description of your EM issue 

Related Cases: US v. Jones

How Congress Censored the Internet

In Passing SESTA/FOSTA, Lawmakers Failed to Separate Their Good Intentions from Bad Law

Today was a dark day for the Internet.

The U.S. Senate just voted 97-2 to pass the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA, H.R. 1865), a bill that silences online speech by forcing Internet platforms to censor their users. As lobbyists and members of Congress applaud themselves for enacting a law tackling the problem of trafficking, let’s be clear: Congress just made trafficking victims less safe, not more.

The version of FOSTA that just passed the Senate combined an earlier version of FOSTA (what we call FOSTA 2.0) with the Stop Enabling Sex Traffickers Act (SESTA, S. 1693). The history of SESTA/FOSTA—a bad bill that turned into a worse bill and then was rushed through votes in both houses of Congress—is a story about Congress’ failure to see that its good intentions can result in bad law. It’s a story of Congress’ failure to listen to the constituents who’d be most affected by the laws it passed. It’s also the story of some players in the tech sector choosing to settle for compromises and half-wins that will put ordinary people in danger.

Silencing Internet Users Doesn’t Make Us Safer

SESTA/FOSTA undermines Section 230, the most important law protecting free speech online. Section 230 protects online platforms from liability for some types of speech by their users. Without Section 230, the Internet would look very different. It’s likely that many of today’s online platforms would never have formed or received the investment they needed to grow and scale—the risk of litigation would have simply been too high. Similarly, in absence of Section 230 protections, noncommercial platforms like Wikipedia and the Internet Archive likely wouldn’t have been founded given the high level of legal risk involved with hosting third-party content.

The bill is worded so broadly that it could even be used against platform owners that don’t know that their sites are being used for trafficking.

Importantly, Section 230 does not shield platforms from liability under federal criminal law. Section 230 also doesn’t shield platforms across-the-board from liability under civil law: courts have allowed civil claims against online platforms when a platform directly contributed to unlawful speech. Section 230 strikes a careful balance between enabling the pursuit of justice and promoting free speech and innovation online: platforms can be held responsible for their own actions, and can still host user-generated content without fear of broad legal liability.

SESTA/FOSTA upends that balance, opening platforms to new criminal and civil liability at the state and federal levels for their users’ sex trafficking activities. The platform liability created by new Section 230 carve-outs applies retroactively—meaning the increased liability applies to trafficking that took place before the law passed. The Department of Justice has raised concerns [.pdf] about this violating the Constitution’s Ex Post Facto Clause, at least for the criminal provisions.

The bill also expands existing federal criminal law to target online platforms where sex trafficking content appears. The bill is worded so broadly that it could even be used against platform owners that don’t know that their sites are being used for trafficking.

Finally, SESTA/FOSTA expands federal prostitution law to cover those who use the Internet to “promote or facilitate prostitution.”

The Internet will become a less inclusive place, something that hurts all of us.

It’s easy to see the impact that this ramp-up in liability will have on online speech: facing the risk of ruinous litigation, online platforms will have little choice but to become much more restrictive in what sorts of discussion—and what sorts of users—they allow, censoring innocent people in the process.

What forms that erasure takes will vary from platform to platform. For some, it will mean increasingly restrictive terms of service—banning sexual content, for example, or advertisements for legal escort services. For others, it will mean over-reliance on automated filters to delete borderline posts. No matter what methods platforms use to mitigate their risk, one thing is certain: when platforms choose to err on the side of censorship, marginalized voices are censored disproportionately. The Internet will become a less inclusive place, something that hurts all of us.

Big Tech Companies Don’t Speak for Users

SESTA/FOSTA supporters boast that their bill has the support of the technology community, but it’s worth considering what they mean by “technology.” IBM and Oracle—companies whose business models don’t heavily rely on Section 230—were quick to jump onboard. Next came the Internet Association, a trade association representing the world’s largest Internet companies, companies that will certainly be able to survive SESTA while their smaller competitors struggle to comply with it.

Those tech companies simply don’t speak for the Internet users who will be silenced under the law. And tragically, the people likely to be censored the most are trafficking victims themselves.

SESTA/FOSTA Will Put Trafficking Victims in More Danger

Throughout the SESTA/FOSTA debate, the bills’ proponents provided little to no evidence that increased platform liability would do anything to reduce trafficking. On the other hand, the bills’ opponents have presented a great deal of evidence that shutting down platforms where sexual services are advertised exposes trafficking victims to more danger.

Freedom Network USA—the largest national network of organizations working to reduce trafficking in their communities—spoke out early to express grave concerns [.pdf] that removing sexual ads from the Internet would also remove the best chance trafficking victims had of being found and helped by organizations like theirs as well as law enforcement agencies.

Reforming [Section 230] to include the threat of civil litigation could deter responsible website administrators from trying to identify and report trafficking.

It is important to note that responsible website administration can make trafficking more visible—which can lead to increased identification. There are many cases of victims being identified online—and little doubt that without this platform, they would not have been identified. Internet sites provide a digital footprint that law enforcement can use to investigate trafficking into the sex trade, and to locate trafficking victims. When websites are shut down, the sex trade is pushed underground and sex trafficking victims are forced into even more dangerous circumstances.

Freedom Network was far from alone. Since SESTA was introduced, many experts have chimed in to point out the danger that SESTA would put all sex workers in, including those who are being trafficked. Sex workers themselves have spoken out too, explaining how online platforms have literally saved their lives. Why didn’t Congress bring those experts to its deliberations on SESTA/FOSTA over the past year?

While we can’t speculate on the agendas of the groups behind SESTA, we can study those same groups’ past advocacy work. Given that history, one could be forgiven for thinking that some of these groups see SESTA as a mere stepping stone to banning pornography from the Internet or blurring the legal distinctions between sex work and trafficking.

In all of Congress’ deliberations on SESTA, no one spoke to the experiences of the sex workers that the bill will push off of the Internet and onto the dangerous streets. It wasn’t surprising, then, when the House of Representatives presented its “alternative” bill, one that targeted those communities more directly.

“Compromise” Bill Raises New Civil Liberties Concerns

In December, the House Judiciary Committee unveiled its new revision of FOSTA. FOSTA 2.0 had the same inherent flaw that its predecessor had—attaching more liability to platforms for their users’ speech does nothing to fight the underlying criminal behavior of traffickers.

In a way, FOSTA 2.0 was an improvement: the bill was targeted only at platforms that intentionally facilitated prostitution, and so would affect a narrower swath of the Internet. But the damage it would do was much more blunt: it would expand federal prostitution law such that online platforms would have to take down any posts that could potentially be in support of any sex work, regardless of whether there’s an indication of force or coercion, or whether minors were involved.

FOSTA 2.0 didn’t stop there. It criminalized using the Internet to “promote or facilitate” prostitution. Activists who work to reduce harm in the sex work community—by providing health information, for example, or sharing lists of dangerous clients—were rightly worried that prosecutors would attempt to use this law to put their work in jeopardy.

Regardless, a few holdouts in the tech world believed that their best hope of stopping SESTA was to endorse a censorship bill that would do slightly less damage to the tech industry.

They should have known it was a trap.

SESTA/FOSTA: The Worst of Both Worlds

That brings us to last month, when a new bill combining SESTA and FOSTA was rushed through congressional procedure and overwhelmingly passed the House.

When the Department of Justice is the group urging Congress not to expand criminal law and Congress does it anyway, something is very wrong.

Thousands of you picked up your phone and called your senators, urging them to oppose the new Frankenstein bill. And you weren’t alone: EFF, the American Civil Liberties Union, the Center for Democracy and Technology, and many other experts pleaded with Congress to recognize the dangers to free speech and online communities that the bill presented.

Even the Department of Justice wrote a letter urging Congress not to go forward with the hybrid bill [.pdf]. The DOJ said that the expansion of federal criminal law in SESTA/FOSTA was simply unnecessary, and could possibly undermine criminal investigations. When the Department of Justice is the group urging Congress not to expand criminal law and Congress does it anyway, something is very wrong.

Assuming that the president signs it into law, SESTA/FOSTA is the most significant rollback to date of the protections for online speech in Section 230. We hope that it’s the last, but it may not be. Over the past year, we’ve seen more calls than ever to create new exceptions to Section 230.

In any case, we will continue to fight back against proposals that undermine our right to speak and gather online. We hope you’ll stand with us.

How To Change Your Facebook Settings To Opt Out of Platform API Sharing

You shouldn't have to do this. You shouldn't have to wade through complicated privacy settings in order to ensure that the companies with which you've entrusted your personal information are making reasonable, legal efforts to protect it. But Facebook has allowed third parties to violate user privacy on an unprecedented scale, and, while legislators and regulators scramble to understand the implications and put limits in place, users are left with the responsibility to make sure their profiles are properly configured.

Over the weekend, it became clear that Cambridge Analytica, a data analytics company, got access to more than 50 million Facebook users' data in 2014. The data was overwhelmingly collected, shared, and stored without user consent. The scale of this violation of user privacy reflects how Facebook's terms of service and API were structured at the time. Make no mistake: this was not a data breach. This was exactly how Facebook's infrastructure was designed to work.

In addition to raising questions about Facebook's role in the 2016 presidential election, this news is a reminder of the inevitable privacy risks that users face when their personal information is captured, analyzed, indefinitely stored, and shared by a constellation of data brokers, marketers, and social media companies.

Tech companies can and should do more to protect users, including giving users far more control over what data is collected and how that data is used. That starts with meaningful transparency and allowing truly independent researchers—with no bottom line or corporate interest—access to work with, black-box test, and audit their systems. Finally, users need to be able to leave when a platform isn’t serving them — and take their data with them when they do.

Of course, you could choose to leave Facebook entirely, but for many that is not a viable solution. For now, if you'd like to keep your data from going through Facebook's API, you can take control of your privacy settings. Keep in mind that this disables ALL platform apps (like Farmville, Twitter, or Instagram) and you will not be able to log into sites using your Facebook login.

Log into Facebook and visit the App Settings page (or go there manually via the Settings menu > Apps).

From there, click the "Edit" button under "Apps, Websites and Plugins." Click "Disable Platform."

A modal will appear called “Turn Platform Off,” with a description of the Platform features. Click the “Disable Platform” button.

If disabling platform entirely is too much, there is another setting that can help: limiting the personal information accessible by apps that others use. By default, other people who can see your info can bring it with them when they use apps, and your info becomes available to those apps. You can limit this as follows.

From the App Settings page, find the section called "Apps Others Use" and click its "Edit" button. A modal will appear with checkboxes for each type of information, including "Bio," "Birthday," "If I'm online," and so on. Uncheck the types of information that you don't want others' apps to be able to access (for most people reading this post, that means unchecking every category), then click the "Save" button.

Advocating for Change: How Lucy Parsons Labs Defends Transparency in Chicago

Here at the Electronic Frontier Alliance, we’re lucky to have incredible member organizations engaging in advocacy on our issues across the U.S. One of those groups in Chicago, Lucy Parsons Labs (LPL), has done incredible work taking on a range of civil liberties issues. They’re a dedicated group of advocates volunteering to make their world (and the Windy City) a better, more equitable place.

We sat down with one of the founders of LPL, Freddy Martinez, to gain a better understanding of the Lab and how they use their collective powers for good. 

How would you describe Lucy Parsons Labs? How did the organization get started, and what need were you trying to fill?

The lab got started four years back when a few people doing digital security training in Chicago saw there was a need for a more technical group that could bridge the gap between advocacy and technology. We each had areas of interest and expertise that we were doing activism around, and it grew pretty organically from there. For example, lawmakers would try to pass a bill without fully understanding the implications that the piece of legislation would have, technologically or otherwise. We began to work together on these projects to educate lawmakers and inform the public on these issues as a friend group, and the organization grew out of that as we added or expanded projects. We do a lot of public records requests and work on police transparency, but our group has broad, varied interests. The common thread that runs through the work is that we have a lot of expertise in a lot of different advocacy areas, and we leverage that expertise to make the world better. It lets us sail in many different waters.

LPL participates in the Electronic Frontier Alliance (EFA), a network of grassroots digital rights groups around the country. Your work in Chicago runs the gamut from advocating for transparency in the criminal justice system, to investigating civil asset forfeiture, to operating a SecureDrop system for whistleblowers, to investigating the use of cell-site simulators by the Chicago Police Department. Given that, how does the EFA play into your work?

I feel that the more the organization grows, the more having groups around the country who are building capacity is key to making sure that these projects get done. There’s such a huge amount of work to be done, and having other partners who are interested in various subsections of our work and can help us achieve our goals is really valuable. EFA provides us access to a diverse array of experts, from academics and lawyers to grassroots activists. It gives us a lot of leverage, and lets us share our subject matter expertise in ways we wouldn’t be able to if we were going it alone.

Let’s talk surveillance. LPL has done incredible work via the open records process to expose the use of cell-site simulators (sometimes referred to as “Stingrays” or IMSI Catchers) by the Chicago Police Department. Can you tell us about how you started investigating, and why these kinds of surveillance need to be brought into the public conversation?

I actually heard of this equipment through news reporting—you would see major cities buying these devices, and then troubling patterns began to emerge. Prosecutors would begin dropping cases because they didn’t want to tell defense attorneys where they got the information or how. There were cases of parallel construction. After noticing this trend, I sent my first public records request to get info on whether the Chicago Police Department had bought any. Instead of following the law, they decided to ignore the request until a judge ordered them to release the records. They were ostensibly used for the war on drugs, but usually they are used overseas in the war on terror. They test these technologies on black and brown populations in war zones, then bring them back to surveil their citizens. It’s an abuse of power and an invasion of privacy. We need to be talking about this. We think that there’s a reason that this stuff is acquired in secret, because people would not be okay with their government doing this if they knew.

LPL has done tons of community work in the anti-surveillance realm as well. Why do you believe educating people about how they can protect themselves from surveillance is important?

I think that you need to give people the breathing room to participate in society safely. Surveillance is usually thought of as an eye in the sky watching over your every move, but it’s so much more pervasive than that. We think about these things in abstract ways, with very little understanding of how they can affect our daily lives. A way to frame the importance of, say, encryption, is to use the example of medical correspondence. If you’re talking to your doctor, you don’t want your messages to be seen by anyone else. It’s critical to have these discussions and decisions made in public so that people can make informed decisions about their lives and privacy. This is a broader responsibility we have as a society, and to each other.

Do you have any advice for other community-based advocacy groups based on your experience?

I have found that being organized is extremely important. We’re a small team of volunteers, so we have to keep things really well documented, especially when dealing with something like public records requests. You also have to, and I can’t stress this enough, enjoy the work and make sure you don’t burn out. It’s a labor of love—you need to be invested in these projects and taking care of yourself in order to do effective activism. Otherwise the work will suffer.

LPL has partnered with other organizations and community groups in the past. What are some ways that you’ve found success in coalition building? What advice would you give to other groups that would like to work more collaboratively with their peer groups?

LPL is also part of a larger group called the Chicago Data Collaborative, where we are working on sharing and analyzing data on the criminal justice system. One of the most important pieces of information to know before embarking on a multi-organization enterprise is that you will have to do a lot of capacity building in order to work together effectively. You’ll need to set aside a lot of time and effort to context build for those not in the know. You must be “in the room” (whether that’s digital or physical) for dedicated, direct collaboration. This is what makes or breaks a good partnership.

Anything else you’d like to add?

I have a bit of advice for people who’d like to get involved in grassroots activism and advocacy, but aren’t sure where to start: you never know when you’re going to come across these projects. Being curious and following your gut will take you down weird rabbit holes. Just get started somewhere; you’ll be surprised how far that will take you.

If you’re advocating for digital rights within your community, please explore the Electronic Frontier Alliance and consider joining.

This interview has been lightly edited for length and readability.

A Smattering of Stars in Argentina's First "Who Has Your Back?" ISP Report

It’s Argentina's turn to take a closer look at the practices of their local Internet Service Providers, and how they treat their customers’ personal data when the government comes knocking.

Argentina's ¿Quien Defiende Tus Datos? (Who Defends Your Data?) is a project of Asociación por los Derechos Civiles and the Electronic Frontier Foundation, and is part of a region-wide initiative by leading Iberoamerican digital rights groups to turn a spotlight on how the policies of Internet Service Providers either advance or hinder the privacy rights of users.

The report is based on EFF's annual Who Has Your Back? report, but adapted to local laws and realities. Last year Brazil’s Internet Lab, Colombia’s Karisma Foundation, Paraguay's TEDIC, and Chile’s Derechos Digitales published their own 2017 reports, and ETICAS Foundation released a similar study earlier this year, part of a series across Latin America and Spain.

The report set out to examine which Argentine ISPs best defend their customers. Which are transparent about their policies regarding requests for data? Do any challenge disproportionate demands for their users’ data? Which require a judicial order before handing over personal data? Do any of the companies notify their users when complying with judicial requests? ADC examined publicly posted information, including the privacy policies and codes of practice, from six of the biggest Argentine telecommunications access providers: Cablevisión (Fibertel), Telefónica (Speedy), Telecom (Arnet), Telecentro, IPLAN, and DirecTV (AT&T). Between them, these providers cover 90% of the fixed and broadband market.

Each company was given the opportunity to answer a questionnaire, to take part in a private interview and to send any additional information if they felt appropriate, all of which was incorporated into the final report. ADC’s rankings for Argentine ISPs are below; the full report, which includes details about each company, is available at:

Evaluation Criteria for ¿Quién Defiende tus Datos?

  1. Privacy Policy: whether its privacy policy is easy to understand, whether it tells users which data is being collected and how long the company stores it, whether users are notified of changes to the privacy policy, whether it publishes a note regarding the right of access to personal data, and whether it explains how that right may be exercised.
  2. Transparency: whether they publish transparency reports that are accessible to the public, and how many requests have been received, compiled and rejected, including details about the type of requests, the government agencies that made the requests and the reasons provided by the authority.
  3. Notification: whether they provide any kind of notification to customers of government data demands, with bonus points if they notify users before handing over the data.
  4. Court Order: whether they require the government to obtain a court order before handing over data, and whether they judicially resist data requests that are excessive or do not comply with legal requirements.
  5. Law Enforcement Guidelines: whether they publish their guidelines for law enforcement requests.
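To make the rubric concrete, here is a minimal sketch (hypothetical criterion keys and weights; the report's actual scoring methodology may differ) of how a five-criterion star rating like this could be tallied, with full and partial stars per criterion:

```python
# Hypothetical tally for a ¿Quién Defiende tus Datos?-style rubric:
# each criterion earns a full star (1.0), a partial star (0.5), or none (0.0).
CRITERIA = [
    "privacy_policy",
    "transparency",
    "notification",
    "court_order",
    "law_enforcement_guidelines",
]

def star_rating(scores: dict) -> float:
    """Sum per-criterion scores into a 0-5 star total; missing criteria score 0."""
    unknown = set(scores) - set(CRITERIA)
    if unknown:
        raise ValueError(f"unknown criteria: {unknown}")
    return sum(scores.get(c, 0.0) for c in CRITERIA)

# Example: a provider with a clear privacy policy, a partial transparency
# report, and a court-order requirement, but no notification or guidelines.
example = {"privacy_policy": 1.0, "transparency": 0.5, "court_order": 1.0}
print(star_rating(example))  # prints 2.5
```

Publishing the rubric alongside the scores is what lets readers check each company's rating against the evidence, rather than taking the stars on faith.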

Companies in Argentina are off to a good start but still have a way to go to fully protect their customers’ personal data and be transparent about who has access to it. ADC and EFF expect to release this report annually to incentivize companies to improve transparency and protect user data. This way, all Argentines will have access to information about how their personal data is used and how it is controlled by ISPs so they can make smarter consumer decisions. We hope next year’s report will shine with more stars.

