EFF's Deeplinks Blog: Noteworthy news from around the internet

So-called “Consent Searches” Harm Our Digital Rights

Thu, 01/14/2021 - 18:56

Imagine this scenario: You’re driving home. Police pull you over, allegedly for a traffic violation. After you provide your license and registration, the officer catches you off guard by asking: “Since you’ve got nothing to hide, you don’t mind unlocking your phone for me, do you?” Of course, you don’t want the officer to copy or rummage through all the private information on your phone. But they’ve got a badge and a gun, and you just want to go home. If you’re like most people, you grudgingly comply.

Police use this ploy, thousands of times every year, to evade the Fourth Amendment’s requirement that police obtain a warrant, based on a judge’s independent finding of probable cause of crime, before searching someone’s phone. These misleadingly named “consent searches” invade our digital privacy, disparately burden people of color, undermine judicial supervision of police searches, and rest on a legal fiction.

Legislatures and courts must act. In highly coercive settings, like traffic stops, police must be banned from conducting “consent searches” of our phones and similar devices.

In less-coercive settings, such “consent searches” must be strictly limited. Police must have reasonable suspicion that crime is afoot. They must collect and publish statistics about consent searches, to deter and detect racial profiling. The scope of consent must be narrowly construed. And police must tell people they can refuse.

Other kinds of invasive digital searches currently rest on “consent,” too. Schools use it to search the phones of minor students. Police also use it to access data from home internet of things (IoT) devices, like Amazon Ring doorbell cameras, that are streamlined for bulk police requests. Such “consent” requests must also be limited.

“Consent” Is a Legal Fiction

The “consent search” end-run around the warrant requirement rests on a legal fiction: that people who say “yes” to an officer’s demand for “consent” have actually consented. The doctrinal original sin is Schneckloth v. Bustamonte (1973), which held that “consent” alone is a legal basis for search, even if the person searched was not aware of their right to refuse. As Justice Thurgood Marshall explained in his dissent:

All the police must do is conduct what will inevitably be a charade of asking for consent. If they display any firmness at all, a verbal expression of assent will undoubtedly be forthcoming.

History has proven Justice Marshall right. Field data show that the overwhelming majority of people grant “consent.” For example, statistics on all traffic stops in Illinois, for 2015, 2016, 2017, and 2018, show that about 85% of white drivers and about 88% of minority drivers grant consent.

Lab data show the same. For example, a 2019 study in the Yale Law Journal, titled “The Voluntariness of Voluntary Consent,” asked each participant to unlock their phone for a search. Compliance rates were 97% and 90%, in two cohorts of about 100 people each.

The study separately asked other people whether a hypothetical reasonable person would agree to unlock their phone for a search. These participants were not themselves asked for consent. Some 86% and 88% of these two cohorts (again about 100 participants each) predicted that a reasonable person would refuse to grant consent. The authors observed that this “empathy gap” appears in many social psychology experiments on obedience. They warned that judges in the safety of their chambers may assume that motorists stopped by police feel free to refuse search requests, when motorists in fact don’t.

Why might people comply with search requests from police when they don’t want to? Many are not aware they can refuse. Many others reasonably fear the consequences of refusal, including longer detention, speeding tickets, or even further escalation, including physical violence. Further, many savvy officers use their word choice and tone to dance on the line between commands—which require objective suspicion—and requests—which don’t.

“Consent Searches” Are Widespread

In October 2020, Upturn published a watershed study about police searches of our phones, called “Mass Extraction.” It found that more than 2,000 law enforcement agencies, located in all 50 states, have purchased surveillance technology that can conduct “forensic” searches of our mobile devices. Further, police have used this tech hundreds of thousands of times to extract data from our phones.

The Upturn study also found that police based many of these searches on “consent.” For example, consent searches account for 38% of all cell phone searches in Anoka County, Minnesota; about one-third in Seattle, Washington; and 18% in Broward County, Florida.

Far more common are “manual” searches, where officers themselves scrutinize the data in our phones, without assistance from external software. For example, there was a ten-to-one ratio of manual searches to forensic searches by U.S. Customs and Border Protection in fiscal year 2017. Manual searches are just as threatening to our privacy. Police access virtually the same data (except some forensic searches recover “deleted” data or bypass encryption). Also, it is increasingly easy for police to use a phone’s built-in search tools to locate pertinent data. As with forensic searches, it is likely that a large portion of manual searches are by “consent.”

“Consent Searches” Invade Privacy

Phone searches are extraordinarily invasive of privacy, as the U.S. Supreme Court explained in Riley v. California (2014). In that case, the Court held that an arrest alone does not absolve police of their ordinary duty to obtain a warrant before searching a phone. Quantitatively, our phones have “immense storage capacity,” including “millions of pages of text.” Qualitatively, they “collect in one place many distinct types of information – an address, a note, a prescription, a bank statement, a video – that reveal much more in combination than any isolated record.” Thus, phone searches “bear little resemblance” to searches of containers like bags that are “limited by physical realities.” Rather, phone searches reveal the “sum of an individual’s private life.”

“Consent Searches” Cause Racial Profiling

There is a greater risk of racial and other bias, intentional or implicit, when decision-makers have a high degree of subjective discretion, compared to when they are bounded by objective criteria. This occurs in all manner of contexts, including employment and law enforcement.

Whether to ask a person for “consent” to search is a high-discretion decision. The officer needs no suspicion at all and will almost always receive compliance.

Predictably, field data show racial profiling in “consent searches.” For example, the Illinois State Police (ISP) in 2019 were more than twice as likely to seek consent to search the cars of Latinx drivers compared to white drivers, yet more than 50% more likely to find contraband when searching the cars of white drivers compared to Latinx drivers. ISP data show similar racial disparities in other years.

When it comes to phone searches, it is highly likely that police likewise seek “consent” more often from people of color. Especially during our growing national understanding that Black Lives Matter, we must end police practices like these that unfairly burden people of color.

“Consent Searches” Undermine Judicial Review of Policing

Judges often examine warrantless searches by police. This occurs in criminal cases, if the accused moves to suppress evidence, and in civil cases, if a searched person sues the officer. Either way, the judge may analyze whether the officer had the requisite level of suspicion. This incentivizes officers to turn square corners while out in the field. And it helps ensure enforcement of the Fourth Amendment’s ban on “unreasonable” searches.

Police routinely evade this judicial oversight through the simple expedient of obtaining “consent” to search. The judge loses their power to investigate whether the officer had the requisite level of suspicion. Instead, the judge may only inquire whether “consent” was genuine.

Given all the problems discussed above, what should be done about police “consent searches” of our phones?

Ban “Consent Searches” in High-Coercion Settings

Legislatures and judges should bar police from searching a person’s phone or similar electronic devices based on consent when the person is in a high-coercion setting. This rule should apply during traffic stops, sidewalk detentions, home searches, station house arrests, and any other encounters with police where a reasonable person would not feel free to leave. This rule should apply to both manual and forensic searches.

Upturn’s 2020 study called for a ban on “consent searches” of mobile devices. It reasons that “the power and information asymmetries of cellphone consent searches are egregious and unfixable,” so “consent” is “essentially a legal fiction.” Further, consent searches are “the handmaiden of racial profiling.”

Civil rights advocates, such as the ACLU, have long demanded bans on “consent searches” of vehicles or persons during traffic stops. In 2003, the ACLU of Northern California settled a lawsuit with the California Highway Patrol, placing a three-year moratorium on consent searches. In 2010, the ACLU of Illinois petitioned the U.S. Department of Justice to ban the use of consent searches by the Illinois State Police. In 2018, the ACLU of Maryland supported a bill to ban them. Of course, searches of phones are even more privacy-invasive than searches of cars.

Strictly Limit “Consent Searches” in Less-Coercive Settings

Outside of high-coercion settings, some people may have a genuine interest in letting police inspect some of the data in their phones. An accused person might wish to present geolocation data showing they were far from the crime scene. A date rape survivor might wish to present a text message from their assailant showing the assailant’s state-of-mind. In less-coercive settings, consent to present such data is more likely to be truly voluntary.

Thus, it is not necessary to ban “consent searches” in less-coercive settings. But even in such settings, legislatures and courts must impose strict limits.

First, police must have reasonable suspicion that crime is afoot before conducting a consent search of a phone. Nearly two decades ago, the Supreme Courts of New Jersey and Minnesota imposed this limit on consent searches during traffic stops. So does a Rhode Island statute. This rule limits the subjective discretion of officers, and thus the risk of racial profiling. Further, it ensures courts may evaluate whether the officer had a criminal predicate before invading a person’s privacy.

Second, police must collect and publish statistics about consent searches of electronic devices, to deter and detect racial profiling. This is a common practice for police searches of pedestrians, motorists, and their effects. Then-State Senator Barack Obama helped pass the Illinois statute that requires this during traffic stops. California and many other states have similar laws.

Third, police and reviewing courts must narrowly construe the scope of a person’s consent to search their device. For example, if a person consents to a search of their recent text messages, a police officer must be barred from searching their older text messages, as well as their photos or social media. Otherwise, every consent search of a phone will turn into a free-ranging inquisition into all aspects of a person’s life. For the same reasons, EFF advocates for a narrow scope of device searches even pursuant to a warrant.

Fourth, before an officer searches a person’s phone by consent, the officer must notify the person of their legal right to refuse. The Rhode Island statute requires this warning, though only for youths. This is analogous to the famous Miranda warning about the right to remain silent.

Other Kinds of “Consent Searches”

Of course, consent searches by police of our phones are not the only kind of consent searches that threaten our digital rights.

For example, many public K-12 schools search students’ phones by “consent.” Indeed, some schools use forensic technology to do so. Given the inherent power imbalance between minor students and their adult teachers and principals, limits on consent searches by schools must be at least as privacy-protective as those presented above.

Also, some companies have built home internet of things (IoT) devices that facilitate bulk consent requests from police to residents. For example, Amazon Ring has built an integrated system of home doorbell cameras that enables local police, with the click of a mouse, to send residents a message requesting footage of passersby, neighbors, and themselves. Once police have the footage, they can use and share it with few limits. Ring-based consent requests may be less coercive than those during a home search. Still, the strict limits above must apply: reasonable suspicion, publication of aggregate statistics, narrow construction of the scope of consent, and notice of the right to withhold consent.

It’s Business As Usual At WhatsApp

Thu, 01/14/2021 - 17:44

WhatsApp users have recently started seeing a new pop-up screen requiring them to agree to its new terms and privacy policy by February 8th in order to keep using the app. 

The good news is that, overall, this update does not make any extreme changes to how WhatsApp shares data with its parent company Facebook. The bad news is that those extreme changes actually happened over four years ago, when WhatsApp updated its privacy policy in 2016 to allow for significantly more data sharing and ad targeting with Facebook. What's clear from the reaction to this most recent change is that WhatsApp shares much more information with Facebook than many users were aware, and has been doing so since 2016. And that’s not users’ fault: WhatsApp’s obfuscation and misdirection around what its various policies allow has put its users in a losing battle to understand what, exactly, is happening to their data.

The new terms of service and privacy policy are one more step in Facebook's long-standing effort to monetize its messaging properties, and are also in line with its plans to make WhatsApp, Facebook Messenger, and Instagram Direct less separate. This raises serious privacy and competition concerns, including but not limited to WhatsApp's ability to share new information with Facebook about users' interactions with new shopping and payment products.

To be clear: WhatsApp still uses strong end-to-end encryption, and there is no reason to doubt the security of the contents of your messages on WhatsApp. The issue here is other data about you, your messages, and your use of the app. We still offer guides for WhatsApp (for iOS and Android) in our Surveillance Self-Defense resources, as well as for Signal (for iOS and Android).

Then and Now

This story really starts in 2016, when WhatsApp changed its privacy policy for the first time since its 2014 acquisition to allow Facebook access to several kinds of WhatsApp user data, including phone numbers and usage metadata (e.g. information about how long and how often you use the app, as well as your operating system, IP address, mobile network, etc.). Then, as now, public statements about the policy highlighted how this sharing would help WhatsApp users communicate with businesses and receive more "relevant" ads on Facebook.

At the time, WhatsApp gave users a limited option to opt out of the change. Specifically, users had 30 days after first seeing the 2016 privacy policy notice to opt out of “shar[ing] my WhatsApp account information with Facebook to improve my Facebook ads and product experiences.” The emphasis is ours; it meant that WhatsApp users were able to opt out of seeing visible changes to Facebook ads or Facebook friend recommendations, but could not opt out of the data collection and sharing itself.

If you were a WhatsApp user in August 2016 and opted out within the 30-day grace period, that choice will still be in effect. You can check by going to the “Account” section of your settings and selecting “Request account info.” The more than one billion users who have joined since then, however, did not have the option to refuse this expanded sharing of their data, and have been subject to the 2016 policy this entire time.

Now, WhatsApp is changing the terms again. The new terms and privacy policy are mainly concerned with how businesses on WhatsApp can store and host their communications. This is happening as WhatsApp plans to roll out new commerce tools in the app like Facebook Shops. Taken together, this renders the borders between WhatsApp and Facebook (and Facebook-owned Instagram) even more permeable and ambiguous. Information about WhatsApp users’ interactions with Shops will be available to Facebook, and can be used to target the ads you see on Facebook and Instagram. On top of the WhatsApp user data Facebook already has access to, this is one more category of information that can now be shared and used for ad targeting. And there’s still no meaningful way to opt-out.

So when WhatsApp says that its data sharing practices and policies haven’t changed, it is correct—and that’s exactly the problem. Those practices and policies have represented an erosion of Facebook’s and WhatsApp’s original promises to keep the apps separate for over four years now, and these new products mean the scope of data that WhatsApp has access to, and can share with Facebook, is only expanding. 

All of this looks different for users in the EU, who are protected by the EU’s General Data Protection Regulation, or GDPR. The GDPR prevents WhatsApp from simply passing on user data to Facebook without the permission of its users. As user consent must be freely given, voluntary, and unambiguous, the all-or-nothing consent framework that appeared to many WhatsApp users last week is not allowed. Tying consent for the performance of a service—in this case, private communication on WhatsApp—to additional data processing by Facebook—like shopping, payments, and data sharing for targeted advertising—violates the “coupling prohibition” under the GDPR.

The Problems with Messenger Monetization

Facebook has been looking to monetize its messaging properties for years. WhatsApp’s 2016 privacy policy change paved the way for Facebook to make money off it, and its recent announcements and changes point to a monetization strategy focused on commercial transactions that span WhatsApp, Facebook, and Instagram.

Offering a hub of services on top of core messaging functionality is not new—LINE and especially WeChat are two long-standing examples of “everything apps”—but it is a problem for privacy and competition, especially given WhatsApp's pledge to remain a “standalone” product from Facebook. Even more dangerously, this kind of mission creep might give those who would like to undermine secure communications another pretense to limit, or demand access to, those technologies.

With three major social media and messaging properties in its “family of companies”—WhatsApp, Facebook Messenger, and Instagram Direct—Facebook is positioned to blur the lines between various services with anticompetitive, user-unfriendly tactics. When WhatsApp bundles new Facebook commerce services around the core messaging function, it bundles the terms users must agree to as well. The message this sends to users is clear: regardless of what services you choose to interact with (and even regardless of whether or when those services are rolled out in your geography), you have to agree to all of it or you’re out of luck. We’ve addressed similar user choice issues around Instagram’s recent update.

After these new shopping and payment features, it wouldn’t be unreasonable to expect WhatsApp to drift toward even more data sharing for advertising and targeting purposes. After all, monetizing a messenger isn’t just about making it easier for you to find businesses; it's also about making it easier for businesses to find you.

Facebook is no stranger to building and then exploiting user trust. Part of WhatsApp’s immense value to Facebook was, and still is, its reputation for industry-leading privacy and security. We hope that doesn’t change any further.  

EFF Welcomes Fourth Amendment Defender Jumana Musa to Advisory Board

Tue, 01/12/2021 - 15:55

Our Fourth Amendment rights are under attack in the digital age, and EFF is proud to announce that human rights attorney and racial justice activist Jumana Musa has joined our advisory board, bringing great expertise to our fight defending users’ privacy rights.

Musa is Director of the Fourth Amendment Center at the National Association of Criminal Defense Lawyers (NACDL), where she oversees initiatives to challenge Fourth Amendment violations and outdated legal doctrines that have allowed the government and law enforcement to rummage, with little oversight or restrictions, through people’s private digital files.

The Fourth Amendment Center provides assistance and training for defense attorneys handling cases involving surveillance technologies like geofencing, Stingrays that track people’s digital locations, facial recognition, and more.

In a recent episode of EFF’s How to Fix the Internet podcast, Musa said an important goal in achieving privacy protections for users is to build case law to remove the “third party doctrine.” This is the judge-created legal tenet that metadata—names of people you called or who called you, websites you visited, or your location—held by third parties like Internet providers, phone companies, or email services, isn’t private and therefore isn’t protected by the Fourth Amendment. Police are increasingly using spying tools in criminal investigations to gather metadata in whole communities or during protests, Musa said, a practice that disproportionately affects Black and Indigenous people, and communities of color.

Prior to joining NACDL, Ms. Musa was a policy consultant for the Southern Border Communities Coalition, comprised of over 60 groups across the southwest organized to help immigrants facing brutality and abuse by border enforcement agencies and support a human immigration agenda.

Previously, as Deputy Director for the Rights Working Group, a national coalition of civil rights, civil liberties, human rights, and immigrant rights advocates, Musa coordinated the “Face the Truth” campaign against racial profiling. She was also the Advocacy Director for Domestic Human Rights and International Justice at Amnesty International USA, where she addressed the domestic and international impact of U.S. counterterrorism efforts on human rights. She was one of the first human rights attorneys allowed to travel to the naval base at Guantanamo Bay, Cuba, and served as Amnesty International's legal observer at military commission proceedings on the base. 

Welcome to EFF, Jumana!

Face Surveillance and the Capitol Attack

Tue, 01/12/2021 - 14:11

After last week’s violent attack on the Capitol, law enforcement is working overtime to identify the perpetrators. This is critical to accountability for the attempted insurrection. Law enforcement has many, many tools at their disposal to do this, especially given the very public nature of most of the organizing. But we object to one method reportedly being used to determine who was involved: law enforcement using facial recognition technologies to compare photos of unidentified individuals from the Capitol attack to databases of photos of known individuals. There are just too many risks and problems in this approach, both technically and legally, to justify its use. 

Government use of facial recognition crosses a bright red line, and we should not normalize its use, even during a national tragedy.

EFF Opposes Government Use of Face Recognition

Make no mistake: the attack on the Capitol can and should be investigated by law enforcement. The attackers’ use of public social media to both prepare and document their actions will make the job easier than it otherwise might be.  

But a ban on all government use of face recognition, including its use by law enforcement, remains a necessary precaution to protect us from this dangerous and easily misused technology. This includes a ban on government’s use of information obtained by other government actors and by third-party services through face recognition.

One such service is Clearview AI, which allows law enforcement officers to upload a photo of an unidentified person and, allegedly, get back publicly-posted photos of that person. Clearview has reportedly seen a huge increase in usage since the attack. Yet the faceprints in Clearview’s database were collected, without consent, from millions of unsuspecting users across the web, from places like Facebook, YouTube, and Venmo, along with links to where those photos were posted on the Internet. This means that police are comparing images of the rioters to those of many millions of individuals who were never involved—probably including yours. 

EFF opposes law enforcement use of Clearview, and has filed an amicus brief against it in a suit brought by the ACLU. The suit correctly alleges the company’s faceprinting without consent violates the Illinois Biometric Information Privacy Act (BIPA). 

Separately, police tracking down the Capitol attackers are likely using government-controlled databases, such as those maintained by state DMVs, for face recognition purposes. We also oppose this use of face recognition technology, which matches images collected during nearly universal practices like applying for a driver’s license. Most individuals require government-issued identification or a license but have no ability to opt out of such face surveillance. 

Face Recognition Impacts Everyone, Not Only Those Charged With Crimes 

The number of people affected by government use of face recognition is staggering: from DMV databases alone, roughly two-thirds of the population of the U.S. is at risk of image surveillance and misidentification, with no choice to opt out. Further, Clearview has extracted faceprints from over 3 billion people. This is not a question of “what happens if face recognition is used against you?” It is a question of how many times law enforcement has already done so. 

For many of the same reasons, EFF also opposes government identification of those at the Capitol by means of dragnet searches of cell phone records of everyone present. Such searches have many problems, from the fact that users are often not actually where records indicate they are, to this tactic’s history of falsely implicating innocent people. The Fourth Amendment was written specifically to prevent these kinds of overbroad searches.

Government Use of Facial Recognition Would Chill Protected Protest Activity

Facial surveillance technology allows police to track people not only after the fact but also in real time, including at lawful political protests. Police repeatedly used this same technology to arrest people who participated in last year’s Black Lives Matter protests. Its normalization and widespread use by the government would fundamentally change the society in which we live. It will, for example, chill and deter people from exercising their First Amendment-protected rights to speak, peacefully assemble, and associate with others. 

Countless studies have shown that when people think the government is watching them, they alter their behavior to try to avoid scrutiny. And this burden historically falls disproportionately on communities of color, immigrants, religious minorities, and other marginalized groups.

Face surveillance technology is also prone to error and has already implicated multiple people for crimes they did not commit.

Government use of facial recognition crosses a bright red line, and we should not normalize its use, even during a national tragedy. In responding to this unprecedented event, we must thoughtfully consider not just the unexpected ramifications that any new legislation could have, but the hazards posed by surveillance techniques like facial recognition. This technology poses a profound threat to personal privacy, racial justice, political and religious expression, and the fundamental freedom to go about our lives without having our movements and associations covertly monitored and analyzed.

 

Beyond Platforms: Private Censorship, Parler, and the Stack

Mon, 01/11/2021 - 18:00

Last week, following riots that saw supporters of President Trump breach and sack parts of the Capitol building, Facebook and Twitter made the decision to give the president the boot. That was notable enough, given that both companies had previously treated the president, like other political leaders, as largely exempt from content moderation rules. Many of the president’s followers responded by moving to Parler. This week, the response has taken a new turn: infrastructure companies much closer to the bottom of the technical “stack,” including Amazon Web Services (AWS) and Google’s Android and Apple’s iOS app stores, decided to cut off service not just to an individual but to an entire platform. Parler has so far struggled to return online, partly through errors of its own making, but also because the lower down the technical stack you go, the harder it is to find alternatives or to re-implement capabilities the Internet has taken for granted.

Whatever you think of Parler, these decisions should give you pause. Private companies have strong legal rights under U.S. law to refuse to host or support speech they don’t like. But that refusal carries different risks when a group of companies comes together to ensure that certain speech or speakers are effectively taken offline altogether.

The Free Speech Stack—aka “Free Speech Chokepoints”

To see the implications of censorship choices by deeper stack companies, let’s back up for a minute. As researcher Joan Donovan puts it, “At every level of the tech stack, corporations are placed in positions to make value judgments regarding the legitimacy of content, including who should have access, and when and how.” And the decisions made by companies at varying layers of the stack are bound to have different impacts on free expression.

At the top of the stack are services like Facebook, Reddit, or Twitter: platforms whose decisions about who to serve (or what to allow) are comparatively visible, though still far too opaque to most users. Their responses can be comparatively targeted to specific users and content and, most importantly, do not cut off as many alternatives. For instance, a discussion forum lies close to the top of the stack: if you are booted from such a platform, there are other venues in which you can exercise your speech. These are the sites and services that all users (both content creators and content consumers) interact with most directly. They are also the places people think of when they think of the content (i.e., “I saw it on Facebook”). Users are often required to have individual accounts, or are advantaged if they do. Users may also specifically seek out the sites for their content. The closer to the user end, the more likely it is that sites will have more developed and apparent curatorial and editorial policies and practices, their "signature styles." And users typically have an avenue, flawed as it may be, to communicate directly with the service.

At the other end of the stack are internet service providers (ISPs), like Comcast or AT&T. Decisions made by companies at this layer of the stack to remove content or users raise greater concerns for free expression, especially when there are few if any competitors. For example, it would be very concerning if the only broadband provider in your area cut you off because they didn’t like what you said online, or what someone else whose name is on the account said. The adage “if you don’t like the rules, go elsewhere” doesn’t work when there is nowhere else to go.

In between are a wide array of intermediaries, such as upstream hosts like AWS, domain name registrars, certificate authorities (such as Let’s Encrypt), content delivery networks (CDNs), payment processors, and email services. EFF has a handy chart of some of those key links between speakers and their audience here. These intermediaries provide the infrastructure for speech and commerce, but many have only the most tangential relationship to their users. Faced with a complaint, an intermediary will find takedown much easier and cheaper than a nuanced analysis of a given user’s speech, much less of the speech that might be hosted by a company that is a user of its services. So these services are more likely to simply cut a user or platform off than do a deeper review. Moreover, in many cases both speakers and audiences will not be aware of the identities of these services and, even if they are, have no independent relationship with them. These services are thus not commonly associated with the speech that passes through them and have no "signature style" to enforce.

Infrastructure Takedowns Are Equally If Not More Likely to Silence Marginalized Voices

We saw a particularly egregious example of an infrastructure takedown just a few months ago, when Zoom made the decision to block a San Francisco State University online academic event featuring prominent activists from Black and South African liberation movements, the advocacy group Jewish Voice for Peace, and controversial figure Leila Khaled—inspiring Facebook and YouTube to follow suit. The decision, which Zoom justified on the basis of Khaled’s alleged ties to a U.S.-designated foreign terrorist organization, was apparently made following external pressure.

Although we have numerous concerns with the manner in which social media platforms like Facebook, YouTube, and Twitter make decisions about speech, we viewed Zoom’s decision differently. Companies like Facebook and YouTube, for good or ill, include content moderation as part of the service they provide. Since the beginning of the pandemic in particular, however, Zoom has been used around the world more like a phone company than a platform. And just as you don’t expect your phone company to start making decisions about who you can call, you don’t expect your conferencing service to start making decisions about who can join your meeting.

Just as you don’t expect your phone company to start making decisions about who you can call, you don’t expect your conferencing service to start making decisions about who can join your meeting.

It is precisely for this reason that Amazon’s ad-hoc decision to cut off hosting to social media alternative Parler, in the face of public pressure, should be of concern to anyone worried about how decisions about speech are made in the long run. In some ways, the ejection of Parler is neither a novel nor a surprising development. Firstly, it is by no means the first instance of moderation at this level of the stack. Prior examples include Amazon denying service to WikiLeaks and the entire nation of Iran. Secondly, the domestic pressure on companies like Amazon to disentangle themselves from Parler was intense, and for good reason. After all, in the days leading up to its removal by Amazon, Parler played host to outrageously violent threats against elected politicians from its verified users, including lawyer L. Lin Wood.

But infrastructure takedowns nonetheless represent a significant departure from the expectations of most users. First, they are cumulative, since all speech on the Internet relies upon multiple infrastructure hosts.  If users have to worry about satisfying not only their host’s terms and conditions but also those of every service in the chain from speaker to audience—even though the actual speaker may not even be aware of all of those services or where they draw the line between hateful and non-hateful speech—many users will simply avoid sharing controversial opinions altogether. They are also less precise. In the past, we’ve seen entire large websites darkened by upstream hosts because of a complaint about a single document posted. More broadly, infrastructure level takedowns move us further toward a thoroughly locked-down, highly monitored web, from which a speaker can be effectively ejected at any time.

Going forward, we are likely to see more cases that look like Zoom’s censorship of an academic panel than we are Amazon cutting off another Parler. Nevertheless, Amazon’s decision highlights core questions of our time: Who should decide what is acceptable speech, and to what degree should companies at the infrastructure layer play a role in censorship?

At EFF, we think the answer is both simple and challenging: wherever possible, users should decide for themselves, and companies at the infrastructure layer should stay well out of it. The firmest, most consistent, approach infrastructure chokepoints can take is to simply refuse to be chokepoints at all. They should act to defend their role as a conduit, rather than a publisher. Just as law and custom developed a norm that we might sue a publisher for defamation, but not the owner of the building the publisher occupies, we are slowly developing norms about responsibility for content online. Companies like Zoom and Amazon have an opportunity to shape those norms—for the better or for the worse.

Internet Policy and Practice Should Be User-Driven, Not Crisis-Driven

It’s easy to say today, in a moment of crisis, that a service like Parler should be shunned. After all, people are using it to organize attacks on the U.S. Capitol and on Congressional leaders, with an expressed goal to undermine the democratic process. But when the crisis has passed, pressure on basic infrastructure, as a tactic, will inevitably be re-used against unjustly marginalized speakers and forums. This is not a slippery slope, nor a tentative prediction—we have already seen this happen to groups and communities that have far less power and resources than the President of the United States and the backers of his cause. And this facility for broad censorship will not be lost on foreign governments who wish to silence legitimate dissent either. Now that the world has been reminded that infrastructure can be commandeered to make decisions to control speech, calls for it will increase, and principled objections may fall by the wayside.

Over the coming weeks, we can expect to see more decisions like these from companies at all layers of the stack. Just today, Facebook removed members of the Ugandan government in advance of Tuesday’s elections in the country, out of concerns for election manipulation. Some of the decisions that these companies make may be well-researched, while others will undoubtedly come as the result of external pressure and at the expense of marginalized groups.

The core problem remains: regardless of whether we agree with an individual decision, these decisions overall have not and will not be made democratically and in line with the requirements of transparency and due process. Instead, they are made by a handful of individuals, in a handful of companies, the most distanced from and least visible to most Internet users. Whether you agree with those decisions or not, you will not be a part of them, nor be privy to their considerations. And unless we dismantle the increasingly centralized chokepoints in our global digital infrastructure, we can anticipate an escalating battle between political factions and nation states to seize control of their powers.


The FCC and States Must Ban Digital Redlining

Mon, 01/11/2021 - 17:22

The rollout of fiber broadband will never reach many communities in the US. That’s because large, national ISPs are currently laying fiber primarily for high-income users, to the detriment of the rest of their customers. The absence of regulators has created a situation where wealthy end users are getting fiber, while predominantly low-income users are not being transitioned off legacy infrastructure. The result is “digital redlining” of broadband: wealthy broadband users get the benefits of cheaper and faster Internet access through fiber, while low-income broadband users are left behind with more expensive, slower access from that same carrier. We have seen this type of economic discrimination in the past in other venues, such as housing, and it is happening now with 21st-century broadband access.

It doesn’t have to be this way. Federal, state, and local governments have a clear role in promoting non-discriminatory deployment and have historically enforced rules to prevent unjust discrimination. States and local governments have power through franchise authority to prohibit unjust discrimination through build-out requirements. In fact, it is already illegal in California to discriminate based on income status, as EFF noted in its comments to the state’s regulator. And cities that hold direct authority over ISPs can require non-discrimination, as New York City did last year when it required Verizon to deploy 500,000 more fiber connections to low-income users.

That’s why dozens of organizations have asked the incoming Biden FCC to directly confront digital redlining after the FCC reverses the Trump-era deregulation of broadband providers and restores their common carriage obligations. For the last three years, the FCC has abandoned its authority to address these systemic inequalities, causing it to sit out the pandemic at a time when dependence on broadband is sky-high. It is time to treat broadband as being as important as water and electricity, and to ensure that, as a matter of law, everyone gets the access they deserve.

What the Data Is Showing Us on Fiber in Cities

A great number of people in cities that can be served fiber in a commercially feasible manner (that is, you can build it and make a profit without government subsidies) are still on copper DSL networks. Studies of major metropolitan areas such as Oakland and Los Angeles County are showing systemic discrimination against low-income users in fiber deployment despite high population density, and because income can often serve as a proxy for race, this falls particularly hard on neighborhoods of color.

Other studies conducted by the Communications Workers of America and the National Digital Inclusion Alliance have found that this digital redlining is systemic across AT&T’s footprint, with only one-third of AT&T wireline customers connected to its fiber. In fact, not only is AT&T not deploying fiber to all of its customers over time; it is now preparing to disconnect its copper DSL customers, leaving them no choice but an unreliable mobile connection.

There are no good reasons for this discrimination to continue. For example, Oakland has an estimated 7,000+ people per square mile, which is far above the density needed to finance at least one city-wide fiber network. Tightly packed populations are ideal for broadband providers because they have to invest less in infrastructure to reach a large number of paying customers: 7,000 users per square mile is far more than a provider needs for fiber to pay for itself. Chattanooga, which currently has its fiber deployed by the local government, has a population density of only 1,222 people per square mile. Since the Chattanooga government ISP publicly reports its finances in great detail, we can see its extremely rosy numbers (chart below) with a fraction of the density of Oakland and many other underserved cities. In fact, rural cooperatives are doing gigabit fiber at 2.4 people per square mile.

EFF assembled this chart based on publicly reported data by EPB available at the following link (https://epb.com/about-epb/leadership-annual-reports)

In other words, there are no good reasons for this discrimination. If governments require carriers to deploy fiber in a non-discriminatory way, they will still make a profit. The question really boils down to whether we are going to allow incrementally higher profits from discrimination to continue despite historically enforcing laws against such practices.

There Are Concrete Ramifications for Broadband Affordability If We Do Not Resolve Digital Redlining of Fiber in Major Cities

The pandemic has shown us that broadband is not equally accessible at affordable prices even in our major cities. The fast-food parking lot where those little girls had to do their homework (picture below), an image that caught media attention, wasn’t in a rural part of America; it was in Salinas, California, a city with a population density of 6,490 people per square mile.

Photo was taken at a Taco Bell in Salinas, California (https://www.ktvu.com/news/photo-of-girls-using-taco-bell-wifi-becomes-symbol-of-digital-divide)

 

Those kids probably had some basic Internet access, but it was likely too expensive and too slow to handle remote education. And that should come as no surprise given that a recent comprehensive study by the Open Technology Institute found that the United States has, on average, the most expensive and slowest Internet among modern economies.

The lack of ubiquitous fiber infrastructure limits the government’s ability to support efforts to deliver access to those of limited income. When you don’t have fiber in those neighborhoods, all you can do is what the city of Salinas did, which was pay for an expensive, slow mobile hotspot. Meanwhile, Chattanooga is able to give 100/100 Mbps broadband access to all of its low-income families for free for 10 years for around $8 million. Since the fiber is already built and connected throughout the city, and because it is very cheap to add people to the fiber network once built, it costs the city an average of only $2-$3 per month per child to give 28,000 kids free fast Internet at cost (that is, without making a profit). If we want to make free fast Internet a reality, we need infrastructure that can keep costs low enough to realistically deliver it.
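To make that arithmetic concrete, here is a minimal sketch (in Python) of the per-child cost implied by the figures cited above. The roughly $8 million total, 28,000 students, and 10-year term come from this post; everything else is simple division.

```python
# Rough per-child cost of Chattanooga's free 100/100 Mbps program,
# using only the approximate figures cited in this post.
total_cost_usd = 8_000_000   # ~program cost over the whole period
students = 28_000            # kids receiving free access
years = 10
months = years * 12

cost_per_child_per_month = total_cost_usd / students / months
print(f"~${cost_per_child_per_month:.2f} per child per month")
# Prints roughly $2.38, consistent with the $2-$3 figure above.
```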

The massive discrepancy between fiber and non-fiber is due to the fact that the older networks are getting more expensive to run and can’t cheaply add a bunch of new users for higher speed needs. No amount of subsidy will change the physical limitations of those networks on top of the fact that they are getting more expensive to maintain due to their obsolescence. 

The future of all things wireless and wireline in broadband runs through fiber infrastructure. It is a universal medium that is unifying the 21st century Internet because it has the ability to scale up capacity far ahead of expected growth in demand in a cost-effective way. It has orders of magnitude greater potential and capacity than any other wireline or wireless medium that transmits data. Our technical analysis concluded that the 21st century Internet is one where all Americans are connected to fiber, and we are actively supporting efforts in DC to pass a universal fiber plan as well as efforts in states like California. But for major cities, the lack of ubiquitous fiber is not due to a lack of government spending; it stems from the lack of regulatory enforcement of non-discrimination.

 

The Government Has All of the Powers It Needs to Find and Prosecute Those Responsible for the Crimes on Capitol Hill This Week

Mon, 01/11/2021 - 10:54

Perpetrators of the horrific events that took place at the Capitol on January 6 had a clear goal: to undermine the legitimate operations of government, to disrupt the peaceful transition of power, and to intimidate, hurt, and possibly kill those political leaders that disagree with their worldview. 

These are all crimes that can and should be investigated by law enforcement. Yet history provides the clear lesson that immediate legislative responses to an unprecedented national crime or a deeply traumatic incident can have profound, unforeseen, and often unconstitutional consequences for decades to come. Innocent people—international travelers, immigrants, asylum seekers, activists, journalists, attorneys, and everyday Internet users—have spent the last two decades contending with the loss of privacy, government harassment, and exaggerated sentencing that came along with the PATRIOT Act and other laws passed in the wake of national tragedies.

Law enforcement does not need additional powers, new laws, harsher sentencing mandates, or looser restrictions on the use of surveillance measures to investigate and prosecute those responsible for the dangerous crimes that occurred. Moreover, we know from experience that any such new powers will inevitably be used to target the most vulnerable members of society or be used indiscriminately to surveil the broader public. 

EFF has spent the last three decades pushing back against overbroad government powers—in courts, in Congress, and in state legislatures—to demand an end to unconstitutional surveillance and to warn against the dangers of technology being abused to invade people’s rights. To take just a few present examples: we continue to fight against the NSA’s mass surveillance programs, the exponential increase of surveillance power used by immigration enforcement agencies at the U.S. border and in the interior, and the inevitable creep of military surveillance into everyday law enforcement to this day. The fact that we are still fighting these battles shows just how hard it is to end unconstitutional overreactions.

Policymakers must learn from those mistakes. 

First, Congress and state lawmakers should not even consider any new laws or deploying any new surveillance technology without first explaining why law enforcement’s vast, existing powers are insufficient. This is particularly true given that it appears that the perpetrators of the January 6 violence planned, organized, and executed their acts in the open, and that much of the evidence of their crimes is publicly available.

Second, lawmakers must understand that any new laws or technology will likely be turned on vulnerable communities as well as those who vocally and peacefully call for social change. Elected leaders should not exacerbate ongoing injustice via new laws or surveillance technology. Instead, they must work to correct the government’s past abuse of powers to target dissenting and marginalized voices.   

As Representative Barbara Lee said in 2001, “Our country is in a state of mourning. Some of us must say, let’s step back for a moment . . . and think through the implications of our actions today so that this does not spiral out of control.” Surveillance, policing, and exaggerated sentencing—no matter their intention—always end up being wielded against the most vulnerable members of society. We urge President-elect Joe Biden to pause and to not let this moment contribute to that already startling inequity in our society. 

YouTube and TikTok Put Human Rights In Jeopardy in Turkey

Sat, 01/09/2021 - 16:25

Democracy in Turkey is in a deep crisis. Its ruling party, led by Recep Tayyip Erdoğan, systematically silences marginalized voices, shuts down dissident TV channels, sentences journalists, and disregards the European Court of Human Rights decisions. As we wrote in November, in this oppressive atmosphere, Turkey’s new Social Media Law has doubled down on previous online censorship measures by requiring sites to appoint a local representative who can be served with content removal demands and data localization mandates. This company representative would also be responsible for maintaining the fast response turnaround times to government requests required by the law.

The pushback against the requirements of the Social Media Law was initially strong. Facebook, Instagram, Twitter, Periscope, YouTube, and TikTok had not appointed representatives when the law was first introduced late last year. But now, two powerful platforms have capitulated, despite the law’s explicit threat to users’ fundamental rights: The first one, YouTube, announced on December 16th, followed by TikTok last Friday and DailyMotion just today, January 9th. This decision creates a bad precedent that will make it harder for other companies to fight back.

Both companies now plan to set up a “legal entity” in Turkey, providing a local government point of contact. Even though both announcements promise that the platforms will not change their content review or data handling or holding practices, it is not clear how YouTube or TikTok will challenge or stand against the Turkish government once they agree to set up legal shops on Turkish soil. The move by YouTube (and the lack of transparency around its decision) is particularly disappointing given the importance of the platform for political speech and over a decade of attempts to control YouTube content by the Turkish government. 

The Turkish administration and courts have long attempted to punish sites like YouTube and Twitter that do not comply with takedown orders to their satisfaction. With a local legal presence, government officials can not only throttle or block sites; they could force platforms to arbitrarily remove perfectly legal political speech, disclose political activists’ data, or be complicit in government-sanctioned human rights violations. Arbitrary political arrests and detentions are increasingly common inside the country, affecting everyone from information security professionals to journalists, doctors, and lawyers. A local employee of an Internet company in such a hostile environment could, quite literally, be a hostage to government interests.

Reacting to Friday’s news about TikTok, Yaman Akdeniz, one of the founders of the Turkish Freedom of Expression Association, told EFF:

“TikTok is completely misguided about Turkey Internet-related restrictions and government demands. The company will become part of the problem and can become complicit in rights violations in Turkey.”

Chilling Effects on Freedom of Expression 

Turkey’s government has been working to create ways to control foreign Internet sites and services for many years. Under the new Social Media Law, failure to appoint a representative leads to stiff fines, an advertisement ban, and throttling of the provider’s bandwidth. According to the law, the Turkish Information and Communication Technologies Authority (Bilgi Teknolojileri ve İletişim Kurumu or BTK) can issue a five-phase set of fines. BTK has already sanctioned social media platforms that did not appoint local representatives by imposing two initial sets of fines, on November 4 and December 11, of TRY10 million ($1.3 million) and TRY30 million ($4 million) respectively. Facing these fines, YouTube, and now TikTok, blinked. 

If platforms do not appoint a representative by January 17, 2021, BTK can prohibit Turkish taxpayers from placing ads on, and making payments to, the provider’s platform. If the provider still refuses to appoint a representative by April 2021, BTK can apply to a Criminal Judgeship of Peace to throttle the provider’s bandwidth, initially by 50%. If the provider still hasn’t appointed a representative by May 2021, BTK can apply for a further bandwidth reduction; this time, the judgeship can order throttling of anywhere between 50% and 90%.
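For readers tracking the deadlines, here is a minimal sketch (in Python) that lays out the escalation schedule described above as a plain data structure. The structure and variable names are illustrative only; the dates and sanctions are drawn from this post, not from the law's text.

```python
# Escalation under Turkey's Social Media Law for providers without a
# local representative, as described in this post (illustrative only).
escalation_schedule = [
    ("2020-11-04", "First fine: TRY 10 million (~$1.3 million)"),
    ("2020-12-11", "Second fine: TRY 30 million (~$4 million)"),
    ("2021-01-17", "Ban on Turkish taxpayers placing ads on or paying the platform"),
    ("2021-04",    "BTK may ask a Criminal Judgeship of Peace to throttle bandwidth by 50%"),
    ("2021-05",    "Court may order further throttling, anywhere from 50% to 90%"),
]

for date, sanction in escalation_schedule:
    print(f"{date}: {sanction}")
```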

The Turkish government should refrain from imposing disproportionate sanctions on platforms given their significant chilling effect on freedom of expression. Moreover, throttling, which means that locals in Turkey do not have access to social media sites, is effectively a technical ban to access such sites and services—an inherently disproportionate measure. 

Human Rights Groups Fight Back

EFF stands with the Turkish Freedom of Expression Association (Tr. İfade Özgürlüğü Derneği), Human Rights Watch, and Article 19 in their protest against YouTube. In a joint letter, they urge YouTube to reverse its decision and stand firm against the Turkish government’s pressure. The letter urgently asks YouTube to clarify how the company intends to respect the rights to freedom of expression and privacy of its users in Turkey, and whether it can publish the Human Rights Impact Assessment that led to the decision to appoint a representative office, which can be served with content takedown notifications.

YouTube, a Google subsidiary, has a corporate responsibility to uphold freedom of expression as guided by the UN Guiding Principles on Business and Human Rights, a global standard of “expected conduct for all business enterprises wherever they operate.” The Principles exist independently of States’ willingness to fulfill their human rights obligations and do not diminish such commitments. And they exist over and above compliance with national laws and regulations protecting human rights.

YouTube’s Newly Precarious Position

Under YouTube’s Community Guidelines, lawful content is removed only if it violates the site’s rules. Content is removed within a particular country only if it violates the laws of that country, as determined by YouTube’s lawyers. Google’s Transparency Report for Turkey shows that the company took no action on 46.6% of the Turkish government’s takedown requests.

Those declined orders demonstrate how overbroad and politicized Turkey’s takedown process has become, and how important YouTube’s freedom to challenge such orders has been. In one request, the Turkish government sought the removal of videos showing officials attacking Syrian refugees trying to cross the Turkey-Greece border; YouTube removed only 1 of the 34 videos, because only that one violated its community rules. In another instance, YouTube received a BTK request, and then a court order, to remove 84 videos that criticized high-level government officials; YouTube blocked access to seven, users deleted 16, and 61 remained on the site. In yet another example, YouTube declined to take down 242 videos allegedly related to an individual affiliated with law enforcement.

With a local representative in place, YouTube will find it much harder to resist arbitrary orders, and harder to live up to its responsibilities as a member of the Global Network Initiative and under international human rights law.

Social media companies must uphold international human rights law when it conflicts with local laws. The UN Special Rapporteur on free expression has called upon companies to recognize human rights law, not domestic laws, as the authoritative global standard for freedom of expression on their platforms. Likewise, the UN Guiding Principles on Business and Human Rights provide that companies must respect human rights and avoid contributing to human rights violations. This becomes especially important in countries where democracy is most fragile. The Global Network Initiative’s implementation guidelines were written to cover cases where its corporate members operate in countries whose local law conflicts with human rights. YouTube has given no public indication of how it would seek to meet its GNI commitments given its changed relationship with Turkey.

Tech companies have come under increasing criticism for flouting or ignoring local laws, or for treating non-U.S. countries with little understanding of the local context. In many cases, local compliance and representation by powerful multinational companies can be a positive step.

But Turkey is not one of those cases. Its ruling party has undermined democratic pluralism, an independent judiciary, and the separation of powers in recent years. The lack of checks and balances creates an oppressive atmosphere and results in the total absence of due process. Compliance with local Turkish law can mean becoming an arm of an increasingly totalitarian State and being complicit in its human rights violations.

Arbitrary Blocking and Content Removal

EngelliWeb (Eng. BlockedWeb), a comprehensive project initiated by the Turkish Freedom of Expression Association, keeps statistical records of censored content and reports on it. In one instance, EngelliWeb reported that one of its stories (itself reporting the blocking of another site) became the subject of a court order requiring that the story be blocked and removed. The Association has announced it will object to the court decision. Another access-blocking decision is striking because the same judge who ruled in favor of the defendant in a defamation lawsuit also ordered access blocked to a news story about that lawsuit on the grounds of “violation of personal rights.” These examples show there is no justified reason, much less a legal one, to censor such news in Turkey.

According to Yaman Akdeniz, an academic and one of the founders of the Turkish Freedom of Expression Association:

“Turkish judges issue approximately 12,000 blocking and removal decisions each year, and over 450,000 websites and 140,000 URL are currently blocked from Turkey according to our EngelliWeb research. In YouTube’s case, access to over 10,000 YouTube videos is currently blocked from Turkey. In the absence of due process and independent judiciary, almost all appeals involving such decisions are rejected by same level judges without proper legal scrutiny. In the absence of due process, YouTube and any other social media platform provider willing to come to Turkey, risk becoming the long arm of the Turkish judiciary. 

...the Constitutional Court has become part of the problems associated with the judiciary and does not swiftly decide individual applications involving Internet-related blocking and removal decisions. Even when the Constitutional Court finds a violation as in the cases of Wikipedia, Sendika.Org, and others, the lower courts constantly ignore the decisions of the Constitutional Court which then diminish substantially the impact of such decisions.”

Social media companies should not give in to this pressure. If they comply with this law, Turkey’s authoritarian government wins without a fight. YouTube and TikTok should not lead the retreat.

California City’s Effort to Punish Journalists For Publishing Documents Widely Available Online is Dangerous and Chilling, EFF Brief Argues

Fri, 01/08/2021 - 16:49

As part of their jobs, journalists routinely dig through government websites to find newsworthy documents and share them with the broader public. Journalists and Internet users understand that publicly available information on government websites is not secret and that, if government officials want to protect information from being disclosed online, they shouldn’t publicly post it on the Internet.

But a California city is ignoring these norms and trying to punish several journalists for doing their jobs. The city of Fullerton claims that the journalists, who write for a digital publication called Friends for Fullerton’s Future, violated federal and state computer crime laws by accessing documents publicly available to any Internet user. Not only is the city’s civil suit a transparent attempt to cover up its own poor Internet security practices, it also threatens to chill valuable and important journalism. That’s why EFF, along with the ACLU and the ACLU of Southern California, filed a friend-of-the-court brief in a California appellate court this week in support of the journalists.

The city sued two journalists and Friends for Fullerton’s Future based on several claims, including an allegation that they violated California’s Comprehensive Computer Data Access and Fraud Act when they obtained and published documents officials posted to a city file-sharing website that was available to anyone with an Internet connection. For months, the city made the file-sharing site available to the public without a password or any other access restrictions and used it to conduct city business, including providing records to members of the public who requested them under the California Public Records Act.

Even though officials took no steps to limit public access to the city’s file-sharing site, they nonetheless objected when the journalists published publicly available documents that officials believed should not have been public or the subject of news stories. And instead of taking steps to ensure the public did not have access to sensitive government documents, the city is trying to stretch the California computer crime law, known as Section 502, to punish the journalists.

EFF’s amicus brief argues that the city’s interpretation of California’s Section 502, which was intended to criminalize malicious computer intrusions and is similar to the federal Computer Fraud and Abuse Act, is wrong as a legal matter and that it threatens to chill the public’s constitutionally protected right to publish information about government affairs.

The City contends that journalists act “without permission,” and thus commit a crime under Section 502, by accessing a particular City controlled URL and downloading documents stored there—notwithstanding the fact that the URL is in regular use in City business and has been disseminated to the general public. The City claims that an individual may access a publicly available URL, and download documents stored in a publicly accessible account, only if the City specifically provides that URL in an email addressed to that particular person. But that interpretation of “permission” produces absurd—and dangerous—results: the City could choose arbitrarily to make a criminal of many visitors to its website, simply by claiming that it had not provided the requisite permission-email to the Visitor.

The city’s interpretation of Section 502 also directly conflicts with “the longstanding open-access norms of the Internet,” the brief argues. Because Internet users understand that they have permission to access information posted publicly on the Internet, the city must take affirmative steps to restrict access via technical barriers before it can claim a Section 502 violation.

The city’s broad interpretation of Section 502 is also dangerous because, if accepted, it would threaten a great deal of valuable journalism protected by the First Amendment.

The City’s interpretation would permit public officials to decide—after making records publicly available online (through their own fault or otherwise)—that accessing those records was illegal. Under the City’s theory, it can retroactively revoke generalized permission to access publicly available documents as to a single individual or group of users once it changes its mind or is simply embarrassed by the documents’ publication. The City could then leverage that revocation of permission into a violation of Section 502 and pursue both civil and criminal liability against the parties who accessed the materials.

Moreover, the “City’s broad reading of Section 502 would chill socially valuable research, journalism, and online security and anti-discrimination testing—activity squarely protected by the First Amendment,” the brief argues. The city’s interpretation of Section 502 would jeopardize important investigative reporting techniques that in the past have uncovered illegal employment and housing discrimination.

Finally, EFF’s brief argues that the city’s interpretation of Section 502 violates the U.S. Constitution’s due process protections because it would fail to give Internet users adequate notice that they were committing a crime while simultaneously giving government officials vast discretion to decide when to enforce the law against Internet users. 

The City proposes that journalists perusing a website used to disclose public records must guess whether particular documents are intended for them or not, intuit the City’s intentions in posting those documents, and then politely look the other way—or be criminally liable. This scheme results in unclear, subjective, and after-the-fact determinations based on the whims of public officials. Effectively, the public would have to engage in mind reading to know whether officials approve of their access or subsequent use of the documents from the City’s website.

The court should reject the city’s arguments and ensure that Section 502 is not abused to retaliate against journalists, particularly because the city is seeking to punish these reporters for its own computer security shortcomings. Publishing government records available to every Internet user is good journalism, not a crime, and using computer crime laws to punish journalists for obtaining documents available to every Internet user is dangerous—and unconstitutional.

ACLU, EFF, and Tarver Law Offices Urge Supreme Court to Protect Against Forced Disclosure of Phone Passwords to Law Enforcement

Fri, 01/08/2021 - 13:41
Does the Fifth Amendment Protect You from Being Forced to Reveal Your Passwords to Police?

Washington, D.C. - The American Civil Liberties Union (ACLU) and the Electronic Frontier Foundation (EFF), along with New Jersey-based Tarver Law Offices, are urging the U.S. Supreme Court to ensure the Fifth Amendment protection against self-incrimination extends to the digital age by prohibiting law enforcement from forcing individuals to disclose their phone and computer passcodes.

“The Fifth Amendment protects us from being forced to give police a combination to a wall safe. That same protection should extend to our phone and computer passwords, which can give access to far more sensitive information than any wall safe could,” said Jennifer Granick, ACLU surveillance and cybersecurity counsel. “The Supreme Court should take this case to ensure our constitutional rights survive in the digital age.”

In a petition filed Thursday and first reported by The Wall Street Journal, the ACLU and EFF are asking the U.S. Supreme Court to hear Andrews v. New Jersey. In this case, a prosecutor obtained a court order requiring Mr. Robert Andrews to disclose the passwords to two cell phones. Mr. Andrews fought the order, citing his Fifth Amendment privilege. Ultimately, the New Jersey Supreme Court held that the privilege did not apply to the disclosure or use of the passwords.

“There are few things in constitutional law more sacred than the Fifth Amendment privilege against self-incrimination,” said Mr. Andrews’ attorney, Robert L. Tarver, Jr. “Up to now, our thoughts and the content of our minds have been protected from government intrusion. The recent decision of the New Jersey Supreme Court highlights the need for the Supreme Court to solidify those protections.”

The U.S. Supreme Court has long held, consistent with the Fifth Amendment, that the government cannot compel a person to respond to a question when the answer could be incriminating. Lower courts, however, have disagreed on the scope of the right to remain silent when the government demands that a person disclose or enter phone and computer passwords. This confusing patchwork of rulings has resulted in Fifth Amendment rights depending on where one lives, and in some cases, whether state or federal authorities are the ones demanding the password.

“The Constitution is clear: no one ‘shall be compelled in any criminal case to be a witness against himself,’” said EFF Senior Staff Attorney Andrew Crocker. “When law enforcement requires you to reveal your passcodes, they force you to be a witness in your own criminal prosecution. The Supreme Court should take this case to settle this critical question about digital privacy and self-incrimination.”

For the full petition:
https://www.eff.org/document/petition-writ-certiorari-andrews-v-new-jersey

Contact:
Andrew Crocker, Senior Staff Attorney, andrew@eff.org
Mark Rumold, Senior Staff Attorney, mark@eff.org