EFF's Deeplinks Blog: Noteworthy news from around the internet

Why Slow Networks Really Cost More Than Fiber

Thu, 06/04/2020 - 17:18

A myth often pushed by incumbents who want to forestall universal fiber is that fiber carries a high “cost” compared to cheaper alternatives. They’ll point to figures showing, for example, that a fiber-to-the-home approach costs several thousand dollars per household, when upgrading a copper DSL line or cable line can be done at a fraction of the price. They then use this “cheaper networks” argument to push government subsidies and public investments toward a short-sighted focus on building networks with slow speeds but broader coverage. What they leave out is that, despite the appearance of lower costs, they are in fact setting these government programs up for an exorbitant amount of long-run waste, which translates into company profit. If the government incentivizes building networks that are fast for the future, not just fast enough for today, we will all save a lot of money over the long term.

Slow Speed Networks Have Limits Hard Baked into Them Due to Physics

Cable and DSL networks, absent investment in fiber optics, are not getting any faster, and they are deteriorating after decades of use. EFF has written this technical analysis explaining in great detail why this is the case. In short, the capacity of those wires to transmit data has real-world limits, whereas fiber optic wires have a capacity that our network technology has not even begun to reach. In cable and DSL networks, copper wires can only carry so much data over so much distance. Any significant improvement in the future will involve replacing big sections of the copper cables with fiber optics. But for pure fiber networks, the limit on capacity isn’t the cables; it’s the transmitters and receivers at each end. Once the fiber is in place, we’ll be able to upgrade these networks for years to come without having to bury any new cables. Because of the massive capacity differences, there is a “speed chasm” between legacy networks and fiber networks, and this plays directly into the true costs of choosing to incrementally upgrade an old network or switch over to all fiber.
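The “speed chasm” follows directly from Shannon’s capacity theorem: achievable bitrate scales with usable bandwidth, and fiber offers orders of magnitude more of it than twisted copper. Here is an illustrative sketch; the bandwidth and SNR figures are rough assumptions for illustration, not measurements of any real plant:

```python
import math

def shannon_capacity_mbps(bandwidth_mhz, snr_db):
    """Shannon limit C = B * log2(1 + SNR); Mbps out for MHz in."""
    snr = 10 ** (snr_db / 10)          # convert dB to a linear ratio
    return bandwidth_mhz * math.log2(1 + snr)

# Rough, assumed figures for illustration only:
# a long copper loop with ~10 MHz of usable spectrum at 30 dB SNR...
print(round(shannon_capacity_mbps(10, 30)))          # ~100 Mbps ceiling
# ...versus a fiber band spanning roughly 4 THz at a modest 20 dB SNR:
print(round(shannon_capacity_mbps(4_000_000, 20)))   # tens of millions of Mbps
```

The copper ceiling is fixed by the wire; the fiber ceiling is so far away that only the electronics at each end matter.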

To give some real-world examples of the growing speed chasm between our two highest-speed broadband options, we can look at the development of cable systems and fiber-to-the-home in the few places where Americans are lucky enough to have both. In 2015, Chattanooga’s EPB, the local government ISP, launched 10 gigabit download and upload networks, whereas cable systems transitioned to gigabit download (with substantially slower upload) only a little over a year ago. So, not only were the cable systems four years behind, they were also offering 1/10th of the speed. But here is the bigger issue that policymakers have to understand: it also cost EPB an exceedingly tiny amount of money to increase their network’s capacity ten-fold. All they had to do was swap out the networking hardware in their system when new devices were developed. We detail EPB’s financials below from their public reports when they transitioned to 10 gigabit networks, and the extra spending they needed to upgrade is practically invisible. In fact, the entire upgrade was 100% financed (with healthy profits) by affordable user-subscription fees with no price increases. Cable systems cannot do this (and DSL certainly cannot).

The Number of Years a Wire Is Useful Is Directly Related to Its Cost

Let’s break this down with some numbers. Suppose someone tells you that you can spend $500 per household to give everyone 25 Mbps/3 Mbps (the woefully outdated federal definition of broadband), or you can spend $5,000 per household to give everyone symmetrical gigabit (1,000 Mbps/1,000 Mbps). You may conclude that the first option saves the most money, since the 25/3 offering is cheaper to build. You may think it will be easy to recover a $500 investment from subscribers, while $5,000 per household seems daunting and too expensive. But you would be wrong, and here is why.

In any decision in building out a broadband network, we must also factor in its usefulness and capacity to handle the projected growth of consumption. For years without fail, data consumption has continued to rise as more applications and services require greater amounts of capacity. Cisco publishes these global trends with their annual reports. Check out the North American numbers below, and note that these projections did not account for COVID-19, which has only accelerated usage trends.

These estimates show that an average household’s regular Internet usage in 2020 already exceeds 25 Mbps on the download side, and video conferencing alone pushes way past 3 Mbps on uploads. On average, people are going through hundreds of gigabytes of data per month, and that number will continue to increase.
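As a back-of-envelope check on those consumption figures, here is a sketch converting a sustained rate into monthly volume; the streaming rates and hours are illustrative assumptions, not Cisco’s numbers:

```python
def gb_per_month(mbps, hours_per_day, days=30):
    """Gigabytes per month from a sustained rate in megabits/second."""
    seconds = hours_per_day * 3600 * days
    megabits = mbps * seconds
    return megabits / 8 / 1000  # 8 bits per byte, 1000 MB per GB

# e.g., one HD video stream (~5 Mbps) for 4 hours a day:
print(gb_per_month(5, 4))   # 270.0 GB/month
# two streams plus browsing (~12 Mbps) for 4 hours a day:
print(gb_per_month(12, 4))  # 648.0 GB/month
```

Even modest streaming habits land a household in the “hundreds of gigabytes” range, which is why a 25/3 connection is already behind actual usage.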

So if you chose now, in 2020, to finance a broadband network delivering the federal standard of 25/3 as its top speed, you have already built an obsolete network that cannot even keep up with normal usage of the Internet today. How many households could you get to recoup that $500 investment? The answer is zero, unless you have a monopoly. In places that have at least two choices for broadband Internet, market analysts are finding that eventually no one will subscribe to DSL or advanced DSL networks, because those networks are not keeping up with consumption. That leaves high-speed cable and fiber-to-the-home as the only real choices in broadband today, at least for download purposes. The choices narrow even more if you’re prioritizing upload-based uses, like video conferencing: only fiber-to-the-home works for these, until cable systems can be upgraded to be symmetrical. In essence, if you build a $500 connection delivering a basic slow speed, no one will willingly pay for it unless given no other choice. Furthermore, the network will eventually become so slow that it can’t make real use of the Internet, much like dialup today. In the end, you’ve spent $500 per household with potentially no willing buyers. That’s a total loss.

Now take the same scenario and apply the $5,000 fiber investment. If you are patient with the pace at which the infrastructure investment has to be recovered, you can space out the number of years you are willing to wait to recover that $5,000. If you can wait ten years, it is around $42 per month plus interest. If you can wait twenty years, it is a little over $20 per month plus interest. The more years you add, the lower that monthly payment can be stretched, and you get to use the network the whole time. And you have a network that can not only serve the needs of today, but also handle the level of demand expected tomorrow and in the distant future, with very little additional money needed to upgrade. It can also be used to simultaneously deliver 5G and support a whole ecosystem of wireless companies as follow-on users to help cover your costs.

When data consumption starts requiring 10 gigabit/10 gigabit connections, your same $5,000 investment remains useful; in fact, it had that capacity ready five years ago. When average data consumption reaches 100 gigabit symmetrical, or terabit symmetrical well into the future, your exact same network remains useful and ready for the challenge. You will still have willing buyers for its capacity because it remains relevant. And, given that broadband is an essential service that people will need their entire lives, you will have a dedicated funding source. Even if it takes you 30 years to recover your costs, that still makes financial sense, because fiber is expected to remain useful for decades past those 30 years. That $5,000 cost can also be seen as roughly $10 per month over less than half of the asset’s useful life, whereas the $500 spent on a slower network will end up being a $500 loss, leaving you with a network that needs to be replaced by fiber anyway.
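The cost-recovery arithmetic above is just amortization. A minimal sketch, using simple division for the zero-interest case and the standard fixed-rate payment formula otherwise (the 4% rate in the last line is an illustrative assumption, not a figure from the article):

```python
def monthly_payment(principal, years, annual_rate=0.0):
    """Monthly payment to recover `principal` over `years`.

    Zero rate is simple division; otherwise this is the standard
    fixed-rate amortization formula.
    """
    n = years * 12
    if annual_rate == 0.0:
        return principal / n
    r = annual_rate / 12                       # monthly rate
    return principal * r / (1 - (1 + r) ** -n)

print(round(monthly_payment(5000, 10), 2))        # 41.67 (the "~$42" above)
print(round(monthly_payment(5000, 20), 2))        # 20.83 (the "~$20" above)
print(round(monthly_payment(5000, 30), 2))        # 13.89
print(round(monthly_payment(5000, 20, 0.04), 2))  # with an assumed 4% rate
```

Stretching the recovery window is what makes the fiber build affordable: the asset outlives even a 30-year payback period.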

When Metrics Are Set Appropriately High, the Government Saves Money

North Dakota’s experience should be instructive for other state and federal policymakers. Sixty percent of its households and businesses already have access to fiber-to-the-home. An analysis by the consulting firm Conexon, which specializes in rural fiber built by rural cooperatives, found that while states across the country received tens of millions of dollars from the federal government for broadband, almost all of them lack dense fiber networks, with the exception of a state like North Dakota.

Blue areas represent high-speed coverage. Source: Broadband map based on Conexon analysis of government spending on broadband

How did that happen? How did nearly identical amounts of public investment in broadband yield such massive discrepancies? It boils down to the decisions the local governments and small private ISPs in North Dakota made with that government money. The federal government had extremely low expectations for how the subsidies should be spent, sometimes approving projects as slow as 10 Mbps/1 Mbps. Companies like AT&T and Frontier spent those dollars to slightly improve services on their legacy copper networks on the cheap to reach those low numbers, rather than using them to discount a transition to fiber. Frontier ignored profitable opportunities to deploy fiber for so long that it is now undergoing bankruptcy due to its neglect.

Meanwhile, in North Dakota, the local governments and small private ISPs decided to invest those federal dollars, along with their own matching local investments, into building out fiber networks. Now those networks are being paid off over the long run, and local, state, and federal governments no longer need to come back and spend money to replace anything there. Those fiber networks will be able to offer symmetrical 10 gigabit services, 100 gigabit services, and terabit services well into the 21st century. And they will be able to do so without government subsidies, financed instead by typical monthly payments from users. Meanwhile, we’re going to have to spend an estimated $80 billion more on the rest of the country that lacks fiber-to-the-home, because we didn’t require fiber in the first place. In essence, every dollar the government has spent to help ISPs slightly increase their speeds has been a waste, given that those slow speeds are no longer relevant or useful to consumers, or are rapidly approaching that cliff. Those networks have hit their limits, and only a replacement with fiber can yield further advancements. If policy had required states to spend the billions the government provided on a transition to fiber 10 years ago, we would look more like South Korea today, as opposed to being behind close to a dozen EU nations, the advanced Asian markets, and China. It’s time for us to catch up with the rest of the world and invest in smart Internet infrastructure.

How to Identify Visible (and Invisible) Surveillance at Protests

Thu, 06/04/2020 - 17:10

The full weight of U.S. policing has descended upon protesters across the country as people take to the streets to denounce the police killings of Breonna Taylor, George Floyd, and countless others who have been subjected to police violence. Along with riot shields, tear gas, and other crowd control measures also comes the digital arm of modern policing: prolific surveillance technology on the street and online.

For decades, EFF has been tracking police departments’ massive accumulation of surveillance technology and equipment. You can find detailed descriptions and analysis of common police surveillance tech at our Street-Level Surveillance guide. As we continue to expand our Atlas of Surveillance project, you can also see what surveillance tech law enforcement agencies in your area may be using. 

If you’re attending a protest, don’t forget to take a look at our Surveillance Self-Defense guide to learn how to keep your information and digital devices secure when attending a protest. 

Here is a review of surveillance technology that police may be deploying against ongoing protests against racism and police brutality.

Surveillance Tech That May Be Visible
Body-Worn Cameras

Officers wearing new body cams for the first time. Source: Houston Police Department

Unlike many other forms of police technology, body-worn cameras may serve both a law enforcement function and a public accountability function. Body cameras worn by police can deter and document police misconduct and use of force, but footage can also be used to surveil both people that police interact with and third parties who might not even realize they are being filmed. If combined with face recognition or other technologies, thousands of police officers wearing body-worn cameras could record the words, actions, and locations of much of the population at any given time, raising serious First and Fourth Amendment concerns. For this reason, California placed a moratorium on the use of face recognition technology on mobile police devices, including body-worn cameras.

Axon Flex camera system. Source: TASER Training Academy presentation for Tucson Police Department

Body-worn cameras come in many forms. Often they are square boxes on the front of an officer’s chest. Sometimes they are mounted on the shoulder. In some cases, the camera may be partially concealed under a vest, with only the lens visible. Companies are also marketing tactical glasses that include a camera and face recognition; we have not seen these deployed in the United States yet.

A body-worn camera lens is visible between the buttons on a Laredo Police officer's vest. Source: Laredo Police Department Facebook


Drones

Sahuarita Police Department displays its drones on a table. Source: Town of Sahuarita YouTube

Drones are unmanned aerial vehicles that can be equipped with high definition, live-feed video cameras, thermal infrared video cameras, heat sensors, automated license plate readers, and radar—all of which allow for sophisticated and persistent surveillance. Drones can record video or still images in daylight or use infrared technology to capture such video and images at night. They can also be equipped with other capabilities, such as cell-phone interception technology, as well as back-end software tools like license plate readers, face recognition, and GPS trackers. There have been proposals for law enforcement to attach lethal and less-lethal weapons to drones.

Drones vary in size, from tiny quadrotors (also known as Small Unmanned Aerial Vehicles or sUAVs) to large fixed aircraft, such as the Predator Drone. They are harder to spot than airplane or helicopter surveillance, because they are smaller and quieter, and they can sometimes stay in the sky for a longer duration. 

Activists and journalists may also deploy drones in a protest setting, exercising their First Amendment rights to gather information about police response to protestors. So if you do see a drone at a protest, you should not automatically conclude that it belongs to the police.

Automated License Plate Readers

Photo by Mike Katz-Lacabe (CC BY)

Automated license plate readers (ALPRs) are high-speed, computer-controlled camera systems that can be mounted on street poles, streetlights, highway overpasses, mobile trailers, or attached to police squad cars. ALPRs automatically capture all license plate numbers that come into view, along with the location, date, and time. The data, which includes photographs of the vehicle and sometimes its driver and passengers, is then uploaded to a central server.

Photo by Mike Katz-Lacabe (CC BY)

At a protest, police can deploy ALPRs to identify people driving toward, away from, or parking near a march, demonstration, or other public gathering. For example, CBP deployed an ALPR trailer at a gun show attended by Second Amendment supporters. Used in conjunction with other ALPRs around a city, police could track protesters’ movements as they travel from the demonstration to their homes.
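To see why this data is sensitive, consider the kind of record a single plate read produces. This is a hypothetical sketch; the field names are illustrative, not any vendor’s actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlateRead:
    plate: str             # plate number as read by OCR
    latitude: float        # where the read occurred
    longitude: float
    captured_at: datetime  # date and time of capture
    image_path: str        # photo of the vehicle; may include occupants

# One read looks innocuous; millions of them on a central server form
# a searchable history of a vehicle's movements.
read = PlateRead("7ABC123", 37.7749, -122.4194,
                 datetime(2020, 6, 4, 17, 18), "scans/0001.jpg")
print(read.plate, read.captured_at)
```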

Mobile Surveillance Trailers/Towers

A 'Mobile Utility Surveillance Tower' at San Diego Comic-Con and a mobile surveillance pole in New Orleans' French Quarter

Hundreds of police departments around the country have mobile towers that can be parked and raised a number of stories above a protest. These are often equipped with cameras, spotlights, speakers, and sometimes have small enclosed spaces for an officer. They also often have ALPR capabilities. 

Common towers include the Terrahawk M.U.S.T., which looks like a guard tower mounted on a van, and the Wanco surveillance tower, which is a truck trailer with a large extendable pole.

FLIR Cameras

Forward-looking infrared (FLIR) cameras are thermal cameras that read body heat, allowing a person to be surveilled at night. These cameras can be handheld; mounted on a car, rifle, or helmet; and are often used in conjunction with aerial surveillance platforms such as planes, helicopters, or drones.

Surveillance Tech That May Not Be Visible
Face Recognition (or other Video Analytics)

Face recognition in the field from a San Diego County presentation

Face recognition is a method of identifying or verifying the identity of an individual using their face. Face recognition systems can be used to identify people in photos, video, or in real-time. Law enforcement may also use mobile devices to identify people during police stops.

At a protest, any camera you encounter may have face recognition or other video analytics enabled. This includes police body cameras and cameras mounted on buildings, streetlights, or surveillance towers.

Also, some police departments have biometric devices, such as specialized smartphones and tablets, that show the identity of individuals in custody. Likewise, face recognition can occur during the booking process at jails and holding facilities. 

Social Media Monitoring 

Social media monitoring is prevalent, especially surrounding protests. Police often scour hashtags, public events, digital interactions and connections, and digital organizing groups. This can be done either by actual people or by an algorithm trained to collect social media posts containing certain hashtags, words, phrases, or geolocation tags. 
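The algorithmic version of this can be remarkably simple. A hypothetical sketch of hashtag-based collection follows; the watch list and posts are invented examples, not any agency’s actual tooling:

```python
# Keyword-based post filtering: the kind of trivial matching an
# automated monitoring tool might run over a public feed.
WATCHED_TAGS = {"#protest", "#march"}

def flagged(posts):
    """Return posts containing any watched hashtag."""
    return [p for p in posts if WATCHED_TAGS & set(p.lower().split())]

posts = [
    "Heading to the #protest downtown",
    "Great weather today",
]
print(flagged(posts))  # only the first post is collected
```

Anything this cheap to run can be applied to every public post, which is why geotags and hashtags deserve caution during protests.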

EFF and other organizations have long called on social media platforms like Facebook to prohibit police from using covert social media accounts under fake names.  Pseudonyms such as “Bob Smith” have long allowed police to infiltrate private Facebook groups and events under false pretenses. 

Cell-Site Simulators

Cell-site simulators, also known as IMSI catchers, Stingrays, or dirtboxes, are devices that masquerade as legitimate cell-phone towers, tricking phones within a certain radius into connecting to the device rather than to a real tower.

Police may use cell-site simulators to identify all of the IMSIs (International Mobile Subscriber IDs) at a protest or other physical place. Once they identify the phones’ IMSIs, they can then try to identify the protesters who own these phones. In the non-protest context, police also use cell-site simulators to identify the location of a particular phone (and its owner), often with greater accuracy than they could do with phone company cell site location information. 

Real-time Crime Centers 

Fresno Police Department's Real-time Crime Center. Source: Fresno PD Annual Report 2015

Real-time crime centers (RTCCs) are command centers where officers and analysts monitor a variety of surveillance technologies and data sources across a community. RTCCs often provide a central location for analyzing ALPR feeds, social media, and camera networks, and give analysts the ability to run predictive algorithms.

Technical Excellence and Scale

Thu, 06/04/2020 - 16:42

In America, we hope that businesses will grow by inventing amazing things that people love – rather than through deep-pocketed catch-and-kill programs in which every competitor is bought and tamed before it can grow to become a threat. We want vibrant, competitive, innovative markets where companies vie to create the best products. Growth solely through merger-and-acquisition helps create a world in which new firms compete to be bought up and absorbed into the dominant players, and customers who grow dissatisfied with a product or service and switch to a "rival" find that they're still patronizing the same company—just another division.

To put it bluntly: we want companies that are good at making things as well as buying things.

This isn't the whole story, though.

Small companies with successful products can become victims of their own success. As they are overwhelmed by eager new customers, they are strained beyond their technical and financial limits – for example, they may be unable to buy server hardware fast enough, and unable to lash that hardware together in efficient ways that let them scale up to meet demand.

When we look at the once small, once beloved companies that are now mere divisions of large, widely mistrusted ones—Instagram and Facebook; YouTube and Google; Skype and Microsoft; Dark Sky and Apple—we can't help but notice that they are running at unimaginable scale, and moreover, they're running incredibly well.

These services were once plagued with outages, buffering delays, overcapacity errors, slowdowns, and a host of other evils of scale. Today, they run so well that outages are newsworthy events.

There's a reason for that: big tech companies are really good at being big. Whatever you think of Amazon, you can't dispute that it gets a lot of parcels from A to B with remarkably few bobbles. Google's search results arrive in milliseconds, Instagram photos load as fast as you can scroll them, and even Skype is far more reliable than in the pre-Microsoft days. These services have far more users than they ever did as independents, and yet, they are performing better than they did in those early days.

Can we really say that this is merely "buying things" and not also "making things?" Isn't this innovation? Isn't this technical accomplishment? It is. Does that mean big = innovative? It does not.

Operationalizing, scaling and maintaining services with millions (or billions!) of users is incredibly hard and requires real technical excellence. It's one thing to "move fast and break things," but mature products that people rely on need maintenance from people whose motto is "work deliberately and fix things." Monopolists that have found themselves in antitrust's crosshairs were accused of a long list of sins, but they are rarely accused of technical incompetence.

Rail barons moved a lot of freight. Standard Oil pumped a lot of crude. Alcoa refined a lot of aluminum. A&P sold a lot of groceries. The studio system made a lot of movies and got them onto a lot of screens. AT&T successfully connected a lot of phone calls.

When you're a monopolist, being good at bigness comes with the territory. If being good at scale was a defense against antitrust claims, virtually every monopolist would be off the hook.

Over the past three decades, U.S. antitrust law has adopted a narrow focus on "consumer harm" (effectively, "Did this company raise prices in the short term after buying its competitor?"). Today, lawmakers, regulators, and scholars are revisiting antitrust and asking whether it's time to bring back our stronger trustbusting traditions.

Tech is ripe for antitrust disruption: as an industry, its focus has shifted from growth through innovation to growth through acquisition (albeit while innovating on product deployment at scale). Regulators can and should subject tech's mergers and acquisitions to skeptical scrutiny, and revisit the mergers that created a Web of five giant sites filled with screenshots of text from the other four.

This is bound to rouse defenses of Big Tech based on excellence in bigness, and on its corollary: that without Big Tech, the YouTubes, Skypes, and Instagrams of the world would be doomed to endless brownouts, Fail Whales, and buffering errors.

The rejoinder is obvious, and it comes from the tech giants themselves: inevitably, the keystones of their technical excellence at scale come from . . . acquisitions. Tech companies buy hardware startups. Cluster management startups. Cloud startups. Customer service startups.

The companies that might otherwise be offering scalable computing, storage, and management tools to startups that are drowning in their own success are, often as not, already part of the tech giants.

Growing gracefully is hard, but it's not impossible. A vibrant, competitive market in growth support systems is the scaffolding we need to support the innovators who do manage to delight and surprise so many customers that they grow so fast that they are in danger of toppling.

Protecting Your Privacy if Your Phone is Taken Away

Thu, 06/04/2020 - 14:05

Your phone is your life. It’s where you communicate, get your news, take pictures and videos of your loved ones, relax and play games, and find a significant other. It can track your health, give you directions, remind you of events, and much more. It’s an incredibly helpful tool, but it can also be used against you by malicious actors. It’s important to know what your phone contains and how it can also make you vulnerable to attacks.

Your threat model is unique and personal. And you will have to decide which solutions are the best for you. The best protection is to avoid creating the opportunity for an attacker to gain physical access to your phone or its metadata. The safest solution would be not to bring your phone to high-risk activities, such as protesting, but this might not be feasible for everyone.

What could someone without access to your phone know about you?

Without any physical access to your phone by an attacker, you might think your privacy is safe. However, your phone constantly communicates with cell towers to be able to transfer data (for your browsing or apps), or receive and send text messages or calls. To do this, the network needs to know which cell phone tower is giving you coverage. In other words, the network knows where you are. This allows parties with access to location data held by your service provider to discover your location and movements.

To protect against this:

  • Airplane mode will disable communication with the cellular network.  If your phone is not talking to the cellular network, its location can’t be tracked that way. Make sure WiFi and Bluetooth are also disabled since they could also leak information. However, this will also mean you won’t be able to use data or get messages or calls.
  • Avoid using SMS or regular phone calls. These aren’t encrypted and, along with your location, can be seen by your service provider and be intercepted with the use of IMSI catchers. Use secure messaging instead, like Signal.
What can someone with physical access to your phone know?

With physical access to your phone, an attacker can get all of the data stored on it. This includes your messages, photos, browsing history, and apps, but also much more, such as:

  • Phone call history
  • Messages: This includes SMS/MMS and any other messaging apps that you have.
  • Calendar and notes
  • Photos/videos/audio
  • Passwords, if stored insecurely, or if the attacker also has access to your password manager (This could be possible if you used a weak master password, thumbprint, or Face ID, or your password manager was unlocked when the police seized your device.)
  • Account logins
  • Cloud data and backups
  • Deleted data: Even if you deleted something from your phone, it can still live on in many places in memory and logs, and it can be recovered. Do not rely on something being deleted.
  • App-switching screenshots: When you switch or close an app, many devices show an overview of the running apps and what they are or were doing. To achieve this, the device creates a screenshot of the last thing happening on screen within each app. That screenshot is stored and can be retrieved by an attacker. Some apps obfuscate this, but most do not. This can expose encrypted messages, passwords, or other private information.
  • Location: Your phone constantly logs details that reveal your movements, such as WiFi access points you’ve joined, logs from your cell phone service, and the coordinates recorded when you take a photo. Many apps use your current location to provide “relevant” search results, weather updates, and more.
  • Logs: Your phone and its apps keep all sorts of files logging what they did, along with errors and crashes. All of this information is stored and can reveal how you used your phone, who you contacted, and where you were. It’s a vast list that provides a wealth of information to an attacker.

Needless to say, you need to protect your data and access to your phone. The best way to do so is with full-disk encryption enabled and a strong password. Not all devices are equal, so verify that your device offers full-disk encryption. The latest versions of Android and iOS offer full-disk encryption by default, but to get its full benefit you will have to set a strong password. Do not use passcodes (numbers only) or weak passwords, since there are many tools that can break them easily. If your phone has an SD card, it may contain information that is not encrypted by your device.
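To see why numeric passcodes fall so easily, compare search spaces. A rough sketch; the guessing rate is a hypothetical assumption, and real cracking hardware varies widely:

```python
def search_space(alphabet_size, length):
    """Total number of possible passwords of a given length."""
    return alphabet_size ** length

# A 6-digit PIN vs. a 10-character mixed-case alphanumeric password:
pin = search_space(10, 6)        # 1,000,000 possibilities
password = search_space(62, 10)  # ~8.4e17 possibilities

# At an assumed 10,000 guesses per second:
rate = 10_000
print(pin / rate)                           # 100.0 seconds to exhaust
print(password / rate / (3600 * 24 * 365))  # millions of years
```

A PIN’s entire keyspace can be exhausted in minutes, while a long mixed-character password pushes brute force out of reach, which is the whole argument for a strong password over a passcode.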

Some courts have found that you can be forced, without your consent, to unlock a phone protected with a biometric such as face or fingerprint identification, so it is advisable not to enable either option. If this is not feasible, turning your device off will, on most devices, require the password when it is turned back on.

Be careful with cloud backups. Although useful for restoring your apps, messages, and images, they can also provide an avenue for an attacker to get your data. Or, if the attacker already has access to your phone, they could use your backups to recover older information, like backed-up photos and messages. If you can, disable access to them during high-risk scenarios.

  1. Enable full-disk encryption on your device with a strong password.
  2. Disable Face ID and fingerprint unlock.
  3. Disable cloud backups.
  4. Turn off your phone.

To learn more about how to secure your digital life, we have compiled advice at

What if you get your phone back?

Suppose your phone was taken by the attacker and you managed to recover it at a later stage. What should you do?

If you can afford it and your threat model includes it: get a new phone.


  • Change all of your passwords.
  • Verify whether there has been access to your accounts. (Some email providers and social media sites show the list of IPs that accessed your account.)
  • Factory reset your phone. Make sure to verify what this means for your particular device: some devices wipe the master encryption key, while others keep some data. You need to wipe all of the data.
  • Sign into your phone with a new Apple ID/Google account to avoid loading potentially compromised cloud backups.

EFF Offering Assistance with Attorney Referrals for Protesters

Thu, 06/04/2020 - 12:15

In light of the current protests across the country against racism and police brutality, we want to call attention to EFF’s attorney referral services. We are opening up our Cooperating Attorneys list to people facing legal troubles as a result of their participation in the ongoing demonstrations, especially those involving surveillance or devices such as phones. We urge anyone in such a position to contact us for help in finding representation.

Our referral list comprises hundreds of lawyers from around the nation who share an interest in our issues. Like EFF, the attorneys on this list focus on issues where technology and the law intersect, so we especially encourage those whose arrests involved digital rights issues to contact us. For instance, if you believe your phone’s contents were accessed and stored after arrest, or if you are a journalist being compelled to share your footage with law enforcement, we want to hear from you.

Protesters and reporters in need of legal assistance in relation to their protesting activities should reach out to us at We treat requests that come in to us as confidential, and always protect the identities and information of those who come to us for legal assistance.

Again, there are attorneys from around the country on the list, so we accept requests for assistance from anywhere in the US. We cannot guarantee that we can find you pro bono representation, so we still encourage those in need to seek representation through all avenues available to them. However, many of the attorneys on the list are very interested in helping, and we will certainly do our best to find you pro bono assistance.

If you are an attorney who wants to join this list, we want to hear from you, too! Send us an email at and we’ll get you added.

EFF stands with communities taking to the streets and exercising their First Amendment rights to protest police brutality and racism. We are glad to offer this referral service during this critical moment.

Please also note that EFF’s Intake Department is open Monday through Thursday 9am to 5pm, but we will do our best to process your request as quickly as possible.

EFF Files Amicus Brief with Top French Court to Bring Down Controversial Avia Bill

Wed, 06/03/2020 - 11:21

Legislative efforts to regulate online platforms are underway in many countries. Unfortunately, instead of reflecting about how to put users back in control of their online experiences and how to foster innovation, many governments are opting to make online platforms into the new speech police.

The French Avia Bill is an example of such privatized enforcement: it forces social media platforms to take down content which could qualify as illegal hate speech within 24 hours, or as quickly as within an hour of its reporting, depending on the type of speech involved. The new legal act against hate speech will have a profound impact on users’ freedom of speech, and may inspire the EU’s ongoing work to reform the rules governing online platforms through the so-called Digital Services Act.

On May 18, 60 French senators filed a challenge with the French Supreme Court against the Avia Bill before its promulgation, and after it passed in the National Assembly on May 13, 2020.  The Court has to issue a decision by June 18, 2020. EFF teamed up with the French American Bar Association (FABA) and Nadine Strossen, the John Marshall Harlan II Professor of Law, Emerita at New York Law School, to file an amicus brief [PDF, in French] with the French Supreme Court.

We argue that this bill is unconstitutional because the take down timing requirements will cause over-censorship of perfectly legal speech. The bill imposes an unconstitutional prior restraint regime over speech, and leads to a privatization of police power. The bill also conflicts with the European Union’s Directive on Electronic Commerce, which was already brought up against the French government by the European Commission prior to the Bill’s adoption. 

The authors of the amicus brief warn the Court’s justices that the Avia Bill represents the continuation of a failed public and criminal censorship policy which has been unable to remedy the so-called social harms that hate speech has been claimed to generate.

When the Senate Talks About the Internet, Note Who They Leave Out

Tue, 06/02/2020 - 19:10

In the midst of pandemic and protest, the Senate Judiciary Committee continued on with the third of many planned hearings about copyright. It is an odd moment to be considering whether the notice-and-takedown system laid out by Section 512 of the Digital Millennium Copyright Act is working, but since Section 512 is a cornerstone of the Internet, and because protestors and those at home trying to avoid disease depend on the Internet, we watched it.

There was not a lot said at the hearing that we have not heard before. Major media and entertainment companies want Big Tech companies to implement copyright filters. Notice and takedown is burdensome to them, and they believe that technologists surely have a magic solution to the complicated problem of reconciling free expression with copyright that they simply have not implemented because Section 512 doesn’t require them to.

Artists have real problems and real concerns. In many sectors, including publishing and music, profits are high, but after the oligopolists of media and technology have taken their cut, there’s little left for artists. But the emphasis on Section 512 as the problem is misplaced and doesn’t ultimately serve artists. Before the DMCA created a way to take down material by sending a notice to platforms, the only remedy was to go to court. DMCA takedowns, by comparison, are as simple as sending email—or hiring an outside company to send emails on an artist’s behalf. The call for more Internet speech to be taken down automatically, on the algorithmic decision of some highly mistrusted tech monopolists, and without even an unproven allegation of infringement, is calling for a remedy without a process. It is calling for legal, protected expression to be in danger.

Artists are angry, as so many are, at Big Tech. But Big Tech can already afford to do the things that rightsholders want. And large rightsholders—like Hollywood studios and major music labels—likewise have an interest in taking down as much as they can, be it protected fair uses of their works or true infringement. That places Internet users in between the rock of Big Tech and the hard place of major entertainment companies. Artists and Internet users deserve alternatives to both Big Tech and major entertainment companies. Requiring tech companies to have filters, to search out infringement on their own, or any proposals requiring tech companies to do more will only solidify the positions of companies like Google and Facebook, which can afford to do these measures, and create more barriers for new competitors.

As Meredith Rose, Policy Counsel at Public Knowledge, said during the hearing:

This is not about content versus tech. I am here to speak about how Section 512 impacts the more than 229 million American adults who use the Internet as more than just a delivery mechanism for copyrighted content. They use it to pay bills, to learn, to work, to socialize, to receive healthcare. And yet they are missing from the Copyright Office’s Section 512 report, they are missing from the systems and procedures that govern their rights, and too often they are missing from the debate on Capitol Hill.

 We likewise note the absence of Internet users—a group that grows and grows and, whether they identify themselves as such or not, now includes 90% of Americans.

During the hearing, a witness wondered if there was a generation of artists who will be lost because it is just too difficult to police their copyrights online. This ignores the generation of artists who already share their work online, and who run into so many problems asserting their fair use rights. We note their absence as well.

We have already gone into depth about how the Copyright Office’s report on Section 512—mentioned quite a bit in the hearing—fails to take users and the importance of Internet access into account. Changing the foundation of the Internet, throwing up roadblocks to people expressing themselves online, creating new quasi-courts for copyright, or forcing the creation and adoption of faulty and easily abused automated filters will hurt users. And we are, almost all of us, Internet users.

California: Stand Up to Face Surveillance

Tue, 06/02/2020 - 16:43

EFF has joined a broad coalition of civil liberties, civil rights, and labor advocates to oppose A.B. 2261, which threatens to normalize the increased use of face surveillance of Californians where they live and work. Our allies include the ACLU of California, Oakland Privacy, the California Employment Lawyers Association, Service Employees International Union (SEIU) of California, and California Teamsters.

A.B. 2261 is currently before the Assembly Appropriations Committee. It purports to regulate face surveillance in the name of privacy concerns during this pandemic. In fact, as written, this bill will give a legislative imprimatur to the dangerous and invasive use of face surveillance, by setting weak minimum standards that allow governments and corporations to pay lip service to privacy without actually preventing the harms of face surveillance. The risk is greater now than ever. Government officials already are pushing to use pandemic management tools to surveil and control protests across the country against racism and police brutality.


Stand Up to Face Surveillance

Any bill that smooths a path for increased face surveillance is not the answer, especially as we face the moment’s crises. Several companies and government agencies have proposed expanding the use of this technology in light of the pandemic, even though there is no proof that face surveillance can be a meaningful tool to address the COVID-19 crisis. What is well-documented is how using this technology exacerbates existing biases in policing. It also harms our privacy, by making it impossible to go about our lives without government and corporations monitoring where we go, what we are doing, and who we are with. It also chills our First Amendment rights to gather and protest.

Surveillance infrastructure set up in times of crisis is not easily rolled back. Many governments already employ powerful spying technologies in ways that harm minority communities. This includes spying on the social media of activists, particularly advocates for racial justice such as participants in the Black-led movement for racial and economic justice. Also, police watch lists are often over-inclusive and error-riddled, and cameras often are over-deployed in minority areas—effectively criminalizing entire communities.  If history is any guide, we expect police will engage in racial profiling with face surveillance technology, too.

This is a flawed and dangerous technology at any time, and especially now when its use could further target communities of color who already are disparately impacted by both the pandemic and police violence.

We urge Chairperson Gonzalez and the members of the Assembly Appropriations Committee to stop this bill from moving forward, and to listen to the voices of their constituents who are concerned about the harmful effect it could have on their everyday lives as they seek some sense of normalcy. Californians: please tell your lawmakers to stand against face surveillance.


Join EFF for a Reddit AMA on EARN IT

Tue, 06/02/2020 - 13:32

Over the last few months, the EARN IT Act, a bill that would have disastrous consequences for free speech and security online, has gained thousands of critics. Digital rights advocates and technologists have been joined by human rights groups, online platforms, sex worker advocates, policy organizations and think tanks, and Senators in the effort to stop the bill, which would drastically undermine encryption and violate the Constitution at the same time.

This is no exaggeration: buried in this bill is language that gives government officials like Attorney General William Barr the power to compel online service providers to break encryption or be exposed to potentially crushing legal liability. At a time of increasing necessity for secure and safe messages, encryption is even more important than ever. But the Senators promoting this bill are pushing for an Internet where the law requires every message sent to be read by government-approved scanning software. Companies that handle such messages wouldn’t be allowed to securely encrypt them, or they’d lose legal protections that allow them to operate. 
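The core of the encryption concern can be seen in a toy sketch (ours, not from EFF, and emphatically not real cryptography — real messengers use vetted protocols like the Signal protocol): when a message is encrypted end-to-end, the provider relaying it holds only ciphertext, so there is nothing it could scan without breaking the encryption itself.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy one-time pad: XOR each byte with a random key byte.
    # Illustrative only; do not use for real secrecy.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at the courthouse at noon"
key = secrets.token_bytes(len(message))  # shared only by the two endpoints

ciphertext = xor_cipher(message, key)    # all an intermediary server sees
assert xor_cipher(ciphertext, key) == message  # only key holders can read it
```

A scanning mandate forces the provider to either hold the key (so it is no longer end-to-end) or inspect the plaintext before encryption, which amounts to the same thing.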

We can't let that happen.

Join EFF's technologists, lawyers, activists, and lobbyists on Wednesday, June 3rd, for a Reddit AMA about all things EARN IT, including encryption, keeping online conversations private and secure, and Section 230. 

And for now: please join the tens of thousands of individuals who have let their Senators know that they must stop this plan to scan every message online.



Surveillance Self-Defense: Attending Protests in the Age of COVID-19

Tue, 06/02/2020 - 13:11

In the wake of nationwide protests against the police killings of George Floyd and Breonna Taylor, we urge protestors to stay safe, both physically and digitally. Our Surveillance Self Defense (SSD) Guide on attending a protest offers practical tips on how to maintain your privacy and minimize your digital footprint while taking to the streets.

These demonstrations have taken place against the backdrop of the COVID-19 pandemic, so for many, public health concerns have added an extra dimension to the subtle calculus of when to stay inside and when to engage in street protest. This unique context warrants a “reader's guide” to our normal SSD post on attending a protest.

Many of our tips for preparing to attend a protest remain the same: enable full disk encryption for your device, install an encrypted messenger app such as Signal (for iOS or Android) to communicate with friends, and remove biometric identifiers like fingerprint or FaceID. Under current U.S. law—which is still in flux—using a memorized passcode generally provides a stronger legal footing to push back in court against compelled device unlocking/decryption. Wearing a mask during a protest is certainly more commonplace (and advisable) this year, and it will also impede your ability to unlock your device with FaceID. This is all the more reason to remove that particular unlock mechanism.

The widespread use of face masks has prompted technology companies to increase research and development on novel methods of identifying people from footage. Biometric identifiers that can be observed despite facial covering, such as eyes and cheekbones, are increasing the trackability and surveillability of those on the street. Accordingly, be mindful when taking photos that include protestors and bystanders. Consider blurring out faces and other identifiable features like clothing, colored hair, and tattoos, and remove metadata from those photos before posting them. Taking basic precautions to protect yourselves and the protestors around you goes a long way.
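Stripping metadata before posting can be done with most photo tools, but as an illustration of what is actually being removed, here is a minimal sketch (our own, using only the Python standard library) that drops APP1 segments — the part of a JPEG file where EXIF and XMP metadata, including GPS coordinates, typically live:

```python
import struct

def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Drop APP1 segments (EXIF/XMP, which often carry GPS data) from a JPEG."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG stream"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            out += jpeg_bytes[i:]      # unexpected data: copy the rest and stop
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:             # Start of Scan: image data follows
            out += jpeg_bytes[i:]
            break
        # The segment length field counts itself plus the payload, not the marker
        length = struct.unpack(">H", jpeg_bytes[i + 2 : i + 4])[0]
        if marker != 0xE1:             # keep every segment except APP1
            out += jpeg_bytes[i : i + 2 + length]
        i += 2 + length
    return bytes(out)

# Tiny synthetic JPEG: SOI, fake APP1 (metadata), fake DQT, then SOS + data
jpeg = b"\xff\xd8" + b"\xff\xe1\x00\x04Ex" + b"\xff\xdb\x00\x04QT" + b"\xff\xda\x00\x02scan"
assert b"\xff\xe1" not in strip_exif(jpeg)
```

Note that this only removes metadata; blurring faces, tattoos, and clothing still requires editing the image itself, and other metadata containers (e.g., APP13/IPTC) exist as well.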

If you are buying a prepaid cell phone for the protest, be sure to disinfect the device thoroughly. Digital devices are particularly nasty vectors for the spread of germs, especially if you’re purchasing them second-hand. Follow best practices for cleaning digital devices to stop the spread of the disease.

Caravan protests are gaining popularity as a way to remain safe while amplifying one’s message. While caravans are a good way to protect against COVID-19, they put protestors at increased risk of being tracked by Automated License Plate Readers (ALPRs). Riding a bicycle or walking to the protest will help you avoid invasive tracking technologies like ALPRs. Use the best judgment that applies to your particular threat model.

Gathering in large crowds increases the risk of protestors being targeted with weapons such as tear gas, which can cause severe respiratory problems and even spread COVID-19. If you have a burner device with you, be sure to save the number for an emergency health contact who will be available in case you need them.

At least 40 cities have now imposed curfews limiting the right of movement and protest for their residents. What’s more, at least one state has started appropriating the language of public health in an attempt to extend the application of surveillance technologies to protesters. At EFF, we strongly oppose efforts by the state to use COVID as a justification for extending the use of surveillance technologies that are already disproportionately targeted at communities of color. And as the protests continue, we will be especially vigilant against the government using the justification of “public safety” to introduce more invasive technologies on ordinary citizens.

Stay safe, stay healthy, and stay heard.

The Executive Order Targeting Social Media Gets the FTC, Its Job, and the Law Wrong

Mon, 06/01/2020 - 23:09

This is one of a series of blog posts about President Trump's May 28 Executive Order. Other posts are here, here, and here.

The inaptly named Executive Order on Preventing Online Censorship seeks to insert the federal government into private Internet speech in several ways. In particular, Sections 4 and 5 seek to address possible deceptive practices, but end up being unnecessary at best and legally untenable at worst.

These provisions are motivated in part by concerns, which we share, that the dominant platforms do not adequately inform users about their standards for moderating content, and that their own free speech rhetoric often doesn’t match their practices. But the EO’s provisions either don’t help, or introduce new and even more dangerous problems.

Section 4(c) says, “The FTC (Federal Trade Commission) shall consider taking action, as appropriate and consistent with applicable law, to prohibit unfair or deceptive acts or practices in or affecting commerce, pursuant to section 45 of title 15, United States Code. Such unfair or deceptive acts or practice may include practices by entities covered by Section 230 that restrict speech in ways that do not align with those entities’ public representations about those practices.”

Well, sure. Platforms should be honest about their restriction practices, and held accountable when they lie about them. The thing is, the FTC already has the ability to “consider taking action” about deceptive commercial practices.

But the real difficulty comes with the other parts of this section. Section 4(a) sets out the erroneous legal position that large online platforms are “public forums” that are legally barred from exercising viewpoint discrimination and have little ability to limit the categories of content that may be published on their sites. As we discuss in detail in our post dedicated to Section 230, every court that has considered this legal question has rejected it, including recent decisions by the U.S. Courts of Appeals for the Ninth and D.C. Circuits. And for good reason: treating social media companies like “public forums” gives users less ability to respond to misuse, not more.

Instead, those courts have correctly adopted the rule on editorial freedom from the Supreme Court’s 1974 decision in Miami Herald Publishing Co. v. Tornillo. In that case, the court rejected strikingly similar arguments—that the newspapers of the day were misusing their editorial authority to favor one side over the other in public debates and that government intervention was necessary to “insure fairness and accuracy and to provide for some accountability." Sound familiar?

The Supreme Court didn’t go for it: the “treatment of public issues and public officials—whether fair or unfair—constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time.”

The current Supreme Court agrees. Just last term, in Manhattan Community Access Corp. v. Halleck, the Supreme Court affirmed that the act of serving as a platform for the speech of others did not eliminate that platform’s own First Amendment right to editorial freedom.

But the EO doesn’t just get the law wrong—it wants the FTC to punish platforms that don’t adhere to the erroneous position that online platforms are “public forums” legally barred from editorial freedom. Section 4(d) commands the FTC to consider whether the dominant platforms are inherently engaging in unfair practices by not operating as public forums as set forth in Section 4(a). This means that a platform could be completely honest, transparent, and open about its content moderation practices but still face penalties because it did not act like a public forum. So, platforms have a choice—take their guidance from the Supreme Court or from the Trump administration.

Additionally, Section 4(b) refers to the White House’s Tech Bias Reporting Tool launched last year to collect reports of political bias. The EO states that 16,000 reports were received and they will be forwarded to the FTC. We filed a Freedom of Information Act (FOIA) request with the White House’s Office of Science and Technology Policy for those complaints last year and were told that that office had no records.

Section 5 commands the Attorney General to convene a group to look at existing state laws and propose model state legislation to address unfair and deceptive practices by online platforms. This group will be empowered to collect publicly available information about: how platforms track user interactions with other users; the use of “algorithms to suppress political alignment or viewpoint”; differential policies when applied to the Chinese government; reliance on third-party entities with “indicia of bias,” and viewpoint discrimination with respect to user monetization. To the extent that this means that decisions will be made based on actual data rather than anecdote and supposition, that is a good thing. But given this pretty one-sided list, there does seem to be a predetermined political decision the EO wants to reach, and the resulting proposals that come out of this may create yet another set of problems.

All of this exacerbates a growing environment of legal confusion for technology and its users that bodes ill for online expression. Keep in mind that “entities covered by section 230” describes a huge population of online services that facilitate online user communication, from Wikimedia to the Internet Archive to the comments section of local newspapers. However you feel about Big Tech, rest assured that the EO’s effects will not be confined to the small group of companies that can afford to navigate these choppy waters.

Trump’s Executive Order Threatens to Leverage Government’s Advertising Dollars to Pressure Online Platforms

Mon, 06/01/2020 - 18:32

This is one of a series of blog posts about President Trump's May 28 Executive Order. Other posts are here, here, and here.

The inaptly named  Executive Order on Preventing Online Censorship (EO) seeks to insert the federal government into private Internet speech in several ways. Section 3 of the EO threatens to leverage the federal government’s significant online advertising spending to coerce platforms to conform to the government’s desired editorial position.

This raises significant First Amendment concerns.

The EO provides:

Sec. 3.  Protecting Federal Taxpayer Dollars from Financing Online Platforms That Restrict Free Speech.  (a)  The head of each executive department and agency (agency) shall review its agency’s Federal spending on advertising and marketing paid to online platforms.  Such review shall include the amount of money spent, the online platforms that receive Federal dollars, and the statutory authorities available to restrict their receipt of advertising dollars.

(b)  Within 30 days of the date of this order, the head of each agency shall report its findings to the Director of the Office of Management and Budget.

(c)  The Department of Justice shall review the viewpoint-based speech restrictions imposed by each online platform identified in the report described in subsection (b) of this section and assess whether any online platforms are problematic vehicles for government speech due to viewpoint discrimination, deception to consumers, or other bad practices.

The First Amendment is implicated by this provision because it is, at its essence, the government punishing a speaker for expressing a political viewpoint. The Supreme Court has recognized that "[t]he expression of an editorial opinion . . . lies at the heart of First Amendment protection." The First Amendment thus generally protects speakers against enforced neutrality.

Although the government may have broad leeway to decide where it wants to run its advertisements, here it seems that the government would otherwise place advertisements on these platforms but for the sole fact that it dislikes the political viewpoint reflected by the platform's editorial and curatorial decisions. This is true regardless of whether the platform actually has an editorial viewpoint or if the government simply perceives a viewpoint it finds inappropriate.

This decision is especially suspect when the platform’s speech is unrelated to the advertisement or the government program or policy being advertised. It might present a different situation if the message in the government’s advertisement would be undermined by the platform’s editorial decisions, or if, by advertising, the government would be perceived as adopting the platform’s viewpoint. But neither of those is contemplated by the EO.

The EO thus seems purely retaliatory, and designed solely to coerce the platforms to meet the government’s conception of acceptable “neutrality”—a severe penalty for having a political viewpoint. The goal of federal government advertising is to reach the broadest audience possible: think of the Consumer Product Safety Commission’s Quinn the Quarantine Fox ads, or the National Park Service’s promotions about its units. This advertising is not a reward for the platform for its perceived neutrality. It’s a service to Americans who need vital information.

In other contexts, the Supreme Court has made clear that the government’s spending decisions can generally not be “the product of invidious viewpoint discrimination.” The court has applied this rule to strike down a property tax exemption that was available only to those who took loyalty oaths, explaining that “the deterrent effect is the same as if the State were to fine them for this speech.” And the court also applied it when a county canceled a contract with a trash hauler who was a fervent critic of the county’s government. Even when the court rejected a First Amendment challenge to a requirement that the National Endowment for the Arts consider “general standards of decency and respect for the diverse beliefs and values of the American public” as one of many factors in awarding arts grants, it emphasized that the criterion did not give the government authority to “leverage its power to award subsidies on the basis of subjective criteria into a penalty on disfavored viewpoints,” and funding decisions should not be “calculated to drive certain ideas or viewpoints from the marketplace.”

By denying ad dollars that it would otherwise spend solely because it disagrees with a platform’s editorial views, or dislikes that it has editorial views, the government violates these fundamental principles. And this in turn harms the public, which may need or want information contained in government advertisements.

Sex Worker Rights Advocates Raise the Alarms about EARN IT

Mon, 06/01/2020 - 16:21

June 2nd is recognized around the world as the chosen date of countless direct actions and protests in support of the sex workers' rights movement. Since its inception nearly 45 years ago, International Whores Day reclaims a sometimes derogatory word to set the tone for a day of unrest and political action. June also marks International LGBTQ+ Pride month, and this is the first in a series of blog posts that aims to highlight different facets within the broader LGBTQ+ community.

In 2018, many International Whores Day actions focused on raising awareness of SESTA/FOSTA, a bad bill that turned into a worse bill and was then rushed through votes in both houses of Congress. Sex work advocacy organizations warned how dangerous that bill would be: it would undermine 47 U.S.C. § 230, originally enacted as part of the Communications Decency Act, and thus silence online speech by forcing Internet platforms to censor their users. It ultimately passed, and the grim predictions those advocacy organizations laid out unfortunately proved right.

This year, many of the same communities are sounding a similar alarm about another proposed bill aimed at weakening Section 230: EARN IT, which we've previously written about.

What Sex Worker Rights Activists Are Saying About EARN IT:

The Sex Workers Outreach Project (SWOP) is a national network of social justice organizations that dedicate their efforts to advocating for the human rights of sex workers. The LA chapter has been raising awareness in their community about how EARN IT could threaten access to encrypted communications, a tool that many in the sex industry rely on for harm reduction. This warning was also taken up by popular secure messaging app Signal, which raised concern that if EARN IT were to pass, it could bring the end of their software in American markets.

Hacking//Hustling is a collective of sex work activists and data analysts that originally formed in response to SESTA/FOSTA. They've hosted community teach-ins, digital harm reduction workshops, and direct action protests to raise awareness about this threatening legislation. Founder of Hacking//Hustling, Danielle Blunt says “Denying access to these technologies should be understood as a form of structural violence.”

Decriminalize Sex Work, an organization whose tagline is "End Human Trafficking, Promote Health and Safety," warns that this bill would once again make it easier to facilitate the arrest of sex workers. They argue that, if passed, it would further endanger already marginalized communities without any meaningful effect toward ending human trafficking.

Reaching a broader audience

Section 230 is at the crux of protecting freedom of expression online, so we keep a close eye on it at EFF. This year has made it painfully clear that many more people are relying on their ability to safely exist online. Upholding Section 230 protections will continue to give marginalized communities the resources they need to practice communal self care and promote harm reduction.

Internet Users of All Kinds Should Be Concerned by a New Copyright Office Report

Mon, 06/01/2020 - 16:19

Outside of the beltway, people all over the United States are taking to the streets to demand fundamental change. In the halls of Congress and the White House, however, many people seem to think the biggest thing that needs to be restructured is the Internet. Last week, the president issued an order taking on one legal foundation for online expression: Section 230. This week, the Senate is focusing on another: Section 512 of the Digital Millennium Copyright Act (DMCA).

The stage for this week’s hearing was set by a massive report from the Copyright Office that’s been five years in the making. We read it, so you don’t have to.

Since the DMCA passed in 1998, the Internet has grown into something vital that we all use. We are the biggest constituency of the Internet—not Big Tech or major media companies—and when we go online we depend on an Internet that depends on Section 512.

Section 512 of the DMCA is one of the most important provisions of U.S. Internet law. Congress designed the DMCA to give rightsholders, service providers and users relatively precise “rules of the road” for policing online copyright infringement. The center of that scheme is the “notice and takedown” process. In exchange for substantial protection from liability for the actions of their users, service providers must promptly take down any content on their platforms that has been identified as infringing, and take several other prescribed steps. Copyright owners, for their part, are given a fast, extra-judicial procedure for obtaining redress against alleged infringement, paired with explicit statutory guidance regarding the process for doing so, and provisions designed to deter and remedy abuses of that process.
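The notice-and-takedown scheme described above can be sketched as a simple state machine (a schematic reading of Sections 512(c) and 512(g), with names of our own invention; this is an illustration, not legal advice):

```python
from dataclasses import dataclass
from enum import Enum, auto

class Status(Enum):
    LIVE = auto()
    TAKEN_DOWN = auto()
    RESTORED = auto()

@dataclass
class Posting:
    status: Status = Status.LIVE

def on_takedown_notice(post: Posting) -> None:
    # 512(c): the provider removes the identified material promptly
    # in order to keep its safe harbor from infringement liability.
    post.status = Status.TAKEN_DOWN

def on_counter_notice(post: Posting, claimant_filed_suit: bool) -> None:
    # 512(g): after a user's counter-notice, the provider restores the
    # material within 10-14 business days unless the claimant goes to court.
    if not claimant_filed_suit:
        post.status = Status.RESTORED

post = Posting()
on_takedown_notice(post)                          # rightsholder sends a notice
on_counter_notice(post, claimant_filed_suit=False)  # user pushes back
assert post.status is Status.RESTORED
```

The point of the sketch is the trade at the heart of the statute: removal happens extra-judicially and fast, while restoration requires the user to act and still waits on a statutory clock.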

Without Section 512, the risk of crippling liability for the acts of users would have prevented the emergence of most social media outlets and online forums we use today. With the protection of that section, the Internet has become the most revolutionary platform for the creation and dissemination of speech that the world has ever known. Thousands of companies and organizations, big and small, rely on it every day. Interactive platforms like video hosting services and social networking sites that are vital to democratic participation, and also to the ability of ordinary users to forge communities, access information, and discuss issues of public and private concern, rely on Section 512 every day.

But large copyright holders, led by major media and entertainment companies, have complained for years that Section 512 doesn’t put enough of a burden on service providers to actively police online infringement. Bowing to their pressure, in December of 2015, Congress asked the Copyright Office to report on how Section 512 is working. Five years later, we have its answer—and overall it’s pretty disappointing.

Just Because One Party Is Unhappy Doesn’t Mean the Law Is Broken

The Office believes that because rightsholders are dissatisfied with the DMCA, the law’s objectives aren’t being met. There are at least two problems with this theory. First, major rightsholders are never satisfied with the state of copyright law (or how the Internet works today in general)—they constantly seek broader restrictions, higher penalties, and more control over users of creative work. Their displeasure with Section 512 may in fact be a sign that the balance is working just fine.

Second, Congress’s goal was to ensure that the Internet would be an engine for innovation and expression, not to ensure perfect infringement policing. By that measure, Section 512, though far from perfect, is doing reasonably well when we consider the ease with which we can distribute knowledge and culture.

Misreading the Balance, Discounting Abuse

Part of the problem may be that the Office fundamentally misconstrues the bargain that Congress struck when it passed the DMCA. The report repeatedly refers to Section 512 as a balance between rightsholders and service providers. But Section 512 is supposed to benefit a third group: the public.

We know this because Congress built in protections for free speech, knowing that the DMCA could be abused. Congress knew that Section 512’s quick and easy takedown process could result in lawful material being censored from the Internet, without any court supervision, much less advance notice to the person who posted the material, or any opportunity to contest the removal. To inhibit abuse, Congress made sure that the DMCA included a series of checks and balances. First, it created a counter-notice process that allows for putting content back online after a two-week waiting period. Second, Congress set out clear rules for asserting infringement under the DMCA. Third, it gave users the ability to hold rightsholders accountable if they send a DMCA notice in bad faith.

With these provisions, Section 512 creates a carefully crafted system. When properly deployed, it gives service providers protection from liability, copyright owners tools to police infringement, and users the ability to challenge the improper use of those tools.

The Copyright Office’s report speaks of the views of online service providers and rightsholders, while paying only lip service to the millions of Internet users who don’t identify with either group. That may be what led the Office to give short shrift to the problem of DMCA abuse, complaining that there wasn’t enough empirical evidence. In fact, a great deal of evidence was submitted into the record, including a detailed study by Jennifer Urban, Joe Karaganis, and Brianna Schofield. Coming on the heels of a lengthy Wall Street Journal report describing how people use fake DMCA claims to get Google to take news reports offline, the Office’s dismissive treatment of DMCA abuse is profoundly disappointing.

Second-Guessing the Courts

An overall theme of the report is that courts all over the country have been misinterpreting the DMCA ever since its passage in 1998.

One of the DMCA’s four safe harbors covers “storage at the direction of a user.” The report suggests that appellate courts “expanded” the DMCA when they concluded, one court after another, that services such as transcoding, playback, and automatic identification of related videos qualify as part of that storage because they are so closely related to it. The report questions another appellate court ruling that peer-to-peer services qualify for protection.

And the report is even more critical of court rulings regarding when a service provider is on notice of infringement, triggering a duty to police that infringement. The report challenges one appellate ruling which requires awareness of facts and circumstances from which a reasonable person would know a specific infringement had occurred. Echoing an argument frequently raised by rightsholders and rejected by courts, the report contends that general knowledge that infringement is happening on a platform should be enough to mandate more active intervention.

What about the subsection of the DMCA that says plainly that service providers do not have a duty to monitor for infringement? The Office concludes that this provision is merely intended to protect user privacy.

The Office also suggests the Ninth Circuit’s decision in Lenz v. Universal Music was mistaken. In that case, the appeals court ruled that entities who send takedown notices must consider whether the use they are targeting is a lawful fair use, because failure to do so would necessarily mean they could not have formed a good faith belief that the material was infringing, as the DMCA requires. The Office worries that, if the Ninth Circuit is correct, rightsholders might be held liable for not doing the work even if the material is actually infringing.

This is nonsensical—in real life, no one would sue under Section 512(f) to defend unlawful material, even if the provision had real teeth, because doing so would risk being slapped with massive and unpredictable statutory damages for infringement. And the Office’s worry is overblown. It is not too much to ask a person wielding a censorship tool as powerful as Section 512, which lets a person take others’ speech offline based on nothing more than an allegation, to take the time to figure out if they are wielding that tool appropriately. Given that one Ninth Circuit judge concluded that the Lenz decision actually “eviscerates § 512(f) and leaves it toothless against frivolous takedown notices,” it is hard to take rightsholders’ complaints seriously—but the Office did.

In short, the Office has taken it upon itself to second-guess the many judges actually tasked with interpreting the law because it does not like their conclusions. Rather than describe the state of the law today and advise Congress as an information resource, it argues for what the law should be from the viewpoint of a discrete special interest. Advocacy for changing the law belongs to the public and their elected officials. It is not the Copyright Office’s job, and it sharply undermines any claim the Report might make to a neutral approach.

Mere Allegations Can Mean Losing Internet Access for Everyone on the Account

In order to take advantage of the safe harbor included in Section 512 of the DMCA, companies have to have a “repeat infringer” policy. It’s fairly flexible, since different companies have different uses, but the basic idea is that a company must terminate the account of a user who has repeatedly infringed. Perhaps the most famous iteration of this requirement is YouTube’s “Three Strikes” policy: if you get three copyright strikes in 90 days on YouTube, your whole account is deleted, all your videos are removed, and you can’t create new channels.

Fear of getting to three strikes has not only made YouTubers very cautious, it has created a landscape where extortion can flourish. One such troll would make bogus copyright claims, and then send messages to users demanding money in exchange for withdrawing the claims. When one user responded with a counter-notification—which is what they are supposed to do to get bogus claims dismissed—the troll allegedly “swatted” the user with the information in the counter-notice.

And that’s just the landscape for YouTube. The Copyright Office’s report suggests that the real problem with repeat infringer policies is that courts aren’t requiring service providers to create and enforce stricter ones, kicking more people off the Internet.

The Office does suggest that a different approach might be needed for students and universities, because students need the Internet for “academic work, career searching and networking, and personal purposes, such as watching television and listening to music,” and students living in campus housing would have no other choice for Internet access if they were kicked off the school’s network.

But all of us, not just students, use the Internet for work, career building, education, communication, and personal purposes. And few of us could go to another provider if an allegation of infringement kicked us off the ISP we have. Most Americans have only one or two high-speed broadband providers, with a majority of us stuck with a cable monopoly for high-speed access.

The Internet is vital to people’s everyday lives. To lose access entirely because of an unproven accusation of copyright infringement would be, as the Copyright Office briefly acknowledges, “excessively punitive.”

The Copyright Office to the Rescue?

Having identified a host of problems, the Office concludes by offering to help fix some of them. Its offer to provide educational materials seems appropriate enough, though given the skewed nature of the Report itself, we worry that those materials will be far from neutral.

Far more worrisome, however, is the offer to help manufacture an industry consensus on standard technical measures (STMs) to police copyright infringement. According to Section 512, service providers must accommodate STMs in order to receive the safe harbor protections. To qualify as an STM, a measure must (1) have been developed pursuant to a broad consensus in an “open, fair, voluntary, multi-industry standards process”; (2) be available on reasonable and nondiscriminatory terms; and (3) cannot impose substantial costs on service providers. Nothing has ever met all three requirements, not least because no “open, fair, voluntary, multi-industry standards process” exists.

The Office would apparently like to change that, and has even asked Congress for regulatory authority to help make it happen. Trouble is, any such process is far too likely to result in the adoption of filtering mandates. And filtering has many, many issues, such that the Office itself says filtering mandates should not be adopted, at least not now.

The Good News

Which brings us to the good news. The Copyright Office stopped short of recommending that Congress require all online services to filter for infringing content—a dangerous and drastic step they describe with the bland-sounding term “notice and staydown”—or require a system of website blocking. The Office wisely noted that these proposals could have a truly awful impact on freedom of speech. It also noted that filtering mandates could raise barriers to competition for new online services, and entrench today’s tech giants in their outsized control over online speech—an outcome that harms both creators and users. And the Office also recognized the limits of its expertise, noting that filtering and site-blocking mandates would require “an extensive evaluation of . . . the non-copyright implications of these proposals, such as economic, antitrust, [and] speech. . . .”

The Can of Worms Is Open

Looking ahead, the most dangerous thing about the Report may be that some Senators are treating its recommendations for “clarification” as an invitation to rewrite Section 512, inviting the exact legal uncertainty the law was intended to eliminate. Senators Thom Tillis and Patrick Leahy have asked the Office to provide detailed recommendations for how to rewrite the statute – including asking what it would do if it were starting from scratch.

Based on the report, we suspect the answer won’t include strong protections for user rights.

Tech Learning Collective: A Grassroots Technology School Case Study

Mon, 06/01/2020 - 15:57

Grassroots education is important for making sure advanced technical knowledge is accessible to communities who may otherwise be blocked or pushed out of the field. By sharing invaluable knowledge and skills, local groups can dissolve these barriers for organizers hoping to step up their cybersecurity.

The Electronic Frontier Alliance (EFA) is a network of community-based groups across the U.S. dedicated to advocacy and community education at the intersection of the EFA’s five guiding principles: privacy, free expression, access to knowledge, creativity, and security. Tech Learning Collective, a radical queer and femme-operated group headquartered in New York City, sets itself apart as an apprenticeship-based technology school that integrates its workshops into a curriculum for radical organizers. Their classes range from fundamental computer literacy to hacking techniques and aim to serve students from historically marginalized groups.

We corresponded with the collective over email to discuss the history and strategy of the group's ambitious work, as well as how the group has continued to engage their community amid the COVID-19 health crisis. Here are excerpts from our conversation:

What inspired you all to start the Tech Learning Collective? How has the group changed over time?

In 2016, a group of anarchist and autonomist radicals met in Brooklyn, NY, to seek out methods of mutual self-education around technology. Many of us did not have backgrounds in computer technology. What we did have was a background in justice movement organizing at one point or another, whether at the WTO protests before the turn of the century, supporting whistleblowers such as Chelsea Manning, participating in Occupy Wall Street, or in various other campaigns.

This first version of Tech Learning Collective met regularly for about a year as a semi-private mutual-education project. It succeeded in sowing the seeds of what would later become several additional justice-oriented technology groups. None of the members were formally trained or have ever held computer science degrees. Many of the traditional techniques and environments offering technology education felt alienating to us.

So, after a (surprisingly short!) period of mutual self-education, we began offering free workshops and classes on computer technologies specifically for Left-leaning politically engaged individuals and groups. Our goal was to advocate for more effective use of these technologies in our movement organizing.

We quickly learned that courses needed to cater to people with skill levels ranging from self-identified “beginners” to very experienced technologists, and that our efforts needed to be self-sustaining. Partly, this was because many of our comrades had sworn off technical self-sufficiency as a legitimate avenue for liberation in a misguided but understandable reaction to the poisonous prevalence of machismo, knowledge grandstanding, and blatant sociopathy they saw exhibited by the overwhelming majority of “techies.” It was obvious that our trainers needed to exemplify a totally new culture to show them that cyber power, not just computer literacy, was a capability worth investing their time in for the sake of the movement.

Tech Learning Collective’s singular overarching goal is to provide its students with the knowledge and abilities to liberate their communities from corporate and government overseers, especially as it relates to owning and operating their own information and communications infrastructures, which we view as a necessary prerequisite for meaningful revolutionary actions. Using these skills, our students assist in the organization of activist work like abortion access and reproductive rights, anti-surveillance organizing, and other efforts that help build collective power beyond mere voter representation.

Who is your target audience?

Anyone who is serious about gaining the skills, knowledge, and power they need to materially improve the lives of their community, neighbors, and friends and who also shares our pro-social values is welcome at our workshops and events.

Importantly, this means that self-described “beginners” are just as welcome at our events as very experienced technologists, and we begin both our materials and our methodology at the actual beginning of computer foundations...

 We know what it's like to wade into the world of digital security as a novice because we've all done it at one point or another. We felt confounded or overwhelmed by the vast amount of information suddenly thrown at us. Worse, much of this information purported to be “for beginners”, making us feel even worse about our apparent inability to understand it. “Are we just stupid?”, we often asked ourselves.

You are not stupid. [...]  We insist that you can understand this stuff.

The TLC is incredibly active, with an impressive 15 events planned for June. How does your group share this workload and avoid burnout among collective members?

There are three primary techniques we use to do this. These will be familiar to anyone who has ever worked in an office or held a position in management. They are automation, separation of concerns, and partnerships. After all, just because we are anti-capitalist does not mean we ignore the obviously effective tools and techniques we have at our disposal for realizing our goals.

The first pillar, automation, is really what we are all about. It's what almost all of our classes teach in one form or another. In a Tech Learning Collective class, you will often hear the phrase, “If you ever do one thing on a computer twice, you've made a mistake the second time.” This is a reminder that computers were built for automation. That's what they're for. So, almost every component of Tech Learning Collective's day-to-day operations is automated. [...]  The only time a human needs to be involved is when another human wants to talk to us. Otherwise, the emails you're getting from us were written many months ago and are being generated by scripts and templates.

Without that we would need to at least double if not triple or quadruple the number of people who could devote many hours to managing the logistics of making sure events happen. But that's boring, tedious, repetitive work, and that's what computers are for.
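As a purely illustrative sketch of the kind of automation described here (the file contents, event data, and template text below are hypothetical examples, not TLC’s actual tooling), generating event-announcement emails from a template written months in advance might look like this:

```python
# Hypothetical automation sketch: fill an announcement template once per
# event, so a scheduled job can send each message without a human
# writing it by hand.

EMAIL_TEMPLATE = """\
Subject: Upcoming workshop: {title}

Hi all,

Our next workshop, "{title}", takes place on {date}.
Register here: {url}

- Tech Learning Collective (automated mailer)
"""

# In practice this list might come from a CSV file or a calendar feed.
events = [
    {"title": "Shell Scripting Basics", "date": "2020-06-10",
     "url": "https://example.org/events/shell"},
    {"title": "Auditing Firewalls", "date": "2020-06-17",
     "url": "https://example.org/events/firewalls"},
]

def render_announcements(event_list):
    """Render the template once per event; a cron job could mail each one."""
    return [EMAIL_TEMPLATE.format(**event) for event in event_list]

for message in render_announcements(events):
    print(message)
```

A scheduler (cron, for instance) pointed at a script like this is enough to keep routine announcements flowing with no human in the loop, which is the point being made above: the repetitive, tedious work is exactly what computers are for.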

Secondly, separation of concerns: this is both a management and a security technique. In InfoSec, we call this the compartmentalization principle. You might be familiar with it as “need to know,” and it states that only the people who need to be concerned with a certain thing should have to spend any brainpower on it in the first place, or indeed have any access to it at all. This means that when one of our teachers wants to host a workshop, they don't need to involve anyone else in the collective. They are autonomous, free to act however they wish within the limits of their role. This makes it possible for our collective members to dip in and out whenever they need to, thus avoiding burnout while increasing quality. If one of us has to step away for a while, the collective can still function smoothly.

Finally, partnerships allow us to do things we could not do on our own. This also helps distribute the overarching workload, like creating practice labs or writing educational materials for new workshops. We work extremely closely with a number of other groups [since] our core collective members straddle several other activist and educational collectives.

At the time of writing, we are in the middle of the COVID-19 health crisis. Many groups are struggling with shelter-in-place, but fortunately TLC seems to have adapted very well. What are some strategies you are employing to continue your work?

This is almost an unfair question, because the nature of what we do at Tech Learning Collective lends itself well to the current crises.

The biggest change that the COVID-19 pandemic has forced us to adapt to is the shuttering of our usual venues for in-person workshops. Fortunately, we were already ramping up our online and distance learning options even before the pandemic. So we simply put that into high gear. The easily automatable nature of handling logistics for online events also made it possible to do many more of them, which is one reason you're seeing so much more activity from us these days.

In certain ways, for many in our collective, this "new normal" is actually a rather dated 90's-era cyberpunk dystopia that we've been experiencing for many, many years. In that sense, we're happy that you don't have to enter this reality alone and defenseless. We kinda’ built Tech Learning Collective for exactly this scenario. We want to help you thrive here.

Finally, what does the future look like for TLC?

We're not sure! When we started TLC, we never thought it would end up becoming an online, international, radical political hacker school. In just the last two months since we've been forced to become a wholly virtual organization, we've held classes with students from Japan, Italy, New Zealand, the UK, Mexico, and beyond, as well as many parts of the United States of course. Many of them are now repeat participants working their way through our entire curriculum, which is the best compliment we could have asked for. We hope they'll stick around to join our growing alumni community after that. We're also (slowly) expanding our “staff” outside of New York City, which isn't something we thought would happen for many years, if at all.

But right now, we're primarily focused on moving the rest of our in-person curriculum online and creating new online workshops. Many of the workshops unveiled this month or planned for next month are new, like our workshops on writing shell scripts, exploiting Web applications, auditing firewalls and other network perimeter defenses, and an exciting "spellwork" workshop to learn about the "spirits" that live on in the magical place inside every computer called the Command Line. So in the near future, expect to see more workshops like these, as well as more of our self-paced “Foundations” learning modules that you can try out anytime for free right in your Web browser from our Web site.

After that? Well, some say another world is possible. We're hackers. Hacking is about showing people what's possible, especially if they insist it could never happen.

Our thanks to Tech Learning Collective for their continued efforts to bring an empowering technology education to marginalized peoples in New York City and, increasingly, around the world. You can find and support other Electronic Frontier Alliance affiliated groups near you by visiting the Electronic Frontier Alliance website.

If you are interested in holding workshops for your community, you can find freely available workshop materials at EFF’s Security Education Companion and security guides from our Surveillance Self-Defense project. Of course, you can also connect to similar groups by joining the Electronic Frontier Alliance.

Trump’s Executive Order Seeks To Have FCC Regulate Platforms. Here’s Why It Won’t Happen

Mon, 06/01/2020 - 15:12

This is one of a series of blog posts about President Trump's May 28 Executive Order. Other posts are here, here, and here.

The inaptly named Executive Order on Preventing Online Censorship seeks to insert the federal government into private Internet speech in several ways. Through Section 2 of the Executive Order (EO), the president has attempted to demand the start of a new administrative rulemaking. Despite the ham-fisted language, such a process can’t come into being, no matter how much someone might wish it.

The EO attempts to enlist the Secretary of Commerce and the Attorney General to draft a rulemaking petition to the Federal Communications Commission (FCC) that asks that independent agency to interpret 47 U.S.C. § 230 (“Section 230”), a law that underlies much of the architecture for the modern Internet.

Quite simply, this isn’t allowed.

Specifically, the petition will ask the FCC to examine:

“(i) the interaction between subparagraphs (c)(1) and (c)(2) of section 230, in particular to clarify and determine the circumstances under which a provider of an interactive computer service that restricts access to content in a manner not specifically protected by subparagraph (c)(2)(A) may also not be able to claim protection under subparagraph (c)(1), which merely states that a provider shall not be treated as a publisher or speaker for making third-party content available and does not address the provider’s responsibility for its own editorial decisions;

“(ii)  the conditions under which an action restricting access to or availability of material is not “taken in good faith” within the meaning of subparagraph (c)(2)(A) of section 230, particularly whether actions can be “taken in good faith” if they are:

“(A)  deceptive, pretextual, or inconsistent with a provider’s terms of service; or

“(B)  taken after failing to provide adequate notice, reasoned explanation, or a meaningful opportunity to be heard; and

“(iii)  any other proposed regulations that the NTIA concludes may be appropriate to advance the policy described in subsection (a) of this section.”

There are several significant legal obstacles to this happening.

First, the Federal Communications Commission (FCC) has no regulatory authority over the platforms the President wishes the agency to regulate. The FCC is a telecommunications and spectrum regulator: only the communications infrastructure industry (companies such as AT&T, Comcast, and Frontier, along with users of the airwaves) is subject to the agency’s regulatory authority. This is the position of both the current, Trump-appointed FCC Chair and the courts that have considered the question.

In fact, this is why the issue of net neutrality is legally premised on whether or not broadband companies are telecommunications carriers. While that question, whether broadband providers are telecommunications carriers under the law, is one where we disagree with current FCC leadership, neither this FCC nor any previous one has taken the position that social media companies are telecommunications carriers. So to implement regulations targeting social media companies, the FCC would have to explain how—under what legal authority—it is allowed to issue regulations aimed at social media companies. We don’t see it doing so.  

But say the FCC ignores this likely fatal flaw and proceeds anyway. The EO triggers a long and slow process that is unlikely to be completed this year, much less result in an enforcement action. That process will involve a Notice of Proposed Rulemaking (NPRM), with the FCC issuing a statement explaining its rationale for regulating these companies, the authority it has to regulate them, and the possible regulations it intends to produce. The commission must then solicit public comment in response to its statement.

The process also involves public comment periods and agreement by a majority of FCC Commissioners on the regulations they want to issue. Absent a majority, nothing can be issued and the proposed regulations effectively die from inaction. If a majority of FCC Commissioners do agree and move forward, a lawsuit will inevitably follow to test the legal merits of the FCC’s decision, both on whether the government followed the proper procedures in issuing the regulation and whether it has the legal authority to issue rules in the first place.

Needless to say, the EO has initiated a long and uncertain process, and certainly one that will not be completed before the November election, if ever.

California Cops Can No Longer Pass the Cost of Digital Redaction onto Public Records Requesters

Mon, 06/01/2020 - 13:17

At a dark time when the possibility of police accountability seems especially bleak, there is a new glimmer of light courtesy of the California Supreme Court. Under a new ruling, government agencies cannot pass the cost of redacting police body-camera footage and other digital public records onto the members of the public who requested them under the California Public Records Act (CPRA).

The case, National Lawyers Guild v. City of Hayward, was brought by civil rights groups against the City of Hayward after they filed requests for police body-camera footage related to protests on UC Berkeley’s campus following the deaths of Eric Garner and Michael Brown. Hayward Police agreed to release the footage, but not before assessing nearly $3,000 in redaction and editing costs that the city claimed NLG had to pay before it would release the video.

The California Supreme Court sided with NLG, as well as the long list of transparency advocates and news organizations that filed briefs in the case. The court ruled that:

“Just as agencies cannot recover the costs of searching through a filing cabinet for paper records, they cannot recover comparable costs for electronic records. Nor, for similar reasons, does ‘extraction’ cover the cost of redacting exempt data from otherwise producible electronic records.”

The court further acknowledged that such charges “could well prove prohibitively expensive for some requesters, barring them from accessing records altogether.”

This is an unqualified victory for government transparency. So what does this mean in practical terms for public records requesters? As people march against police violence across the Golden State, many members of the press and non-profits will likely use the CPRA to obtain evidence of police breaking the law or otherwise violating people’s civil rights.

These videos can prove to be invaluable records of police activity and misconduct, though they can also capture individuals suffering medical emergencies, violence, and other moments of distress. The CPRA attempts to balance these and other interests by allowing public agencies to redact personally identifying details and other information while still requiring that the videos be made public.

So when making a request for body-camera footage, the first thing requesters should know is that sometimes the individuals handling public records requests are not keeping up with legal decisions, particularly one issued last week. To preempt these misinterpretations of the law, requesters could consider including a line in their letters that says something like:

“Pursuant to NLG vs. Hayward, S252445 (May 28, 2020), government agencies may not charge requesters for the cost of redacting or editing body-worn camera footage.”

More broadly, the decision’s reasoning doesn’t just apply to body-camera footage, but to all digital records. The court recognized that, because the CPRA already prohibits agencies from charging requesters for redacting non-digital records, the same prohibition applies to digital records.

So, in requests for electronic information, such as emails or datasets, you could include the line:

“Pursuant to NLG vs. Hayward, S252445 (May 28, 2020), government agencies may not charge requesters for the cost of redacting digital records.”

Additionally, people filing CPRA requests for digital records should know that the law does permit agencies to charge for the costs of duplicating records, though in the case of digital records that cost should be no more than the price of the media the copy is written to; in NLG’s case, it was $1 for a USB memory stick.

The CPRA also permits agencies, in certain narrow circumstances, to charge for its staff’s time spent programming or extracting data to respond to a public records request. The good news is that the California Supreme Court’s decision last week significantly narrowed the circumstances under which an agency can claim these costs and pass them along to requesters.

According to the court, data “extraction” under the CPRA “refers to a particular technical process—a process of retrieving data from government data stores—when this process is” required to produce a record that can be released. The court said the provision would permit charges when, for example, a request for demographic data of state employees requires an agency to pull that data from a larger human resources database. But “extraction” does not cover the time spent searching for responsive records, such as when an official has to search through email correspondence or a physical file cabinet.

Requesters should thus be prepared to push back on any agency claims that seek to assess charges for merely searching for responsive records. And requesters should also be on the lookout for exorbitant charges associated with data “extraction” even when the CPRA permits it, as extraction in practice can often amount to little more than a database query or formula.

Don’t Mix Policing with COVID-19 Contact Tracing

Mon, 06/01/2020 - 13:14

Over the weekend, Minnesota’s Public Safety Commissioner analogized COVID-19 contact tracing with police investigation of arrested protesters. This analogy is misleading and dangerous. It also underlines the need for public health officials to practice strict data minimization—including a ban on sharing with police any personal information collected through contact tracing.

On May 30, at a press conference about the ongoing protests in Minneapolis against racism and police brutality, Commissioner John Harrington stated:

As we’ve begun making arrests, we have begun analyzing the data of who we have arrested, and begun, actually, doing what you would think as almost pretty similar to our COVID. It’s contact tracing. Who are they associated with? What platforms are they advocating for?

We strongly disagree. Contact tracing is a public health technique used to protect us from a deadly pandemic. In its traditional manual form (not to be confused with automated contact tracing apps), contact tracing involves interviewing people who have been infected to ascertain who they have been in contact with, in order to identify other infected people before they infect still more people.

On the other hand, interrogating arrested protesters about their beliefs and associations is a longstanding police practice. So is social media surveillance by police of dissident movements. These practices must be carefully restricted, lest they undermine our First Amendment rights to associate, assemble, and protest, and our Fourth Amendment rights to be free from unreasonable searches and seizures. We have similar concerns about a notorious practice that the NSA calls “contact chaining”: automated analysis of communications metadata in order to identify connections between people.

Any blurring of police work with contact tracing can undermine public health. In prior outbreaks, people who trusted public health authorities were more likely to comply with containment efforts. On the other hand, a punitive approach to containment can break that trust. For example, people may avoid testing if they fear the consequences of a test result.

Thus, we must ensure strict data minimization in COVID-19 contact tracing. At a minimum, this means that police must have no access to any personal information collected by public health officials in the course of interviewing COVID-19 patients about their movements, activities, and associations. People are less likely to cooperate with contact tracing if they fear the consequences. For this reason, EFF also opposes police access to the home addresses of COVID-19 patients.

Of course, there is much more to data minimization:

  • Public health officials conducting COVID-19 contact tracing must collect as little personal information as possible for containment purposes. For example, they don’t need to know about a patient’s movements months earlier, because COVID-19 patients are only infectious for 14 days.
  • Public health officials must delete the personal information they collect as soon as it is no longer helpful to contact tracing. This may be a very short retention period, given the very short infectiousness period.
  • Public health officials must not disclose this information to other entities, especially if those entities are likely to use the information for anything other than contact tracing. For example, they must not disclose this information to police departments, immigration enforcement agencies, or intelligence services.
  • When corporations assist with contact tracing, they must abide by data minimization rules, too. For example, they must not be allowed to use the information for targeted advertising, or to monetize it in any other manner.

We need new laws to guarantee such data minimization, not just for contact tracing, but for all COVID-19 responses that gather personal information.

Finally, we must not allow police to “COVID wash” controversial police practices. Manual contact tracing is a public health measure that many people view as necessary and proportionate to the ongoing public health crisis. On the other hand, when police investigate the political beliefs and associations of protesters, whether by interrogation or social media snooping, abuse often follows. The misplaced analogy between these two very different practices can unduly blunt justified criticisms of police responses to protesters.

From Tunis to Minneapolis—and Beyond—Social Media Keeps Us Connected

Mon, 06/01/2020 - 07:25

In January 2011, after hearing about the unrest unfolding in Sidi Bouzid, Tunisian blogger Lina Ben Mhenni (who passed away in January of this year from a chronic illness) began traveling around the country to document the nascent protests and the government’s response to them.

“There are no journalists doing this,” she told Newsweek at the time. “And moreover, the official media started to tell lies about what was happening.”

Despite widespread censorship and surveillance both online and off, and the fact that her own blog, Facebook, and Twitter accounts were blocked by the Ben Ali government, Ben Mhenni chose to continue blogging under her real name, saying “Even if you use a nickname, they can reach you.”

Ben Mhenni’s reports from Tunisia’s interior were invaluable at a time when foreign press had limited access to the country, and domestic media had its hands tied either by fear, cooptation, or censorship. Her bravery and ingenuity helped both Tunisians and the rest of the world understand what was happening in the country—information that for better or worse helped spark protests elsewhere in the region.

Her story speaks to the importance of a free press, but it also speaks to the dire need for citizen documentation and a free and open internet. At this very moment in the United States, citizens across the country are sharing images, videos, opinions, and analysis, often on social media platforms that many see as trivial. And while the U.S. still has an ostensibly free press—and indeed, many courageous journalists both freelance and otherwise willing to put themselves on the front lines to capture the zeitgeist—over the course of the last few days, members of the mainstream press have been assaulted and detained by police while reporting. Furthermore, even in the best of times, the press cannot be everywhere at once, nor can we rely upon them to get every story...or report without ingrained bias.

Like many of today’s citizen journalists and documentarians, Ben Mhenni was not neutral. She was a revolutionary, and her online activities consisted of activism as well as documentation—just like many of the brave individuals raising their voices online right now.

As with the protests that rocked Tunisia in 2011, social media has been vital to those calling for justice and accountability in the face of police violence against Black people in the United States, both in terms of raising awareness and support, and in terms of providing a space for alternative reporting. As Ashley Yates, a prominent leader and organizer during the uprising in Ferguson, MO, told a reporter in 2016, “We started to use Twitter and Facebook and Instagram as a way to just get the word out, to contrast the stark mainstream media blackout that was occurring.” Or, as activist Deray McKesson succinctly put it: “...In Ferguson we became unerased, and that was solely because of social media. We didn’t invent resistance, we didn’t discover injustice. The only thing that is different about this movement is our ability to story tell it and use the power of storytelling as actual power.”

But just as Ben Mhenni faced censorship, so too have many of the observers and participants in the demonstrations. In the years since the 2012 killing of Trayvon Martin, we (and many others) have documented numerous instances where tech platforms have wrongly removed posts by activists supportive of the movement for Black lives. 

While the current media cycle focuses on Twitter’s decision to fact-check President Trump, it can be easy to forget that those most impacted by corporate speech controls are not politicians, celebrities, or right-wing provocateurs, but some of the world’s most vulnerable people who lack the access to corporate policymakers to which states and Hollywood have become accustomed.

The slippery slope of platform censorship began not with the fact-checking of the U.S. president or the banning of Alex Jones, but with the silencing of Moroccan atheists, Egyptian activists, indigenous women, Syrian citizen journalists, the LGBTQ community, and countless others.

And as we continue to debate what to do about platforms, it is vital that we do not lose sight of that.

Dangers of Trump’s Executive Order Explained

Mon, 06/01/2020 - 00:41

This is one of a series of blog posts about President Trump's May 28 Executive Order. Links to other posts are below.

The inaptly named Executive Order on Preventing Online Censorship (EO) is a mess on many levels: it’s likely unconstitutional on several grounds, built on false premises, and bad policy to boot. We are no fans of the way dominant social media platforms moderate user content. But the EO, and its clear intent to retaliate against Twitter for marking the president’s tweets for fact-checking, demonstrates that governmental mandates are the wrong way to address concerns about faulty moderation practices.

The EO contains several key provisions. We will examine them in separate posts linked here:

1. The FCC rule-making provision
2. The misinterpretation of and attack on Section 230
3. Threats to pull government advertising
4. Review of unfair or deceptive practices

Although we will focus on the intended legal consequences of the EO, we must also acknowledge the danger the Executive Order poses even if it is just political theater and never has any legal effect. The mere threat of heavy-handed speech regulation can inhibit speakers who want to avoid getting into a fight with the government, and deny readers information they want to receive. The Supreme Court has recognized that “people do not lightly disregard public officers’ thinly veiled threats” and thus even “informal contacts” by government against speakers may violate the First Amendment.

The EO’s threats to free expression and retaliation for constitutionally-protected editorial decisions by a private entity are not even thinly veiled: they should have no place in any serious discussion about concerns over the dominance of a few social media companies and how they moderate user content.

That said, we too are disturbed by the current state of content moderation on the big platforms. So, while we firmly disagree with the EO, we have been highly critical of the platforms’ failure to address some of the same issues targeted in the EO’s policy statement, specifically: first, that users deserve more transparency about how, when, and how much content is moderated; second, that decisions often appear inconsistent; and, third, that content guidelines are often vague and unhelpful. Starting long before the president got involved, we have said repeatedly that the content moderation system is broken and called for platforms to fix it. We have documented a range of egregious content moderation decisions (see our Takedown Hall of Shame and TOSsed Out projects). We have proposed a human rights framing for content moderation called the Santa Clara Principles, urged companies to adopt it, and then monitored whether they did so (see our 2018 and 2019 Who Has Your Back reports).

But we have rejected government mandates as a solution, and this EO demonstrates why it is indeed the wrong approach. In the hands of a retaliatory regime, government mandates on speech will inevitably be used to punish disfavored speakers and platforms, and for other oppressive and repressive purposes. Those decisions will disproportionately impact the marginalized. Regardless of the dismal state of content moderation, it is truly dangerous to put the government in control of online communication channels.

The EO requires the Attorney General to “develop a proposal for Federal legislation that would be useful to promote the policy objectives of this order.” This is a dangerous idea generally because it represents another unwarranted government intrusion into private companies’ decisions to moderate and curate user content. But it’s a particularly bad idea in light of the current Attorney General’s very public animus toward tech companies and their efforts to provide Internet users with secure ways to communicate, namely through end-to-end encryption. Attorney General William Barr already has plenty of motivation to break encryption, including through the proposed EARN IT Act; the EO’s mandate gives Barr more ammunition to target Internet users’ security and privacy in the name of promoting some undefined “neutrality.”

Some have proposed that the EO is simply an attempt to bring some due process and transparency to content moderation. However, our analysis of the various parts of the EO illuminates why that’s not true.

What about Competition?

For all its bluster, the EO doesn’t address one of the biggest underlying threats to online speech and user rights: the concentration of power in a few social media companies.

If the president and other social media critics really want to ensure that all voices have a chance to be heard, if they are really concerned that a few large platforms have too much practical power to police speech, the answer is not to create a new centralized speech bureaucracy, or promote the creation of fifty separate ones in the states. A better and actually constitutional option is to reduce the power of the social media giants and increase the power of users by promoting real competition in the social media space. This means eliminating the legal barriers to the development of tools that will let users control their own Internet experience. Instead of enshrining Google, Facebook, Amazon, Apple, Twitter, and Microsoft as the Internet’s permanent overlords, and then striving to make them as benign as possible, we can fix the Internet by making Big Tech less central to its future.

The Santa Clara Principles provide a framework for making content moderation at scale more respectful of human rights. Promoting competition provides a way to make the problems caused by content moderation by the big tech companies less important. Neither of these seem likely to be accomplished by the EO. But the chilling effect the EO will likely have on hosts of speech, and, consequently, the public—which relies on the Internet to speak out and be heard—is likely very real.