EFF News feed

Should You Really Delete Your Period Tracking App?

3 hours 15 minutes ago

Since a draft of the Supreme Court’s Dobbs decision depriving people of the right to abortion leaked last month, some have advised deleting period tracking apps to prevent that data from being used to target people seeking abortion care. But it’s useful to distinguish between the security and privacy threats that abortion seekers are actively experiencing now and threats that may come in the future. Dragnet surveillance of period tracking apps in order to identify people with menstrual irregularities or other supposed evidence of a terminated pregnancy fits squarely in the latter category.

So, should you delete your period tracking app? The short answer is: not necessarily. You may want to review your choice of app, along with other digital practices depending on what kinds of privacy invasions and threats you are most concerned about. Abortion seekers face much more urgent threats right now, and period tracking apps are not at the top of the list of immediate concerns. In the meantime, the companies behind period tracker apps have some serious shaping up to do, and legislators must move forward common-sense privacy legislation to protect not only health-related data but the full range of consumer data that could be weaponized against abortion seekers.

More Immediate Threats

Right now, the most common scenario in which people are criminalized for their pregnancy outcomes is when a third party—like hospital staff, a partner, family member, or someone else they trust—turns them in to law enforcement, who may pressure them into a device search. The most common types of evidence used in the resulting investigations are text messages, emails, browser search histories, and other information that could straightforwardly point to someone’s intention to seek an abortion. This type of criminalization is nothing new, and it has disproportionately affected people of color and people dependent on state resources.

With that immediate scenario in mind, think carefully about who you trust with information about your pregnancy. Use end-to-end encrypted messengers with disappearing messages turned on whenever possible. This functionality is available on both WhatsApp and Signal, and we have step-by-step guides for how to turn it on for Signal on iOS and Android. Refer to our security tips for people seeking an abortion and Surveillance Self-Defense guides for the abortion movement for information about other privacy considerations and steps.

Of course, it’s still important to prepare for future threats and make sure you have a period tracking method that works for you and protects your privacy. Read on to learn more about how period tracking apps work, what to look for in choosing one, and what companies can do to protect their users.

What Period Tracking Apps Collect and How It Can Be Abused

Besides the information around your reproductive health that you would expect your period tracking app to collect, there is a wide array of other data it may be collecting: your phone’s device identifier, the location you are using the app from, the Ad ID that your phone uses as a nametag to communicate with advertisers across all your apps, your contact list, photos, and more. Individually, some of these pieces of data may seem relatively harmless, but they can also be combined and shared across the huge industry of web tracking and advertising. Anyone, not just advertisers, may be able to buy the resulting datasets. It isn’t a far reach to imagine dire consequences from this data collection and sharing—but again, this is not the primary strategy being used to criminalize abortion seekers right now.

Also remember that, just because you delete an app from your phone, the data you’ve generated with it can remain on that app’s servers and anywhere else the company has shared the data it collected. Once there, it is very difficult to delete the data and to confirm it has actually been deleted.

This is why it’s important to be especially careful in choosing a period tracker app that is mindful of user privacy.

Choosing the Right Period Tracking App

If you’re using a period tracker already, consider switching to a more privacy-focused app. Consumer Reports, for instance, analyzed a number of period trackers and found Euki, Drip, and Periodical to be on the side of users when it comes to data retention policies and practices as well as avoiding third-party trackers.

Regardless of the app you choose, carefully examine its privacy settings and privacy policy statements. The privacy settings page is where you are able to configure different controls based on your preferences for how the app ought to collect and share your data. In fact, some apps don’t even have a privacy settings page—if yours doesn’t, consider it a red flag.

The privacy policy statement is where the app will describe the ways in which it manages, collects, and stores your data. These statements are often confusing and full of inaccessible legalese, so just do your best and don’t be discouraged if making sense of it is a challenge. Look for specific sections with phrases like “Data Collection” and “Sharing.” These paragraphs are often where you’ll find how the app plans on collecting your data and sharing it with others (often for a profit). A keyword search for phrases like “legal process” or “subpoena” will usually point to the section on how the provider will respond to police demands. For example, searching for these terms in Flo’s privacy policy reveals that Flo will share your info with police “to the extent permitted and as restricted by law.” Note the word “permitted,” not “required”: Flo reserves the right to voluntarily comply with a police data demand so long as it is not specifically illegal to do so.

Companies and Lawmakers Must Step Up

Any company that collects user information must prepare for the possibility that data from their apps will be used to target or punish people seeking abortion and other reproductive healthcare. Our recommendations for tech companies in a post-Roe world certainly apply to the sensitive data and high level of user trust involved in period tracking apps.

And we shouldn’t have to hope companies will implement best practices out of the kindness of their hearts. Proposed legislation at the state and federal level would strengthen data privacy protections for health information. But we know that information directly related to health isn’t the only kind of data that could be used to target abortion seekers or any number of vulnerable, targeted populations. That’s why passing comprehensive state and federal consumer data privacy legislation is one of our strongest tools to prepare for future privacy threats, both the ones we can imagine and the ones that have yet to emerge.

Users shouldn’t have to scramble to find a tool that both helps them manage their health and protects their privacy. While we do not believe data from period tracking apps is an immediate threat, it could be in the future—and that is why now is the time for companies and lawmakers to take concrete steps to minimize the potential for harm.

Gennie Gebhart

EFF to File Amicus Brief in First U.S. Case Challenging Dragnet Keyword Warrant

3 hours 48 minutes ago

Should the police be able to ask Google for the name of everyone who searched for the address of an abortion provider in a state where abortions are now illegal? Or who searched for the drug mifepristone? What about people who searched for gender-affirming healthcare providers in a state that has equated such care with child abuse? Or everyone who searched for a dispensary in a state that has legalized cannabis but where the federal government still considers it illegal?

The answer is no. And in an amicus brief EFF intends to file today in Colorado, we explain why these searches are totally incompatible with constitutional protections for privacy and freedom of speech and expression.

The case is People v. Seymour, and it is perhaps the first U.S. case to address the constitutionality of a keyword warrant. The case involves a tragic home arson in which several people died. Police didn’t have a suspect, so they used a keyword warrant to ask Google for identifying information on anyone and everyone who searched for variations on the home’s street address in the two weeks prior to the arson.

Like geofence warrants, keyword warrants cast a dragnet that requires a provider to search its entire reserve of user data—in this case queries by more than one billion Google users. As in this case, the police generally have no identified suspects when they obtain a keyword search warrant. Instead, the sole basis for the warrant is the officer’s hunch that the suspect might have searched for something in some way related to the crime.

Keyword warrants are possible because it is virtually impossible to navigate the modern Internet without entering search queries into a search engine. By some accounts, there are over 1.15 billion websites, and tens of billions of webpages. Google Search processes as many as 100,000 queries every second. Many users have come to rely on search engines to such a degree that they routinely search for the answers to sensitive or unflattering questions that they might never feel comfortable asking a human confidant, even friends, family members, doctors, or clergy. Over the course of months and years, there is little about a user’s life that will not be reflected in their search keywords, from the mundane to the most intimate. The result is a vast record of some of users’ most private and personal thoughts, opinions, and associations.

Google appears to keep a record of every single search—even if a user isn’t logged into a Google account at the time. Google links search queries to IP addresses and ISP information and discloses that information to police in response to a keyword warrant. Given this, it is very difficult for the average person to hide their Internet searches from Google—and, by extension, the government.

All keyword warrants have the potential to implicate innocent people who just happen to be searching for something an officer believes is somehow linked to the crime. For example, the warrant in People v. Seymour sought everyone who searched for a specific address on “Truckee” street, where the crime took place. However, there are streets named “Truckee” in several cities and towns in Colorado, as well as in Arizona, California, Idaho, and Nevada. Keyword warrants could also allow officers to target people based on political speech and by their association with others by requesting information on everyone who searched for the location or the organizers of a protest. Police used similar dragnet warrants to try to identify people at political protests in Kenosha, Wisconsin and Minneapolis after police killings in those cities.

In our brief, we will be telling the court that keyword warrants—which explicitly target protected speech and the right to receive information—are overbroad and violate both the U.S. and Colorado state constitutions. Search engines are an indispensable tool for finding information on the Internet, and the right to use them—and use them anonymously—is critical to a free society. If providers can be forced to disclose users’ search queries, this will chill users from seeking out information on anything that might result in government scrutiny.

The U.S. Supreme Court and the Colorado Supreme Court have both recognized that police searches that target speech are so concerning that they should be reviewed with “heightened scrutiny.” The Colorado Supreme Court has taken this one step further. In a 2002 case called Tattered Cover v. Thornton, the court noted that there are some circumstances where a search so threatens freedom of expression, association, and the right to receive information that “the police should be entirely precluded from executing the warrant.” This is just such a case.

Dragnet warrants that target speech have no place in a democracy. We will continue to fight to convince courts and legislatures they are unconstitutional and should be outlawed.

EFF Amicus Brief in People v. Seymour

Jennifer Lynch

Digital Rights Updates with EFFector 34.4

7 hours 2 minutes ago

Want the latest news on your digital rights? Well, you're in luck! Version 34, issue 4 of our EFFector newsletter is out now. Catch up on the latest EFF news by reading our newsletter or listening to the audio version below. This issue covers EFF's current work on reproductive rights issues, including digital safety tips for those seeking an abortion and why we support the "My Body, My Data" Act.

LISTEN ON YouTube

EFFECTOR 34.4 - My Body, My Data

Make sure you never miss an issue by signing up by email to receive EFFector as soon as it's posted! Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Keeping Your Smart Home Secure & Private

9 hours 26 minutes ago

Here at EFF, we fight hard to ensure your security and privacy rights are maintained in the digital world. Back when we were founded in 1990, a dream of a world united by the internet was accompanied by forward-thinking visions of connected devices of all kinds making our lives more convenient and luxurious. The last two decades have seen the internet move from living-room and office terminals to our phones, watches, appliances and lighting fixtures. And although so-called smart devices and the Internet of Things (IoT) have allowed us to automate some aspects of our lives, they’ve also been plagued with privacy and security problems, giving hackers and data miners unprecedented access to our personal and behavioral information.

Large botnets such as the well-known Mirai and the more recent Fronton—both made up of internet-connected IoT devices—have caused significant damage and have given IoT a terrible reputation when it comes to security. Governments have started to take note, and the passage of the IoT Cybersecurity Improvement Act of 2020 in the US, while welcome, has only begun to tackle this issue. On the privacy front, our connected devices and appliances are delivering potentially hundreds of discrete data points per day to companies without any meaningful limits on, or insight into, what they are doing with this data. And homeowners who wish to add smart devices to their homes are often directed to install apps which control these devices but also deliver data to third parties without notification.

Mozilla provides a useful tool, *privacy not included, for looking up what your smart devices may be sending to the cloud. If, for instance, you own a Furbo Dog Camera with Dog Nanny, you are subject to a privacy policy which states Furbo can “collect any audio, video or pictures you create, upload, save or share” and “collect video and audio information of individuals when they pass in front of the camera or speak when the Furbo Dog Camera is on.” Unfortunately, this policy is not atypical. Researchers at Northeastern University and Imperial College London found in a survey of IoT devices across the industry that 72 of the 81 they looked at were sending information to third parties.

Adding connected automation and functionality to the home while preserving one’s privacy and security can seem a daunting and difficult task. Many otherwise enthusiastic consumers have encountered untold frustrations and become victims of the failures of a data-hungry industry. These difficulties have even prompted some users to abandon smart devices altogether.

Despair not, for there is hope. In the last few years, numerous projects and protocols have been and are actively being developed which bring a greater degree of privacy and security to the connected home. And it all starts by moving the orchestration of all those devices from the cloud into your own network, with the help of a device called a “hub.”

Coordinating Your Smart Devices Locally With Home Assistant

Ideally, using a local hub gives us two benefits. It

  1. allows us to remove all the individual apps controlling the wide array of smart devices we may have, and
  2. ensures we are not delivering data about our device usage (and thus behaviors) to unaccountable third parties or companies.

However, not all hubs sever the device’s ties to the cloud completely—additional steps are often needed for this. Keep in mind that even if you do wish to disconnect your devices from the cloud, you will need some way to regularly update the firmware on those devices, something that is otherwise often done automatically when they are networked.

For any local hub, you’ll need the hardware and a way to connect to it, usually an app on your smartphone. The hardware is usually a small machine which connects to your local network and allows the user a way to access it. For simplicity, there are commercial products available that just work out of the box. Hubitat offers a local hub for sale in the range of $100 USD.

For the more technically inclined, Home Assistant (HA) is open-source, community-driven hub software that can be installed on a variety of platforms, such as a Raspberry Pi or an old laptop you have lying around collecting dust. It doesn’t require much processing power or memory to operate—any Raspberry Pi 3B+ or later will do the job just fine. In this post, we’ll be describing a typical privacy-preserving, high-level IoT layout using HA.

After installing HA, you’ll be able to add devices through a concept HA calls “integrations”; each integration allows the user to control a device or whole category of devices. The variety of integrations provided is vast, and this is where the benefit of community-driven development really shines: even if your device isn’t specifically supported, it is probably available through the unofficial Home Assistant Community Store (HACS).

One nice thing is that HA will indicate if an integration relies on the cloud. You can see this with an icon in the upper-right corner of your integration.

For integrations which do not rely on the cloud, you may want to block the device from internet connectivity entirely. While most smart electronics don’t make this easy, if you have a home firewall or configurable router you may be able to restrict the device to connections within your local network. On OpenWrt, for instance, you can add firewall rules through the LuCI web interface, specifying the MAC addresses of devices that should only be allowed to reach the LAN, not the internet. Your configuration will vary based on your device MAC addresses and local network setup.
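
As a rough sketch of what such a rule can look like in OpenWrt’s /etc/config/firewall, consider the snippet below. The rule name and MAC address are placeholders, and it assumes OpenWrt’s default “lan” and “wan” zones:

    # /etc/config/firewall (sketch; rule name and MAC address are placeholders)
    # Reject forwarding from this device's MAC to the internet (wan zone),
    # while leaving its access to the local network (lan zone) untouched.
    config rule
            option name 'Block-IoT-device-WAN'
            option src 'lan'
            option src_mac 'AA:BB:CC:11:22:33'
            option dest 'wan'
            option proto 'all'
            option target 'REJECT'

Traffic from the device to other machines on the LAN, including your hub, is unaffected; only its path to the internet is refused.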

In particularly nasty cases, a device may refuse to operate until it is able to reach the internet, even if it can be controlled locally (via a non-cloud integration). In most cases, however, a device will continue to allow local control when its internet connection is severed.

We now have a way to connect our existing smart devices to a local network hub and remove them from the internet.
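
To illustrate what local orchestration can look like once devices are connected to the hub, here is a minimal Home Assistant automation sketch. The entity names (binary_sensor.front_door, light.hallway) are hypothetical placeholders; yours will differ:

    # automations.yaml (sketch; entity names are placeholders)
    - alias: "Hallway light on when front door opens"
      trigger:
        - platform: state
          entity_id: binary_sensor.front_door   # e.g. a door sensor paired to the hub
          to: "on"
      condition:
        - condition: sun
          after: sunset                          # only run after dark
      action:
        - service: light.turn_on
          target:
            entity_id: light.hallway

Because the hub evaluates this rule itself, the automation keeps working, and no usage data leaves your network, even when the devices involved are blocked from the internet.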

Using Zigbee or Z-Wave to Create a Private Smart Mesh

Zigbee and Z-Wave are two wireless open protocols which were developed specifically for smart devices, and operate on a different network entirely than your home Wi-Fi (802.11) network. This provides a level of separation between smart devices supporting Zigbee/Z-Wave and the internet by design—though that separation is not necessarily maintained when an untrustworthy hub is used. Many companies provide Zigbee or Z-Wave hubs which will send your data and device status over the internet. This is why using a hub that is privacy-focused, such as the ones mentioned above, is important to keeping your data private.

In addition, both Zigbee and Z-Wave create a mesh network of your smart devices, which greatly improves the range of the devices. As long as there is another Zigbee device within range, a new Zigbee-enabled smart device can join the network through it, without having to be in range of the hub. This also allows for a theoretically limitless expansion of the network. Communication between devices and the coordinator (hub) is carried out relatively securely, using 128-bit symmetric keys and CCM mode to cryptographically secure traffic, though adding devices relies on an open trust model that trusts a device upon initial pairing (similar to Trust On First Use). Unfortunately, Zigbee and Z-Wave are separate protocols which do not interoperate with one another. In this example, we will demonstrate a Zigbee configuration, though Z-Wave is similar in operation and both can be used in combination with HA.

In order to communicate with Zigbee devices, a Zigbee USB gateway is needed. Once it is plugged into the HA machine, the hub can use the Zigbee Home Automation (ZHA) integration, which does not use the cloud, to discover new Zigbee devices, control them, display sensor data from them, and so on—and all this information is kept safe on your local hub.

Instead of directly interfacing the Zigbee USB gateway with HA, the USB device can communicate with a piece of versatile bridging software, zigbee2mqtt. The advantage of using zigbee2mqtt is that it translates all your Zigbee device communications to the MQTT protocol, which is an ultra-lightweight protocol for transferring data and administering devices. As such, it is quickly becoming a universal language for IoT devices. zigbee2mqtt supports a wide range of Zigbee devices, and allows you to control delivery of OTA firmware updates. It supplies a standalone web interface which can be used to control devices, but is most often used as a piece of middleware to supply automation software (like HA) with Zigbee device control. To use it with HA, you can simply use the MQTT integration.
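
For reference, a minimal zigbee2mqtt configuration.yaml might look something like the sketch below. The broker address and serial port are placeholders that depend on your setup, and zigbee2mqtt expects an MQTT broker (such as Mosquitto) running on your network, which Home Assistant’s MQTT integration then connects to as well:

    # zigbee2mqtt configuration.yaml (sketch; server address and serial port are placeholders)
    homeassistant: true          # publish discovery info so HA's MQTT integration finds the devices
    permit_join: false           # only set to true temporarily while pairing a new device
    mqtt:
      base_topic: zigbee2mqtt
      server: mqtt://192.168.1.10:1883   # your local MQTT broker (e.g. Mosquitto), not a cloud service
    serial:
      port: /dev/ttyUSB0         # the Zigbee USB gateway
    frontend:
      port: 8080                 # optional standalone web interface
    # OTA firmware update delivery can also be configured here; see the zigbee2mqtt documentation.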

You can refer to the Zigbee Device Compatibility Repository to see which devices are supported by ZHA and zigbee2mqtt, and choose the option that is right for you.

Taking Back Control of Your Smart Devices

IoT security and privacy is an incredibly fraught subject, and in general manufacturers are extremely liberal with your data and its storage. Cloud control of devices not only creates possible single points of failure and a lucrative target for malicious hackers, it also adds an extra layer of complication whereby a user needs to install (and keep installed) as many apps as they have device vendors in their home. Some of these issues are slowly being addressed by initiatives such as Matter, but convenience and security are the focus of this new standard—user privacy is still delegated to the vendor, not the users themselves.

Hopefully we’ve shown one way to set up your smart home without sacrificing privacy and security for the sake of convenience. With a little extra effort, it is possible to get the most out of our smart devices without falling into the pitfalls and failures of IoT design.

Bill Budington

The Journalism Competition and Preservation Act Will Produce Neither Competition Nor Preservation

1 day 2 hours ago

In response to the very real pressures that online news outlets are facing, Congress continues to believe the very flawed Journalism Competition and Preservation Act (JCPA) is a magic solution. It is not. In fact, it is actively dangerous. And there’s a better solution available.

The way the JCPA is supposed to work is by giving an antitrust exemption to news sites, allowing them to negotiate as a bloc with sites like Google and Facebook, with the goal of getting paid every time those sites link to news articles. There are a few major, fundamental problems with that premise. For one, creating a new cartel to deal with existing monopolists is not competition, it’s the opposite. For another, creating an implicit right to control linking in any context won’t preserve journalism, it will let it rot away. Finally, the focus on getting paid for links makes even less sense when the problem, historically, has been the domination of the digital ad market by a few huge players. The Competition and Transparency in Digital Advertising Act actually targets that specific problem much more effectively than the JCPA.

Competition? Not Really.

As mentioned above, competition doesn’t flourish when a group—even one of smaller newsrooms—is allowed to form a cartel. It just means that both sides of this fight are now huge. Proposed changes to the bill would limit the organizations that could get compensation under this scheme to publications with 1,500 employees or fewer. But that won’t preserve competition, because the loss of local and independent news has already happened. Many smaller publications are now owned or backed by large corporations and venture capital funds. And the industry is consolidating at a rapid rate.

The large corporations and investment vehicles that dominate online journalism took advantage of the mess created by Facebook and Google’s ad domination. And the JCPA would allow them to reap the rewards of buying up, laying off, and click-baiting these newsrooms. That’s infuriating.

Preservation? Also Not Really.

It’s equally untenable to restrict who can link to publicly available pages on the web. That implies a sort of property right in links, an ownership of how information is shared. That has grave consequences for the entire internet, which depends on the ability to link to information sources from far and wide. Linking isn’t copyright infringement, at least under current law.  But the JCPA risks creating a new quasi-copyright law for linking, or even leading the courts to extend copyright law to cover some forms of linking.

Even if it applies only to Facebook and Google, the JCPA would act as a link tax. Link taxes have not worked where they have been tried, in places like Australia and the European Union. And in those cases, there wasn’t a First Amendment to consider. The JCPA is also reported to prevent companies from simply refusing to link to certain outlets to avoid paying, which encroaches on those companies’ free speech rights to refuse certain content. Just as the law can’t require newspapers to include every viewpoint on a topic, it can’t require a news aggregator or search tool to link to sources it chooses not to feature.

And without a likely unconstitutional “must carry” provision, news aggregators and search engines will simply refuse to link to news outlets that demand payment, meaning that some of the most reliable sources of news and information will become far less accessible to the public.

That doesn’t just affect Google and Facebook. That affects everyone who shares articles online. It even affects journalists in smaller newsrooms, who base their reporting on the earlier reporting of others and link back to those stories. That’s good journalistic practice. It lets readers see where information is coming from and trace a story back to its inception. It’s the internet equivalent of a footnote. If it suddenly becomes fraught to link, readers lose valuable information and context.

Fix the Ads, Not the Links

Tech giants like Google and Facebook have indeed harmed journalism, but not by providing links to articles. Rather, their control of digital advertising markets and the vast majority of data in those markets means they can squeeze publications and advertisers by extracting higher shares of advertising revenue.

For example, starting in 2015, many online media companies started “pivoting to video,” gutting their traditional newsrooms and spending large amounts of money to build video journalism operations from scratch. Part of the impetus for that pivot was metrics showing that audiences preferred video to text—metrics provided, in large part, by Facebook. In 2014, Facebook claimed that “Facebook has averaged more than 1 billion video views every day.”

Those metrics turned out to be grossly inflated, by as much as 60 to 80 percent. Advertisers like video more than print, since video ads are harder to ignore than ads that can be scrolled past in a text post. And they were told people were watching these videos. Facebook and the like want more video to run ads in because it allows them to make more money. And by claiming that this is what "readers want," news media could be manipulated into creating more video.

Because the preference for video did not, in fact, extend to viewers, the pivot to video was devastating for news media—especially new, independent outlets who had placed a huge bet on costly video content based on Facebook’s misleading metrics. And none of that harm is related to news aggregation or linking. It’s related to the size and power of Facebook’s advertising division. With Facebook and Google dominating online advertising, publishers had no choice but to believe the metrics those companies were reporting. If there had been alternative ad networks and other effective business models for news media, there would have been more metrics to give the whole story—that Facebook’s numbers only held up if you counted someone as “watching” a video if three seconds of the video happened to play.

The Competition and Transparency in Digital Advertising Act, also called the Digital Advertising Act or DAA, targets the problem at its source. It breaks up the ad market into four components and prevents companies making more than $20 billion a year in advertising revenue from owning more than one of those components at a time.

Splitting ad empires apart holds the promise of a fairer ad market. Separating tech companies’ content and app businesses from their ad businesses, and splitting the sell-side and buy-side of ad technology, will make self-preferencing, bid-rigging, and other forms of fraud and cheating less profitable and easier to detect. This will help media producers and individual creators get their rightful share of revenue from the ads that run against their work, and it will help protect small businesses and other advertisers from being price-gouged or defrauded by powerful, integrated ad-tech businesses.

This is the answer we need, not the JCPA.

Katharine Trendacosta

EFF to European Court: Keep Encryption Alive

2 days 2 hours ago

While encryption has been under attack in recent days, it’s still essential for private and secure electronic communications, especially for human rights defenders and journalists. EFF and our partners recently argued for the essentiality of encryption in a case before the European Court of Human Rights (ECtHR).  

In Telegram Messenger LLP and Telegram Messenger Inc. v. Russia, the company behind the popular messaging app Telegram refused to hand over confidential private user information to the Russian Federal Security Service. A Russian court subsequently convicted and fined the company, and access to Telegram was briefly blocked in Russia. Following those actions, Telegram asked the ECtHR to find violations of its right to free expression, right to a fair trial, and right to adequate legal remedies.

EFF and our partners filed an amicus brief before the ECtHR, asking the court to safeguard encrypted online communications. Electronic communications have become one of the main avenues for seeking and receiving information and for participating in activities and discussions, including on issues of political and social importance. Free expression rights are at stake, both for the individuals who use encrypted electronic messaging applications and for the intermediaries that offer them.

As the UN Special Rapporteur on the Freedom of Opinion and Expression noted in his 2015 report, privacy is a “gateway for freedom of opinion and expression.” Encryption is key to free and safe communications, particularly for journalists, human rights defenders, lawyers, activists, and dissidents. That is why we invited the ECtHR to consider the risks to Internet users in Ukraine, and activists in Russia, if their private communications or their identities were revealed to the Russian authorities in the course of the current conflict.

Finally, EFF and our partners underlined the importance of adhering to the principles of necessity and proportionality. Requests to receive keys to access the communications of all users in order to access the communications of a particular individual are unlikely to be deemed necessary and proportional. Neither is blocking the intermediary for failure to comply with an order that requests encrypted user communications.  

The fate of this case and its enforcement is currently unclear. Russia was excluded from the Council of Europe following the invasion of Ukraine. EFF and partners hope that the ECtHR will view encryption as a public good. The court should uphold high standards for any restriction to freedom of expression using encrypted electronic services. 

You can read our complete brief here: 

Meri Baghdasaryan

EFF's Statement on Dobbs Abortion Ruling

6 days 9 hours ago

Today's decision deprives millions of people of a fundamental right, and also underscores the importance of fair and meaningful protections for data privacy. Everyone deserves to have strong controls over the collection and use of information they necessarily leave behind as they go about their normal activities, like using apps, running search engine queries, posting on social media, and texting friends. But those seeking, offering, or facilitating abortion access must now assume that any data they provide online or offline could be sought by law enforcement.

People should carefully review privacy settings on the services they use, turn off location services on apps that don’t need them, and use encrypted messaging services. Companies should protect users by allowing anonymous access, stopping behavioral tracking, strengthening data deletion policies, encrypting data in transit, enabling end-to-end message encryption by default, preventing location tracking, and ensuring that users get notice when their data is being sought. And state and federal policymakers must pass meaningful privacy legislation. All of these steps are needed to protect privacy, and all are long overdue.

More resources are available at our reproductive rights issue page. 

Cindy Cohn

The Bipartisan Digital Advertising Act Would Break Up Big Trackers

1 week ago

In May, Senators Mike Lee, Amy Klobuchar, Ted Cruz, and Richard Blumenthal introduced the “Competition and Transparency in Digital Advertising Act.” The bill, also called the “Digital Advertising Act” or just “DAA” for short, is an ambitious attempt to regulate, and even break up, the biggest online advertising companies in the world.

The biggest trackers on the internet, including Google, Facebook, and Amazon, are all vertically integrated. This means they own multiple parts of the digital advertising supply chain, from the apps and websites that show ads to the exchanges that sell them and the warehouses of data that are used to target them. These companies harm users by collecting vast amounts of personal information without meaningful consent, sharing that data, and selling services that allow discriminatory and predatory behavioral targeting. They also use vertical integration to crush competition at every level of the market, preventing less-harmful advertising business models from gaining a foothold.

The DAA specifically targets vertical integration in the digital advertising industry. The bill categorizes ad services into, roughly, four kinds of business:

  • Publishers create websites and apps, and show content directly to users. They sell ad space around that content.
  • Ad exchanges run auctions for ad space from many different publishers, and solicit bids from many different advertisers.
  • Sell-side brokerages work with publishers to monetize their ad space on exchanges. These are sometimes called “supply-side platforms” in the industry.
  • Buy-side brokerages work with advertisers to buy ad space via exchanges. These are sometimes called “demand-side platforms” in the industry.

In broad strokes, the bill would prevent any company that makes more than $20 billion per year in advertising revenue from owning more than one of those components at a time. It also creates new obligations for advertising businesses to operate fairly, without self-preferencing, and prohibits them from acting against the interests of their clients. The bill is complex and nuanced, and we will not analyze every provision of it here. Instead, we will consider how the main ideas behind this bill might affect the internet if enacted.

How would this affect the real world?

The DAA would likely apply to all three of the biggest ad tech companies in the world: Meta, Amazon, and Google. As we’ll describe, all of these companies act as both publishers and service providers at multiple levels of the ad tech “stack.”

Meta is a “publisher” because it operates websites and apps that serve content to users directly, including Facebook, Instagram, WhatsApp, and Oculus. It also operates a massive third-party ad platform, called “Audience Network,” which sells ad space in “thousands” of third-party apps that reach “over 1 billion” people each month. Audience Network essentially acts as a supply-side platform, a demand-side platform, and an exchange at the same time. Furthermore, Meta uses both its user-facing apps and those “thousands” of third-party Audience Network apps to gather data about our online behavior. The data it gathers about users on its social media platforms are used to target them in Audience Network apps; those apps, in turn, collect yet more data about user behavior. This kind of cross-platform data collection is common to all of the ad tech oligarchs, and it helps them target users more precisely (and more invasively) than their smaller competitors.

Amazon has been rapidly developing its own advertising business. While online advertising was once widely viewed as a duopoly of Google and Facebook, today the ad market is better characterized as a triopoly. Amazon operates several third-party advertising services, including Amazon DSP, an analytics platform called Amazon Attribution, and a supply-side ad server called Sizmek Ad Suite. It also sells ad space on Amazon properties like its flagship website amazon.com, its Kindle e-readers, Twitch.tv, and its many video streaming services. Like Facebook, Amazon can use data about user behavior on its own properties to target them on third-party publishers and vice versa.

Google is the biggest of all. It makes billions of dollars selling ads on its user-facing services, including Google Search, YouTube, and Google Maps. But behind the scenes, Google’s ad infrastructure is even more expansive. Google operates at least ten different components that handle different parts of the ad business for different kinds of clients. Its ad exchange (AdX, formerly Doubleclick Ad Exchange), supply-side platform (Google Ad Manager, formerly Doubleclick for Publishers) and mobile ad platform (AdMob) all dominate their respective market segments. Its trackers, inserted into third-party websites, are far and away the most common on the web. And in addition to the massive information advantage it has over competitors, Google has repeatedly been accused of using its different components to secretly self-preference and directly undermine competition. As a result, the company is currently the subject of several different antitrust investigations around the world.

All of these companies likely meet the revenue threshold specified by the DAA. That means if the bill becomes law, all three may be required to divest their advertising businesses. Google could operate YouTube and Search, or the infrastructure that serves ads on those sites, but not both. Furthermore, if all of its advertising components were spun off into one “Google Ads” conglomerate that still made over $20B in revenue, the resulting company would have to choose among its ad exchange, its supply-side platforms, and its demand-side platforms, and spin off its other parts. Essentially, the ad giants will have to break themselves into component parts until each component either falls below the revenue threshold or operates just one layer of the ad tech stack.

Why do break-ups matter?

Google and Facebook build user-facing platforms, but their main customers are advertisers. This central conflict of interest manifests in design choices that sell out our privacy. For example, Google has made sure that Chrome and Android keep sharing private information by default even as competing browsers and operating systems take a more pro-privacy stance. When advertiser interests conflict with user rights, Google tends to side with its customers.

Splitting user-facing platforms apart from ad tech tools would cut right through this tension. Chrome and Android developers would face competitive pressure from rivals who design tools that cater to users alone.

Separating ads from publishing can protect rights that US privacy laws do not address. A majority of proposed and enacted privacy laws in the U.S. regulate data sharing between distinct companies more strictly than data sharing within a single corporation. For example, the California Privacy Rights Act (CPRA) allows users to opt out of having their personal information “shared or sold,” but it does not give users the right to object to many kinds of intra-company sharing—like when Google’s search engine shares data with Google Ads to enable hyper-specific behavioral targeting on Google properties. Breaking user-facing services apart from advertiser-facing businesses will make it easier to regulate these flows of private information.

Splitting ad empires apart also holds the promise of a fairer ad market. Removing tech companies’ content and app businesses from their ad businesses, and splitting the sell-side and buy-side of the ad-tech stack, will make self-preferencing, bid-rigging, and other forms of fraud and cheating less profitable and easier to detect. This will help media producers and individual creators get their rightful share of revenue from the ads that run against their work, and it will help protect small businesses and other advertisers from being price-gouged or defrauded by powerful, integrated ad-tech businesses.

Conclusion

The Digital Advertising Act is a bold, promising legislative proposal. It could split apart the most toxic parts of Big Tech to make the internet more competitive, more decentralized, and more respectful of users’ digital human rights, like the right to privacy. As with any complex legislation, the impacts of this bill must be thoroughly explored before it becomes law. But we believe in the methods described in the bill: they have the power to reshape the internet for the better.

Bennett Cyphers

Security and Privacy Tips for People Seeking An Abortion

1 week ago

Given the shifting state of the law, people seeking an abortion, or any kind of reproductive healthcare that might end with the termination of a pregnancy,  may need to pay close attention to their digital privacy and security. We've previously covered how those involved in the abortion access movement can keep themselves and their communities safe. We've also laid out a principled guide for platforms to respect user privacy and rights to bodily autonomy. This post is a guide specifically for anyone seeking an abortion and worried about their digital privacy. There is a lot of crossover with the tips outlined in the previously mentioned guides; many tips bear repeating. 

We are not yet sure how companies may respond to law enforcement requests for any abortion related data, and you may not have much control over their choices.  But you can do a lot to control who you are giving your information to, what kind of data they get, and how it might be connected to the rest of your digital life.

Keep This Data Separate from Your Daily Activities

If you are worried about legal pressure, the most important thing to remember is to keep these activities separate from less sensitive ones. This can be done many ways, but the underlying idea is to keep that information compartmentalized away from other aspects of your "regular" life. This makes it harder to trace back to you. 

Choosing a separate browser with hardened privacy settings is an easy and free start. Browsers like Brave, Firefox, and DuckDuckGo on mobile are all easy-to-use options that come with hardened privacy settings out of the box. It’s a good idea to look into the “preferences” menu of whichever browser you choose and raise the privacy settings even further, and to turn off the browser’s features that remember browsing history and site data/cookies. Here’s what that looks like in Firefox’s “Privacy and Security” menu: 

Firefox's cookies and history options in its privacy menu

How to turn off Firefox's feature that remembers browser history

If you are calling clinics or healthcare providers, consider keeping a secondary phone number like Google Voice (which is free), Hushed, or Burner (both Hushed and Burner are paid apps, but have significantly better privacy policies than Google Voice). Having a separate email address, especially one that is made with privacy and security in mind, is also a good idea. Some email services you might consider are Tutanota and Protonmail.

Mobile Privacy

One way to protect your privacy is to get a “burner phone” – meaning a phone that’s not connected to your normal cell phone account. But keeping a super secure burner phone may be hard for many people. If so, consider reviewing the privacy settings on your current cell phone to see what information is being collected about you, who is collecting it, and what they might do with it.

If you're using a period tracker app already, carefully examine its privacy settings. If you can, consider switching to a more privacy-focused app.  Euki, for example, promises not to store any user information.

Turn off ad identifiers on your phone. We've laid out a guide for doing so on iOS and Android here. This restricts individual apps' abilities to track your behavior when you use them, and limits their sharing of that information with others.

While you're at it, it's a good idea to review the other permissions that apps have on your phone, especially location services. For apps that require location data for their core functionality (such as Google Maps), choose an option like "While Using" that only gives the app permission to view your location when it's open (remember to fully close out of those apps when you are finished using them).

If you have a "Find My" feature turned on for your phone, like Apple's function to see where your phone is from your other computers, you will want to consider turning that off before traveling to or from a location you don't want someone else being able to see you visit.

If you're traveling to or from a location (such as a clinic or a rally) where there is a likelihood law enforcement may stop you or seize your device, or if you're often near someone who may look into your phone without permission, turning off biometric unlocking is a good idea. This means turning off any feature for unlocking your phone using your face ID or fingerprint. Instead you should opt for a passcode that is difficult to guess (like all passwords: make it long, unique, and random).

Since you are likely using your phone to text and call others that will share similar data privacy and security concerns as you, it’s a good idea to download Signal, an end-to-end-encrypted messaging app. For a more thorough walkthrough, check out this guide for Android and this for iOS.

Lock & Encrypt

Anticipating how data on your devices might be seized as evidence is a scary thought. You don't need to know how encryption works, but checking to make sure it's turned on for all your devices is vital. Android and iOS devices have full-disk encryption on by default (though it doesn't hurt to check). Doing the same for your laptops and other computers is just as important. It's likely that encryption is on by default for your operating system, but it's worthwhile to check. Here is how to check for MacOS, and also for Windows. Linux users should check guides for their distribution of choice on how to enable full-disk encryption.

Delete & Turn Off

Deleting things from your phone or computer isn't as easy as it sounds. For sensitive data, you want to make sure it's done right.

When deleting images from your phone, make sure to remove them from "recently deleted" folders. Here is a guide on permanently deleting from iOS. Similar to iOS, Android's Google Photos app requires you to delete photos from its "Bin" folder where it stores recently deleted images for a period of time.

For your computer, using "secure deletion" features on either Windows or MacOS is a good call, but it is not as important as making sure full-disk encryption is turned on (discussed in the section above).

If you’re especially worried that someone might learn about a specific location you are traveling to or from, simply turning off your phone and leaving your laptop at home is the easiest and most foolproof solution. Only you can decide if the risk outweighs the benefit of keeping your phone on when traveling to or from a clinic or abortion rally. For more reading, here is our guide on safely attending a protest, which may be useful for you in making that decision for yourself.

Daly Barnett

Westlaw Must Face Antitrust Claims in a Case That Could Boost Competitive Compatibility

1 week 1 day ago

Westlaw, the world’s largest legal research service, will have to face antitrust claims in court. A federal court has ruled that ROSS Intelligence, a tiny rival offering new research tools (which Westlaw forced out of business with a copyright infringement suit), could proceed with claims that Westlaw uses exclusionary and anticompetitive practices to maintain its monopoly over the legal research market. 

The ruling is a significant step in an antitrust case about Westlaw’s conduct as an entrenched incumbent. The company controls 80 percent of the market for legal research tools and maintains a massive, impossible-to-duplicate database of public case law built over decades. It faces few major competitors. Westlaw doesn’t license access to its database, which means that it’s difficult for another company to offer new and innovative online tools for searching case law or other follow-on products and services. 

The potential ramifications of this case are huge. The outcome could boost the case for competitive compatibility (comcom), the ability of challengers to build on the work of entrenched players like Westlaw to create innovative and useful new products. More prosaically, it could improve public access to court records.

The U.S. District Court for the District of Delaware in April refused to dismiss an antitrust claim against Westlaw by the now-defunct legal research company ROSS Intelligence. ROSS developed a new online legal research tool using artificial intelligence (AI), contracting with an outside company for a database of legal cases sourced from Westlaw. Westlaw sued ROSS for copyright infringement, accusing it of using AI to mine the Westlaw database as source material for its new tool. Though the database is mainly composed of judicial opinions, which can’t be copyrighted, Westlaw long maintained that it holds copyrights to the page numbers and other organizational features. ROSS went out of business less than a year after Westlaw filed suit. 

Despite going out of business, ROSS pressed ahead with a countersuit, claiming Westlaw and its parent company, Thomson Reuters Corporation, violate antitrust law by requiring customers to buy their online search tool to access its database of public domain case law, unlawfully tying the tool to the database to maintain dominance in the overall market for legal search platforms. 

The dispute is more than an example of David v. Goliath: Lawyers, students, and academics all over the world rely on online access to court records for scholarship, research, education, and case work. Westlaw controls access to the largest database of public court records, judges’ opinions, statutes, and regulations. Those who need this information have little choice but to do business with Westlaw, on Westlaw’s terms. The work of compiling it took decades and effectively can’t be duplicated, but no amount of effort alone gives Westlaw ownership over judges’ opinions. Copyrights are not based on effort, but rather on original, creative work. The mere fact that Westlaw worked hard to build its database doesn’t mean the public domain records of the U.S. legal system become its copyrighted material.

No single company should gatekeep public access to our laws and public information. Companies should be able to build on non-copyrighted work, especially the non-copyrighted work of a massive incumbent with enormous market power. This is especially important in categories such as legal research tools, because these are necessary if the public is to participate in governance and lawmaking in an informed manner.

ROSS had made other antitrust claims against Westlaw, saying it violated the Sherman Antitrust Act by refusing to license its database and engaging in sham litigation to block competitors from its industry. The court dismissed those claims. But the court let the claim of tying stand, siding with ROSS and finding that Westlaw’s database—which existed in the form of printed books for many decades before the internet—can be a separate product from its legal search tool, even though the tool does not work on any other database.

ROSS has “adequately and plausibly alleged separate product markets for public law databases and legal search tools,” the court said, while noting that the Supreme Court has “often found that arrangements involving functionally linked products, at least one of which is useless without the other, to be prohibited tying devices.”

ROSS will now be entitled to investigate Westlaw’s business practices to try to prove illegal tying. That will be no small feat. But it’s still important because it shows that entities like ROSS can use antitrust law to argue that companies with market power should allow others to build on their work. Westlaw will now have to explain how it benefits users to refuse to license its case law database to competing search tools, which could yield new insights into the law.

The lack of competitive compatibility is what holds back many new internet products and services. Many big tech companies used comcom when they were first starting out: the products and services of established firms were fair game for upstarts to build on. But now that they are the entrenched, powerful companies, they don’t want to make it easy for anyone to build on what they have or to challenge their dominance. 

We need comcom to make better and more innovative products for tech users. This is particularly crucial with the products and services at the heart of this case. Companies like Westlaw/Thomson Reuters should not be able to monopolize online access to the law and limit the ways that people can engage with it. We will keep a close eye on this case.

Malaika Fraley

Victory! Court Rules That DMCA Does Not Override First Amendment’s Anonymous Speech Protections

1 week 2 days ago

Copyright law cannot be used as a shortcut around the First Amendment’s strong protections for anonymous internet users, a federal trial court ruled on Tuesday.

The decision by a judge in the United States District Court for the Northern District of California confirms that copyright holders issuing subpoenas under the Digital Millennium Copyright Act must still meet the Constitution’s test before identifying anonymous speakers.

The case is an effort to unmask an anonymous Twitter user (@CallMeMoneyBags) who posted photos and content that implied a private equity billionaire named Brian Sheth was romantically involved with the woman who appeared in the photographs. Bayside Advisory LLC holds the copyright on those images, and used the DMCA to demand that Twitter take down the photos, which it did.

Bayside also sent Twitter a DMCA subpoena to identify the user. Twitter refused and asked a federal magistrate judge to quash Bayside’s subpoena. The magistrate ruled late last year that Twitter must disclose the identity of the user because the user failed to show up in court to argue that they were engaged in fair use when they tweeted Bayside’s photos.

When Twitter asked a district court judge to overrule the magistrate’s decision, EFF and the ACLU Foundation of Northern California filed an amicus brief in the case, arguing that the magistrate’s ruling sidestepped the First Amendment when it focused solely on whether the user’s tweets constituted fair use of the copyrighted works.

In granting Twitter’s motion to quash the subpoena, the district court agreed with EFF and ACLU that the First Amendment’s protections for anonymous speech are designed to protect a speaker beyond the content of any particular statement that is alleged to infringe copyright. So the First Amendment requires courts to analyze DMCA subpoenas under the traditional anonymous speech tests courts have adopted.

“But while it may be true that the fair use analysis wholly encompasses free expression concerns in some cases, that is not true in all cases—and it is not true in a case like this,” the court wrote. “That is because it is possible for a speaker’s interest in anonymity to extend beyond the alleged infringement.”

The district court then applied the traditional two-step test used to determine when a litigant can unmask an anonymous internet user. The first step requires a proponent of unmasking to show that their claims have legal merit. The second step requires courts to balance the harm to the anonymous speaker against the proponent of unmasking’s need to identify the user.

The district court ruled that Bayside failed on both steps.

First, the court ruled that Bayside had not shown that its copyright claims had merit, finding that the tweets at issue constituted fair use, largely because they were transformative.

“Rather, by placing the pictures in the context of comments about Sheth, MoneyBags gave the photos a new meaning—an expression of the author’s apparent distaste for the lifestyle and moral compass of one-percenters,” the court wrote.

Second, the court ruled that there were significant First Amendment issues at stake because the tweets constituted “vaguely satirical commentary criticizing the opulent lifestyle of wealthy investors generally (and Brian Sheth, specifically).” The court ruled that identifying “MoneyBags thus risks exposing him to ‘economic or official retaliation’ by Sheth or his associates.”

In contrast, the court ruled, Bayside failed to show that it needed the information, particularly given that Twitter had already removed the copyrighted images from the tweets. Further, the court was suspicious that Bayside may have been using its DMCA subpoena as a proxy for Sheth, which the court described as a “puzzling set of facts” that Bayside had never fully explained.

In upholding the user’s First Amendment right to speak anonymously, the district court also rejected the argument that because the user never appeared in court to fight the subpoena, Twitter could not raise constitutional arguments on its users’ behalf. EFF and ACLU’s brief called on the court to ensure that online services like Twitter can always stand in their users’ shoes when they seek to protect their users’ rights in court.

The court agreed:

There are many reasons why an anonymous speaker may fail to participate in litigation over their right to remain anonymous. In some cases, it may be difficult (or impossible) to contact the speaker or confirm they received notice of the dispute. Even where a speaker is alerted to the case, hiring a lawyer to move to quash a subpoena or litigate a copyright claim can be very expensive. The speaker may opt to stop speaking, rather than assert their right to do so anonymously. Indeed, there is some evidence that this is what happened here: MoneyBags has not tweeted since Twitter was ordered to notify him of this dispute.

EFF is pleased with the district court’s decision, which ensures that DMCA subpoenas cannot be used as a loophole to the First Amendment’s protections. The reality is that copyright law is often misused to silence lawful speech or retaliate against speakers. For example, in 2019 EFF successfully represented an anonymous Reddit user whom the Watchtower Bible and Tract Society sought to unmask via a DMCA subpoena, claiming that they had posted Watchtower’s copyrighted material.

We are also grateful that Twitter stood up for its user’s First Amendment rights in court.

Aaron Mackey

When “Jawboning” Creates Private Liability

1 week 2 days ago
A (Very) Narrow Path to Holding Social Media Companies Legally Liable for Collaborating with Government in Content Moderation

For the last several years we have seen numerous arguments that social media platforms are "state actors" that “must carry” all user speech. According to this argument, they are legally required to publish all user speech and treat it equally. Under U.S. law, this is almost always incorrect. The First Amendment generally requires only governments to honor free speech rights and protects the rights of private entities like social media sites to curate content on their sites and impose content rules on their users. 

Among the state actor theories presented is one based on collaboration with the government on content moderation. “Jawboning”—when government authorities influence companies’ social media policies—is extremely common. At what point, if any, does a private company become a state actor when it acts on that government influence?

Deleting posts or cancelling accounts because a government official or agency requested or required it—just like spying on people’s communications on behalf of the government—raises serious human rights concerns. The newly revised Santa Clara Principles, which outline standards tech platforms should meet to provide adequate transparency and accountability, specifically scrutinize “State Involvement in Content Moderation.” As set forth in the Principles: “Companies should recognise the particular risks to users’ rights that result from state involvement in content moderation processes. This includes a state’s involvement in the development and enforcement of the company’s rules and policies, either to comply with local law or serve other state interests. Special concerns are raised by demands and requests from state actors (including government bodies, regulatory authorities, law enforcement agencies and courts) for the removal of content or the suspension of accounts.”

So, it is important that there be a defined, though narrow, avenue for holding social media companies liable for certain censorial collaborations with the government. But the bar for holding platforms accountable for such conduct must be high to preserve their First Amendment rights to edit and curate their sites. 

Testing Whether a Jawboned Platform is a State Actor

We propose the following test. At a minimum: (1) the government must replace the intermediary’s editorial policy with its own, (2) the intermediary must willingly cede the editorial implementation of that policy to the government regarding the specific user speech, and (3) the censored party lacks an adequate remedy against the government. These findings are necessary, but not per se sufficient to establish the social media service as a state actor; there may always be “some countervailing reason against attributing activity to the government.” 

In creating the test, we had two guiding principles.

First, when the government coerces or otherwise pressures private publishers to censor, the censored party’s first and favored recourse is against the government. Governmental manipulation of the already fraught content moderation systems to control public dialogue and silence disfavored voices raises classic First Amendment concerns, and both platforms and users should be able to sue the government for this. In First Amendment cases, there is a low threshold for suits against government agencies and officials that coerce private censorship: the government may violate speakers’ First Amendment rights with “system[s] of informal censorship” aimed at speech intermediaries. In 2015, for example, EFF supported a lawsuit by Backpage.com after the Cook County sheriff pressured credit card processors to stop processing payments to the website. 

Second, social media companies should retain their First Amendment rights to edit and curate the user posts on their sites as long as they are the ones controlling the editorial process. So, we sought to distinguish those situations where the platforms clearly abandoned editorial power and ceded editorial control to the government from those in which the government’s desires were influential but not determinative.

We proposed this test in an amicus brief recently filed in the Ninth Circuit in a case in which YouTube has been accused of deleting QAnon videos at the request and compulsion of individual Members of Congress. We argued in that brief that the test was not met in that case and that YouTube could not be liable as a state actor under the facts alleged. 

However, even when they are not legally liable, social media companies should voluntarily disclose to a user when a government has demanded or requested action on their post, or when the platform’s action was required by law. Platforms should also report all government demands for content moderation, and any government involvement in formulating or enforcing editorial policies or flagging posts. Each of these recommendations is set out in the revised Santa Clara Principles.

The Santa Clara Principles also call on governments to limit their involvement in content moderation. The Principle for Governments and Other State Actors states that governments “must not exploit or manipulate companies’ content moderation systems to censor dissenters, political opponents, social movements, or any person.” The Santa Clara Principles go on to urge governments to disclose their involvement in content moderation and to remove any obstacles they have placed on the companies to do so, such as gag orders.

Our position with respect to state action being established by government collaboration stands in contrast to the more absolute positions we have taken against other state action theories.

Although we have been sharp critics of how the large social media companies curate user speech, and of its differential impacts on those traditionally denied a voice, we are also concerned that holding social media companies to the legal standards of the First Amendment would hinder their ability to moderate content in ways that serve users well: by removing or downranking posts that, although legally protected, were harassing or abusive to other users, or were simply offensive to many of the users the company sought to reach; or by adopting policies or community standards that focus on certain subject matters or communities and exclude off-topic posts. Many social media companies also offer curation services that suggest or prioritize certain posts over others (Facebook’s Top Stories feed, Twitter’s Home feed, and so on), which some users seem to like. Plus, there are numerous practical problems. First, clear distinctions between legal and illegal speech are often elusive: law enforcement often gets them wrong, and judges and juries struggle with them. Second, it just doesn’t reflect reality: every social media service has an editorial policy that excludes or at least disfavors certain legal speech, and always has had such a policy.

We filed our first amicus brief setting out this position in 2018 and wrote about it here, and we’ve been asserting that position in various US legal matters ever since. That first case and others like it argued, incorrectly, that social media companies function like public forums, places open to the public to associate and speak to each other, and thus should be treated like government-controlled public forums such as parks and sidewalks.

Other cases, and the social media laws passed by Florida and Texas, also argued that social media services, at least the very large ones, are “common carriers” that are open to all users on equal terms. In those cases, and in challenging those laws, our reasoning remained the same: users are best served when social media companies are shielded from governmental interference with their editorial policies and decisions.

This policy-based position was consistent with what we saw as the correct legal argument: that social media companies themselves have the First Amendment right to adopt editorial policies, and to curate and edit the user speech that gets submitted to them. And it’s important to defend that First Amendment right so as to shield these services from becoming compelled mouthpieces or censors of the government: if they didn’t have their own First Amendment rights to edit and curate their sites as they saw fit, then governments could tell them how to edit and curate their sites according to the government’s wishes.

We stand by our position that social media platforms have the right to moderate content, and believe that allowing the government to dictate what speech platforms can and can’t publish is anathema to our democracy. But when censorship is a collaboration between private companies and the government, there should be a narrow, limited path to hold them accountable.

 

David Greene

Pass the "My Body, My Data" Act

1 week 2 days ago

EFF supports Rep. Sara Jacobs’ “My Body, My Data" Act, which will protect the privacy and safety of people seeking reproductive health care.

Privacy fears should never stand in the way of healthcare. That's why this common-sense bill will require businesses and non-governmental organizations to act responsibly with personal information concerning reproductive health care. Specifically, it restricts them from collecting, using, retaining, or disclosing reproductive health information that isn't essential to providing the service someone asks them for.

TAKE ACTION

Tell Congress to pass the "My Body, My Data" Act

These restrictions apply to companies that collect personal information related to a person’s reproductive or sexual health. That includes information such as data related to pregnancy, menstruation, surgery, termination of pregnancy, contraception, basal body temperature or diagnoses. The bill would protect people who, for example, use fertility or period-tracking apps or are seeking information about reproductive health services. 

We are proud to join Planned Parenthood, NARAL, National Abortion Federation, URGE, National Partnership for Women & Families, and Feminist Majority in support of the bill.

In addition to the restrictions on company data processing, this bill also provides people with necessary rights to access and delete their reproductive health information. Companies must also publish a privacy policy, so that everyone can understand what information companies process and why. It also ensures that companies are held to public promises they make about data protection, and gives the Federal Trade Commission the authority to hold them to account if they break those promises. 

The bill also lets people take on companies that violate their privacy with a strong private right of action. Empowering people to bring their own lawsuits not only places more control in the individual's hands, but also ensures that companies will not take these regulations lightly. 

Finally, while Rep. Jacobs' bill establishes an important national privacy foundation for everyone, it also leaves room for states to pass stronger or complementary laws to protect the data privacy of those seeking reproductive health care. 

We thank Rep. Jacobs and the other sponsors for taking up this important bill, and for using it as an opportunity not only to protect those seeking reproductive health care, but also to highlight why data privacy is an important element of reproductive justice. Please take action to express your support for the "My Body, My Data" Act today.

TAKE ACTION

Tell Congress to pass the "My Body, My Data" Act

Hayley Tsukayama

Daycare Apps Are Dangerously Insecure

1 week 2 days ago

Last year, several parents at EFF enrolled kids in daycare and were instantly told to download an application for managing their children’s care. Daycare and preschool applications frequently include notifications of feedings, diaper changes, pictures, activities, and which guardian picked up or dropped off the child—potentially useful features for easing the separation anxiety of newly enrolled children and their anxious parents. Working at a privacy-oriented organization as we do, we asked questions: Do we have to use these? Are they secure? The answer to the first, unfortunately, was “yes,” partly so that the schools could abide by health guidelines to avoid unnecessary in-person contact. But troublingly, the answer to the second was a resounding “no.”

As is the case with so many of these services, there are a few apps that are more popular than others. While we started with the one we were being asked to use, this prompted us to look closer at the entire industry.

"The (Mostly) Cold Shoulder"

These days, offering two-factor authentication (2FA), where two different methods are used to verify a user’s login, is fairly standard. EFF has frequently asserted that it is one of the easiest ways to increase your security. Therefore, it seemed like a basic first step for daycare apps.

In October 2021, we tried to reach out to one of the most popular daycare services, Brightwheel, about the lack of two-factor authentication on their mobile app. We searched around on the site for an email to report security concerns and issues, but we could not find one.

A few cold emails and a little networking later, we got a meeting. The conversation was productive and we were glad to hear that Brightwheel was rolling out 2FA for all admins and parents. In fact, the company’s announcement claimed they were the “1st partner to offer this level of security” in the industry—an interesting but also potentially worrisome claim.

Was it true? Apparently so. This prompted us to do more outreach to other popular daycare apps. In April 2022, we reached out to the VP of Engineering at another popular app, HiMama (no response). Next we emailed HiMama’s support email about 2FA, and received a prompt but unpromising response that our feature request would be sent to the product team for support. So we dug in further.

Digging Further—And a History of Cold Shoulders

Looking at a number of popular daycare and early education apps, we quickly found more issues than just the lack of 2FA. Through static and dynamic analysis of several apps, we uncovered not just security issues but privacy-compromising features as well: weak password policies, Facebook tracking, cleartext traffic enabled, and vectors for malicious apps to view sensitive data.

As a note on investigative tools and methodology: we used MobSF and apktool for static analysis of application code and mitmproxy, Frida, and adb (Android Debug Bridge) for dynamic analysis to capture network traffic and app behavior.
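To give a concrete sense of what the static side of that analysis can look like, here is a minimal, illustrative sketch (not the exact tooling used for this research). It assumes apktool is installed and on your PATH, and uses a hypothetical APK filename; it decodes the app’s manifest and checks whether cleartext (unencrypted HTTP) traffic is explicitly enabled:

import pathlib
import subprocess
import xml.etree.ElementTree as ET

APK = "daycare-app.apk"            # hypothetical filename for the app under review
OUT = pathlib.Path("decoded_apk")

# Decode the APK's binary manifest and resources into readable form with apktool.
subprocess.run(["apktool", "d", "-f", APK, "-o", str(OUT)], check=True)

# Look for android:usesCleartextTraffic="true" in AndroidManifest.xml, which
# tells Android 9+ to allow unencrypted HTTP connections from the app.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"
app_element = ET.parse(OUT / "AndroidManifest.xml").getroot().find("application")
cleartext = app_element.get(ANDROID_NS + "usesCleartextTraffic") if app_element is not None else None

if cleartext == "true":
    print("Cleartext traffic is explicitly enabled for this app.")
else:
    print(f"usesCleartextTraffic attribute: {cleartext!r}")

Checks like this only scratch the surface; tools such as MobSF automate dozens of them at once.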

Initially, we assumed that many of these services would be unaware of their issues, and we planned to disclose any vulnerabilities to each company. However, we discovered that not only were we not alone in wondering about the security of these apps, but that we weren’t alone in receiving little to no response from the companies.

In March 2022, a group of academic and security researchers from the AWARE7 agency, the Institute for Internet Security, the Max Planck Institute for Security and Privacy, and Ruhr University Bochum presented a paper at the Privacy Enhancing Technologies Symposium (PETS) in Sydney, Australia. They described the lack of response their own disclosures met:

“Precisely because children's data is at stake and the response in the disclosure process was little (6 out of 42 vendors (≈14%) responded to our disclosure), we hope our work will draw attention to this sensitive issue. Daycare center managers, daycare providers, and parents cannot analyze such apps themselves, but they have to help decide which app to introduce."

In fact, the researchers made vulnerability disclosures to many of the same applications we were researching in November 2021. Despite the knowledge that children’s data was at stake, security controls still hadn’t been pushed to the top of the agenda in this industry. Privacy issues remained as well. For example, the Tadpoles Android app (v12.1.5) sends event-based app activity to Facebook’s Graph API, as well as very extensive device information to Branch.io.

Tadpoles App for Android using the Facebook SDK to send custom app event data to graph.facebook.com


[Related: How to Disable Ad ID Tracking on iOS and Android, and Why You Should Do It Now]

Extensive information sent to branch.io

In its privacy policy, Branch.io states that it does not sell or “rent” this information, but the data sent to it—down to the CPU type of the device—is highly granular, creating an extensive profile of the parent or guardian outside of the Tadpoles app. That profile is subject to data sharing in situations like a merger or acquisition of Branch.io. Neither Branch.io nor Facebook is listed or mentioned in Tadpoles’ privacy policy.
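On the dynamic side, this kind of traffic becomes visible once a test device’s connections are routed through an intercepting proxy such as mitmproxy. The snippet below is a hypothetical mitmproxy addon, a sketch that assumes the device is already proxied and that certificate pinning has been dealt with (for example, with Frida); the watched hostnames are examples, not a complete list of the endpoints involved:

from mitmproxy import http

# Hypothetical list of third-party analytics endpoints to watch for.
WATCHED_HOSTS = ("graph.facebook.com", "branch.io")

def request(flow: http.HTTPFlow) -> None:
    """Log any request the app under test sends to a watched analytics host."""
    host = flow.request.pretty_host
    if any(host == w or host.endswith("." + w) for w in WATCHED_HOSTS):
        print(f"[tracker traffic] {flow.request.method} {flow.request.pretty_url}")
        # Request bodies often carry device details; print a short preview.
        body = flow.request.get_text(strict=False) or ""
        print(body[:200])

Running it with mitmproxy -s watch_trackers.py (filename hypothetical) while exercising the app shows, request by request, what leaves the phone and where it goes.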

A Note on Cloud Security

Another common trend in many daycare apps: relying on cloud services to convey their security posture. These apps often state that they use “the cloud” to provide top-of-the-line security. HiMama, for example, writes in its Internet Safety statement that Amazon’s AWS “is suited to run sensitive government applications and is used by over 300 U.S. government agencies, as well as the Navy, Treasury and NASA.” This is technically true, but AWS has a particular offering (AWS GovCloud) that is isolated and configured to meet the federal standards required for government servers and the applications on those servers. In any case, regardless of whether an app uses standard or government-level cloud offerings, a significant amount of configuration and application security is left up to the developers and the company. We wish HiMama and other similar apps would simply describe the specific security configurations they use on the cloud services they rely on.

Childcare Needs Conflict with Informed Choice

When a parent has an immediate need for childcare and a daycare near home or work opens up with one spot, they are less inclined to pick a fight over the applications the center chooses. And preschools and daycares aren’t forced to use a specific application, but they are effectively trusting a third party to act ethically and securely with a school’s worth of children’s data. Regulations like COPPA (the Children’s Online Privacy Protection Act) likely don’t apply to these applications, since the data is typically entered by parents and staff rather than collected directly from children. Some service providers appear to reference COPPA indirectly, with legal language stating that they do not collect data directly from children under 13, and we found a statement on one app committing to COPPA compliance.

Between vague language that could mislead parents about the reality of data security, fewer options for daycares (especially during the first two years of the pandemic), leaky and insecure applications, and a lack of account security controls, parents can’t possibly make a fully informed or sound privacy decision.

Call to Action for Daycare and Early Education Apps

It’s crucial that the companies that create these applications do not ignore common and easily-fixed security vulnerabilities. Giving parents and schools proper security controls and hardening application infrastructure should be the top priority for a set of apps handling children’s data, especially the very young children served by the daycare industry. We call on all of these services to prioritize the following basic protections and guidelines:

Immediate Tasks:
  • 2FA available for all Admins and Staff.
  • Address known security vulnerabilities in mobile applications.
  • Disclose and list any trackers and analytics and how they are used.
  • Use hardened cloud server images, and put a process in place to continuously update out-of-date technology on those servers.
  • Lock down any public cloud buckets hosting children’s videos and photos. These should not be publicly available and a child’s daycare and parents/guardians should be the only ones able to access and see such sensitive data.

Those fixes would create a significantly safer and more private environment for data on children too young to speak for themselves. But there is always more that can be done to build apps that set industry benchmarks for child privacy.

Strongly Encouraged Tasks:

E2EE (End-to-End Encrypted) Messaging Between Schools and Parents

Consider communication between schools and parents highly sensitive. There’s no need for the service itself to be able to read the messages passing between them.

Create Security Channels for Reporting Vulnerabilities

Both EFF and the AWARE7 (et al.) researchers had trouble finding proper channels when we uncovered problems with different applications. It would be great if these companies put up a simple security.txt file on their websites so that researchers can reach the right people, instead of hoping for a response from a general support address.
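For reference, security.txt is just a small plain-text file served from a site’s /.well-known/ directory; the format is standardized in RFC 9116. A minimal, hypothetical example (all values made up) looks like this:

Contact: mailto:security@example.com
Expires: 2023-12-31T23:59:00.000Z
Policy: https://example.com/vulnerability-disclosure
Preferred-Languages: en

That is the entire setup cost for giving outside researchers a reliable way to report a vulnerability to the right people.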

At EFF, we are parents too. And the current landscape isn’t fair to parents. If we want a better digital future, it starts with being better stewards today and not enabling a precedent of data breaches that could lead to extensive profiling—or worse—of kids who have yet to take their first steps.

Alexis Hancock

EFF Warns Another Court About the Dangers of Broad Site-Blocking Orders

1 week 6 days ago

A copyright holder can’t use a court order against the owner of an infringing website to conscript every intermediary service on the internet into helping make that website disappear, EFF and the Computer & Communications Industry Association argued in an amicus brief.

The brief, filed in the U.S. District Court for the Southern District of New York, defends Cloudflare, a San Francisco-based global cloud services provider.

United King Film Distribution, a producer and provider of movie, television, sports, and news content, sued the creators of Israel.tv, which had streamed content on which United King held copyrights. After the people behind Israel.tv failed to appear in court, United King won a shockingly broad injunction, one that applies not only to them but also purports to bind hundreds, maybe thousands, of intermediaries, including nearly every internet service provider in the US, along with domain name registrars, web designers, shippers, advertising networks, payment processors, banks, and content delivery networks.

United King then sought to enforce that injunction against CDN/reverse proxy service Cloudflare, demanding that Cloudflare be held in contempt of court for refusing to block the streaming site and stop it from ever appearing again.

But the injunction is impermissibly broad, at odds with both Federal Rule of Civil Procedure 65 and the Digital Millennium Copyright Act (DMCA), EFF’s brief argued. It’s like ordering a telephone company to prevent a person from ever having conversations over the company’s network. It will cause collateral harm to numerous internet services and their users by imposing unnecessary costs and compliance burdens. And it could cause intermediaries like Cloudflare to block lawful websites and speech in order to avoid being sanctioned by courts in cases like this.

A copyright holder with an injunction simply can’t conscript every Internet intermediary to help cut every Internet user off from accessing an infringing website. In fact, they can’t conscript even one intermediary without fulfilling the law’s requirements. They would have to show that the intermediary acted in close coordination with the website owners, more than just providing them a basic service. And they would have to limit their injunction to the narrow guidelines allowed under the DMCA, including giving intermediaries a chance to be heard before being ordered to block.

We’ve seen this playbook before. In 2015, we helped Cloudflare get relief from a similar order that would have required it to play detective, finding and banning an infringing website owner whenever and wherever they appeared. And of course, the order in this case looks a lot like the kind of website-blocking order that the infamous SOPA and PIPA bills of 2011-2012 would have enabled. It’s preposterous to think that major media companies waged a giant, expensive, and ultimately losing battle for the power to censor websites if that power had been available from the courts all along.

Today, we hope the courts understand that even if a website is infringing copyrights, the law doesn’t let rightsholders conscript the entire internet to help make that site go away. The cost to innocent users’ rights is simply too high.

The case is 21-cv-11024 KPF-RWL.





Mitch Stoltz

Copyright "Small Claims" Quasi-Court Opens. Here's Why Many Defendants Will Opt Out.

1 week 6 days ago

A new quasi-court for copyright, with nationwide reach, began accepting cases this week. The “Copyright Claims Board” or “CCB,” housed within the Copyright Office in Washington DC, will rule on private copyright infringement lawsuits from around the country and award damages of up to $30,000 per case. Though it’s billed as an “efficient and user-friendly” alternative to federal litigation, the CCB is likely to disadvantage many people who are accused of copyright infringement, especially ordinary internet users, website owners, and small businesses. It also violates the Constitution in ways that harm everyone. 

Even if this were a perfect process, there is bound to be confusion when a whole new regime for demanding money springs up. And the rules surrounding the CCB are far from perfect, so we want to hear from people who are hauled before the CCB. If you feel you’ve been wronged by the CCB process, please email info@eff.org and let us know.

It’s Voluntary—Except When It Isn’t

The Copyright Office calls the CCB a “voluntary” system. Copyright holders can choose to bring infringement cases in the CCB as an alternative to federal court. And those accused of infringement (called “respondents” here, rather than defendants) can opt out of a CCB proceeding by filing forms within a 60-day window. If a respondent opts out, the CCB proceeding goes no further, and the rightsholder can choose whether or not to file an infringement suit in federal court. But if the accused party doesn’t opt out in time, they become bound by the decisions of the CCB. Those decisions mostly can’t be appealed, even if they get the law wrong.

Although cases will vary, we think most knowledgeable parties will choose to opt out of the CCB process—the key word being “knowledgeable.” The concern that this system will mostly hurt regular users, website owners, and small businesses without staff who have been watching the CCB unfold cannot be overstated. Every reason a knowledgeable party might decide to opt out turns on a complicated legal issue that the average person should not be expected to know.

Not-So-Small Claims

Damages awards at the CCB will be limited to $15,000 per claim and $30,000 per case. That’s smaller than the maximum statutory damages a federal court can award on an infringement claim, which is $150,000. But is $30,000 really a “small” claim? It’s 44% of the 2020 median US household income. It’s higher than the maximum damages allowed in the small claims courts of nearly every state. And three years from now, the Register of Copyrights can raise the CCB’s damages caps even higher—the statute puts no limit on increases.

The damages caps also hide a big liability pitfall. In federal court, massive and unpredictable statutory damages are generally only available if the rightsholder registered their copyright before the alleged infringement began. Without advance registration, a rightsholder is limited to recovering the actual damages caused by the infringement, which they must prove and which are often much smaller than statutory damages. In practice, that rule has limited the possibility of big payouts in a copyright lawsuit to works whose authors made at least a small proactive effort to protect against infringement, or that have significant market value.

The CCB, though, dispenses with the timely registration rule. In CCB proceedings, a rightsholder can recover up to $7,500 in statutory damages per work without any proof of harm, even if they register their copyright the same day they file a claim with the CCB. That means nearly every photograph, tweet, scrap of prose, or scanned scribble can potentially be the basis for a profitable CCB lawsuit, even if it has no commercial value, or even any personal value to the author. It means that website owners and other internet users can face a CCB complaint even for low-value works that would never have merited a federal lawsuit. And it means that in many cases, opting out of CCB “small claims” proceedings will actually lower your financial risk.

Copyright Parking Tickets

The CCB will also have a worrisome “smaller claims” procedure for claims of $5,000 or less. For these claims, proceedings will be even more informal and the responding party’s ability to request evidence from the rightsholder to help in their defense, such as records of copyright ownership and licenses, may be extremely limited. We suspect that responding parties will face significant pressure to settle claims for cash—equivalent to paying a very large parking ticket—where a more careful consideration would show that they have a valid defense such as fair use. There’s a real risk that even clear cases of fair use won’t get their due in these “smaller claims” proceedings.

Cloudy with a Chance of Pro-Rightsholder Bias

Congress handed the Copyright Office a monumental task: creating the first federal “small claims” tribunal, and the first such body with nationwide jurisdiction. The rules that the Copyright Office has created over the past eighteen months strive for evenhandedness. But they still won’t do enough to overcome pro-rightsholder bias.

First, the CCB is housed within the Copyright Office, which has historically acted as a champion of rightsholders over users of copyrighted works. A former head of the Copyright Office, who now leads a publishing industry lobbying group, famously said that “copyright is for the author first and the nation second.” The Office has close relationships with major media and entertainment companies, including co-hosting events and maintaining a “revolving door” of leaders who move between the Office and these industries. “Regulatory capture” is a problem that affects many government agencies, but it’s especially concerning for an agency that is now running a court.

Second, we expect that a relatively small group of rightsholders and rightsholder attorneys will be “frequent flyers” at the CCB, while the responding parties will more often be first-timers to this new tribunal. All decision-making bodies tend to prioritize the desires of repeat players over new participants, and the CCB will be no exception. Compounding this problem, we expect that respondents who know the law well, or have the resources to hire a lawyer, will opt out of CCB proceedings, so that the majority of respondents judged by the CCB will be people with less practical ability to raise the defenses that the law gives them. This dichotomy could warp the perceptions that the CCB’s “Claims Officers” form of all the parties that come before them.

Constitutional Concerns

The CCB was modeled in part on small claims courts at the state level. But the CCB is different in a significant way: it’s not part of the judiciary. The Copyright Office sits within the Library of Congress, which is part of the Legislative Branch. The Office is sometimes considered an executive agency, but either way, the Constitution doesn’t allow either the Legislative or the Executive branches to run courts that rule on disputes between private parties. (Administrative law judges who rule on questions about what rights and benefits the government owes to people are a different story.) This restriction isn’t just a technicality—the independence of the courts is one of the most important protections for individual rights. “Claims Officers” who are hired by and report to the Register of Copyrights can’t truly render independent judgments. This violates the Constitution, and is yet another reason why many people will opt out of a CCB proceeding.

The CCB has other constitutional defects as well. For example, the absence of a meaningful appeals process and the lack of a jury likely violate the Fifth and Seventh Amendments, and the ability to opt out doesn’t necessarily cure these problems.

The staff of the Copyright Office are dedicated public servants, and in setting up the CCB they are doing what Congress asked of them. But care and good intentions won’t be enough to make this unprecedented judicial experiment fair or constitutionally sound. As the CCB starts rendering decisions, EFF would like to hear from people who have been wronged by the process. Email us at info@eff.org.

 

Mitch Stoltz

Our Digital Lives Rest on a Robust, Flexible, and Stable Fair Use Regime

1 week 6 days ago

Much of what we do online involves reproducing copyrightable material, changing it, and/or making new works. Technically, pretty much every original tweet is copyrightable. And the vast majority of memes are based on copyrighted works. Your funny edits, mashups, and photoshopped jokes manipulate copyrighted works into new ones. Effective communication has always included a shared reference pool to make points clearly understood. And now we do that online.

In other words, as the digital world has grown, so has the reach of copyright protections. At the same time, copyright and related laws have changed: terms have expanded, limits (like registration) have shrunk, and new rules shape what you can do with your stuff if that stuff happens to come loaded with software. Some of those rules have had unintended consequences: a law meant to prevent piracy also prevents you from fixing your own car, using generic printer ink, or adapting your e-reader for your visual impairment. And a law meant to encourage innovation is routinely abused to remove critical commentary and new creativity.

In the age of copyright creep, fair use, which allows the use of copyrighted material without permission or payment in certain circumstances, is more vital than ever. A robust and flexible fair use doctrine allows us to make use of a copyrighted work to make new points, critiques, or commentary. It allows libraries to preserve and share our cultural heritage. It gives us more freedom to repair and remake. It gives users the tools they need to fight back, in keeping with its core purpose—to ensure that copyright fosters, rather than inhibits, creative expression.

The Supreme Court has an opportunity to ensure that the doctrine continues to do that essential work, in a case called Andy Warhol Foundation v. Goldsmith. At issue in the case is a series of prints by Andy Warhol, which adapt and recontextualize a photograph of the musician Prince. While the case itself doesn’t involve a digital work, its central issue is a fair use analysis by the Second Circuit that gets fair use and transformative works fundamentally wrong. First, it assumes that two works in a similar medium will share the same overarching purpose. Second, it holds that if a secondary use doesn’t obviously comment on the primary work, then a court cannot look to the artist’s asserted intent or even the impression reasonable third parties, such as critics, might draw. Third, it holds that, to be fair, the secondary use must be so fundamentally different that it should not recognizably derive from and retain essential elements of the original work.

As EFF and the Organization for Transformative Works explain in a brief filed today, all three conclusions not only undermine fair use protections but also run contrary to practical reality. For example, instead of addressing whether the respective works offered different meanings or messages, the Second Circuit essentially concluded that because the works at issue were both static visual works, they served the same purpose. This conclusion is perplexing, to say the least: the works at issue are a photograph of an individual and a collection of portraits in the classic Warhol style that used the photograph as a reference—and you do not need to be an art expert to see them as distinct pieces of art. The intentions of the photographer and of Warhol were different, as are the effects on their different audiences.

This framing of fair use would be devastating for the digital space. For example, memes with the same image but different text could be seen as serving fundamentally the same purpose as the original, even though many memes depend on the juxtaposition of the original intent of the work and its new context. One scene from Star Wars, for example, has given us two memes. In the original film, Darth Vader’s big “NOOOO” was surely meant to be a serious expression of despair. In meme form, it’s a parodic, over-the-top reaction. Another meme comes from a poorly-subtitled version of the film, replacing “NOOOO” with “DO NOT WANT.”  Fan videos, or vids, remix the source material in order to provide a new narrative, highlighting an aspect of the source that may have been peripheral to the source’s initial message, and often commenting on or critiquing that source. And so on.

Just last year, the Supreme Court recognized the importance of fair use in our digital world in Oracle v. Google, and we look to the Court to reaffirm fair use’s robust, flexible, and stable protections by reversing the Second Circuit’s decision in this case.

Katharine Trendacosta

First Circuit Court of Appeals Upholds Eight Months of Warrantless 24/7 Video Surveillance

2 weeks ago

EFF Legal Intern Talya Nevins contributed to the drafting of this blog post.

A federal appellate court in Massachusetts has issued a ruling that effectively allows federal agents in Puerto Rico and most of New England to secretly watch and videorecord all activity in front of anyone’s home 24 hours a day, for as long as they want—and all without a warrant. This case, United States v. Moore-Bush, conflicts with a ruling from Massachusetts’s highest court, which held in 2020 in Commonwealth v. Mora that the Massachusetts constitution requires a warrant for the same surveillance.

In Moore-Bush, federal investigators mounted a high-definition, remotely-controlled surveillance camera on a utility pole facing the home where Nia Moore-Bush lived with her mother. The camera allowed investigators to surveil their residence twenty-four hours a day for eight months, zoom in on faces and license plates within the camera’s view, and replay the footage at a later date.

In a deeply fractured opinion, the First Circuit Court of Appeals ultimately allowed the government to use the footage in this case, overturning an earlier trial court ruling that would have suppressed the evidence.

EFF joined the ACLU and the Center for Democracy & Technology in an amicus brief at the First Circuit’s rehearing en banc (where all the active judges in the circuit rehear the case). We argued that modern pole camera surveillance represents a radical transformation of the police’s ability to monitor suspects and goes far beyond anything that could have been contemplated by the drafters of the Fourth Amendment. Pole cameras allow police to cheaply and secretly surveil a suspect continuously for months on end, to zoom in on phone screens or documents, and to create a perfect record that can be searched and reviewed at any time in the future. None of these capabilities would be possible with a traditional stakeout. Finally, we urged the judges to consider the equity implications of finding that a person has no reasonable expectation of privacy as to activities in their exposed front yard, as this would grant greater Fourth Amendment protection to homeowners with the resources to erect high fences or hedges than to renters or those without the means to build such protections.

The government argued in Moore-Bush that a 2009 First Circuit case called United States v. Bucci governed the outcome in this case because it upheld similar pole camera surveillance. The trial court distinguished Bucci, finding that the Supreme Court’s landmark 2018 holding in Carpenter v. United States effectively changed the game. The trial court held that, like the cell-site location information at issue in Carpenter, long-term, comprehensive, continuous video surveillance of the front of a person’s home violates their reasonable expectation of privacy. Given this, the trial court suppressed the pole camera evidence.

On appeal, the First Circuit disagreed. It reversed the trial court’s decision in a panel opinion in 2019, and has now reaffirmed that decision after a rehearing en banc. Even though three of the judges agreed with the district court, finding Carpenter “provides new support for concluding that the earlier reasoning in Bucci is no longer correct,” they held the police relied on Bucci in “good faith,” and therefore the evidence should not have been suppressed.

The good news: Chief Judge Barron, joined by Judges Thompson and Kayatta, wrote a concurring opinion finding that pole camera surveillance constitutes an illegal search under Carpenter. This analysis had three parts: (i) Moore-Bush had exhibited a subjective expectation of privacy in the cumulative conduct that the camera had captured, (ii) that expectation was objectively reasonable in the eyes of society, and (iii) the use of the pole camera contravened that expectation.

Barron wrote that, although each of the moments captured by the pole camera could have been viewed by a casual passerby, Moore-Bush had chosen to live on a quiet street where she would never have expected to be meticulously observed for eight months. He combined this subjective prong with the objective reasonableness analysis from Carpenter, which held that even if an individual action is public, the comprehensive compilation of such public actions over time becomes unreasonable (sometimes called the “mosaic theory”). In a nod to the equity argument we raised in our amicus brief, Barron wrote that combining the subjective and objective analysis allows the court to ensure that people who more clearly manifest an expectation of privacy are not granted greater Fourth Amendment protections than those who cannot do so.

Barron also recognized that today’s advanced surveillance technology makes it possible to “effectively and perfectly capture all that visibly occurs in front of a person's home over the course of months,” something that would have been virtually impossible with a pre-digital, traditional stakeout. The concurring opinion noted that video surveillance technology is evolving rapidly, and the court should keep in mind the “advent of smaller and cheaper cameras with expansive memories and the emergence of facial recognition technology.” Further, pole camera surveillance of a home is an especially egregious Fourth Amendment violation, as it allows police to record and review in detail many months of highly personal moments. Ultimately, Barron’s concurring opinion found that the comprehensive nature, ease of access, and retrospective quality of pole camera footage contravened a reasonable expectation of privacy, and thus violated the Fourth Amendment. Going forward, Barron advocated that Bucci should be overruled, given Carpenter. But, with an evenly split court, Bucci remains good law in the First Circuit.

The bad news: The other three First Circuit judges sharply disagreed with the Barron concurrence. They joined a separate opinion authored by Judge Lynch, finding pole cameras were no different from “conventional surveillance [] tools” like private security cameras. This is significant because Carpenter explicitly stated it did not apply to these tools. Lynch dismissed the fact that pole cameras are different from traditional security cameras because they are higher definition, equipped with much greater storage capacity, and can be controlled remotely. Lynch argued that this is a mere “sharpening” of a conventional tool.

Lynch further wrote that people know they can be seen in front of their homes, and thus they have no reasonable expectation of privacy there if they do not take precautions to ensure they cannot be seen by passersby. She distinguished the pole camera footage from the CSLI data addressed in Carpenter by saying that because pole cameras are stationary and surveil only one location, they do not capture the “whole of [a person’s] physical movements” in the same way as location data. She pointed to a recent Seventh Circuit opinion, U.S. v. Tuggle, that refused to apply Carpenter to pole cameras because the Supreme Court has not yet explicitly extended the mosaic theory to digital video surveillance.

Going forward: The outcome in Moore-Bush creates an explicit conflict between how federal and state law enforcement agencies may conduct surveillance in Massachusetts. In Commonwealth v. Mora in 2020, Massachusetts’s Supreme Judicial Court held that similar pole camera surveillance, when conducted without a warrant, violates the state’s constitution. Like Barron’s concurrence in Moore-Bush, the court held that the extended duration and continuous nature of pole camera surveillance mattered. Even where people “subjectively may lack an expectation of privacy in some discrete actions they undertake in unshielded areas around their homes, they do not expect that every such action will be observed and perfectly preserved for the future.” However, with Moore-Bush in place, state and local police could try to circumvent Mora merely by asking their federal partners to conduct surveillance for them.

The First Circuit’s ruling in Moore-Bush leaves intact the court's earlier precedent in U.S. v. Bucci. However, given the judges were evenly split in their reasoning, there is some room for hope that Bucci could be overruled in the future.

Jennifer Lynch

Facebook Says Apple is Too Powerful. They're Right.

2 weeks ago

In December 2020, Apple did something insanely great. They changed how iOS, their mobile operating system, handled users’ privacy preferences, so that owners of iPhones and other iOS devices could indicate that they don’t want to be tracked by any of the apps on their devices. If they did, Apple would block those apps from harvesting users’ data.

This made Facebook really, really mad.

As far as Apple (and Facebook, and Google, and other large tech companies) are concerned, we’re entitled to just as much privacy as they want to give us, and no more.

It’s not hard to see why! Nearly all iOS users opted out of tracking. Without that tracking, Facebook could no longer build the nonconsensual behavioral dossiers that are its stock-in-trade. According to Facebook, empowering Apple’s users to opt out of tracking cost the company $10,000,000,000 in the first year, with more losses to come after that.

Facebook really pulled out all the stops in its bid to get those billions back. The company bombarded its users with messages begging them to turn tracking back on. It threatened an antitrust suit against Apple. It got small businesses to defend user-tracking, claiming that when a giant corporation spies on billions of people, that’s a form of small business development.

For years, Facebook - and the surveillance advertising industry - have insisted that people actually like targeted ads, because all that surveillance produces ads that are “relevant” and “interesting.” The basis for this claim? People used Facebook and visited websites that had ads on them, so they must enjoy targeted ads.

Unfortunately, reality has an anti-surveillance bias. Long before Apple offered its users a meaningful choice about whether they wanted to be spied on, hundreds of millions of web-users had installed ad-blockers (and tracker-blockers, like our own Privacy Badger), in what amounts to the largest consumer boycott in history. If those teeming millions value ad-targeting, they’ve sure got a funny way of showing it.

Time and again, when internet users are given the choice of whether or not to be spied on, they choose not. Apple gave its customers that choice, and for that we should be truly grateful. 

And yet…Facebook’s got a point.

When “users” are “hostages”

In Facebook’s comments to the National Telecommunications and Information Administration’s “Developing a Report on Competition in the Mobile App Ecosystem” docket, Facebook laments Apple’s ability to override its customers’ choices about which apps they want to run. iOS devices like the iPhone use technological countermeasures to block “sideloading” (installing an app directly, without downloading it from Apple’s App Store) and to prevent third parties from offering alternative app stores.

This is the subject of ongoing legislation on both sides of the Atlantic. In the USA, the Open App Markets Act would force Apple to get out of the way of customers who want to use third-party app stores and apps; in the EU, the Digital Markets Act contains similar provisions. Some app makers, upset with the commercial requirements Apple imposes on the companies that sell through its App Store, have sued Apple for abusing its monopoly power.

Fights over what goes in the App Store usually focus on the commissions that Apple extracts from its software vendors: historically, these were 30 percent, though recently some vendors have been moved into a discounted 15 percent tier. That’s understandable: lots of businesses operate on margins that make paying a 30 percent (or even 15 percent) commission untenable.

For example, the retail discount for sellers of wholesale audiobooks (which compete with Apple’s iBooks platform) is 20 percent, less than Apple’s standard commission. That means that selling audiobooks on Apple’s platform is a money-losing proposition unless you’re Apple or its preferred partner, the market-dominating Amazon subsidiary Audible. Audiobook stores with iPhone apps have to use bizarre workarounds, like forcing users to log in to their websites in a browser to buy their books, then go back to their phones and use the app to download them.

That means that Apple doesn’t just control which apps its mobile customers can use; it also has near-total control over which literary works they can listen to. Apple may not have set out to control its customers’ reading habits, but having attained that control, it jealously guards it. When Apple’s customers express interest in using rival app stores, Apple goes to extraordinary technical and legal lengths to prevent them from doing so.

The iOS business model is based on selling hardware and collecting commissions on apps. Facebook charges that these two factors combine to impose high “switching costs” on Apple’s customers. “Switching costs” is the economist’s term for all the things you have to give up when you change loyalties from one company to another. In the case of iOS, switching to a rival mobile device doesn’t just entail the cost of buying a new phone, but also the cost of buying new apps:

[F]ee-based apps often require switching consumers to repurchase apps, forfeit in-app purchases or subscriptions, or expend time and effort canceling current subscriptions and establishing new ones.

Facebook is right. Apple’s restrictions on third-party browsers and the limitations it puts on Safari/WebKit (its own browser tools) have hobbled “web apps,” which run seamlessly inside a browser. This means that app makers can’t deliver a single, browser-based app that works on all tablets and phones; they have to pay to develop separate apps for each mobile platform.

That also means that app users can’t just switch from one platform to another and access all their apps by typing a URL into a browser of their choice. 

Facebook is very well situated to comment on how high switching costs can lock users into a service they don’t like very much, because, however much its own users dislike the platform, the costs of using it are outstripped by the costs the company imposes on those who leave.

That’s how Facebook operates.

Facebook has devoted substantial engineering effort to keeping its switching costs as high as possible. In internal memos - published by the FTC - the company’s executives, project managers and engineers frankly discuss plans to design Facebook’s services so that users who leave for a rival pay as high a price as possible. Facebook is fully committed to ensuring that deleting your account means leaving behind the friends, family, communities and customers who stay. 

So when Facebook points out that Apple is using switching costs to take its users hostage, they know what they’re talking about. 

Benevolent Dictators Are Still Dictators

Facebook’s argument is that when Apple’s users disagree with Apple, user choice should trump corporate preference. If users want to use an app that Apple dislikes, they should be able to choose that app. If users want to leave Apple behind and go to a rival, Apple shouldn’t be allowed to lock them in with high switching costs. 

Facebook’s right.

Apple’s App Tracking Transparency program - the company’s name for the change to iOS that lets you block apps from spying on you - was based on the idea that when you disagree with Facebook (or other surveillance-tech companies), your choice should trump their corporate preferences. If you want to use an app without being spied on, you should be able to choose that. If you want to quit Facebook and go to a rival, Facebook shouldn’t be able to lock you in with high switching costs.

It’s great when Apple chooses to defend your privacy. Indeed, you should demand nothing less. But if Apple chooses not to defend your privacy, you should have the right to override the company’s choice. Facebook spied on iOS users for more than a decade before App Tracking Transparency, after all. 

Like Facebook - and Google, and other companies - Apple tolerates a lot of surveillance on its platform. In spring of 2021, Apple and Google kicked some of the worst location-data brokers out of their app stores - but left plenty behind to spy on your movements and sell them to third parties.

The problem with iOS isn’t that Apple operates an App Store - it’s that Apple prevents others from offering competing app stores. If you like Apple’s decisions about which apps you should be able to use, that’s great! But that’s a system that works well - and fails badly. No matter how much you trust Apple’s judgments today, there’s no guarantee that you’ll feel that way tomorrow.

After all, Apple’s editorial choices are, and always have been, driven by a mix of wanting to deliver a quality experience to its users and wanting to deliver profits to its shareholders. The inability of iOS users to switch to a rival app store means that Apple has more leeway to take down apps its users like without losing customers over it.

The US Congress is wrestling with this issue, as are the courts, and one of the solutions they’ve proposed is to order Apple to carry apps it doesn’t like in its App Store. This isn’t how we’d do it. There are lots of ways that forcing Apple to publish software it objects to can go wrong. The US government has an ugly habit of ordering Apple to sabotage the encryption its users depend on.

But Apple also sometimes decides to sabotage its own encryption, in ways that expose its customers to terrible risk.

Like Facebook, Apple makes a big deal out of those times where it really does stick up for its users - but like Facebook, Apple insists that when it chooses to sell those users out, they shouldn’t be able to help themselves.

As far as Apple - and Facebook, and Google, and other large tech companies - are concerned, we’re entitled to just as much privacy as they want to give us, and no more.

That’s not enough. Facebook is right that users should be able to choose app stores other than Apple’s, and Apple is wrong to claim that users who are given this choice will be exposed to predatory and invasive apps. Apple’s objections imply that its often fantastic privacy choices can’t possibly be improved upon. That’s categorically wrong. There’s lots of room for improvement, especially in a mass-market product that can’t possibly cater to all the specific, individual needs of billions of users.

Apple is right, too. Facebook users shouldn’t have to opt into spying to use Facebook.

The rights of users shouldn’t be left to the discretion of corporate boardrooms. Rather than waiting for Apple (or even Facebook) to stand up for their users, the public deserves a legally enforceable right to privacy, one that applies to Facebook and Apple…and the small companies that might pop up to offer alternative app stores or user interfaces.

Cory Doctorow

Stop This California Bill that Bans Affordable Broadband Rules

2 weeks 1 day ago

Update, June 16: In response to pressure from advocates, A.B. 2749's bans on affordability have been removed from the bill.

The California Senate Energy, Utilities, and Communications Committee will soon be the first to consider new and terrible amendments to Assemblymember Quirk-Silva’s A.B. 2749—which is backed by AT&T and other telecommunications interests. The legislation would prohibit the state from implementing affordable broadband rules for broadband companies receiving state subsidies as part of the new California infrastructure law, S.B. 156. This is despite the fact that California taxpayers—not the industry—are paying to build these networks.

An affordability requirement is crucial because this will be the only broadband access point for these Californians, and it's likely they will be subject to monopolistic pricing practices. A University of California-Berkeley study found that rural Californians facing a Frontier monopoly were paying rates more than four times higher than in areas with competitive markets. Under an AT&T monopoly, they paid rates more than three times higher. On average, Americans living in areas with only one or two ISPs pay prices five times higher than those in areas with competition in the market. Given what we've already seen in these areas across the state, we can expect Californians to face similar prices if A.B. 2749 becomes law.

The bill would also undermine the federal fiber infrastructure plan the Biden Administration released in May by allowing inferior non-fiber solutions to be equally eligible for state subsidies. This is a mistake federal and California legislators have made before, wasting billions of dollars. A.B. 2749 would prevent California from merging money from the state’s infrastructure plan with the funds from the new federal infrastructure plan to deliver high-quality access to underserved and unserved populations. Because the federal law also requires states to establish affordability rules for broadband access, California would be unable to fully leverage the federal funding if A.B. 2749 became law.

EFF filed a letter with the Senate Utilities Committee offering a detailed legal analysis of the legislation, and submitted another letter explaining why it makes sense to require fiber networks financed with public money to be subject to basic affordability rules.

Passing A.B. 2749 would unwind the promise of California’s infrastructure law to promote local, affordable, future-proof access to the internet. It would also undermine the amazing array of local efforts happening all across the state to deliver affordable solutions, by giving major corporations a chance to capture the dollars without conditions. To stop that from happening, we need California lawmakers to hear why you oppose the bill.

If you are a California-based business, non-profit, organization, or local elected official, EFF is collecting signers here to join us in telling the legislature we oppose the bill.

We’ve won many broadband victories in the past. We can do it again here.

Consumers Have Been Winning in Sacramento. Don’t Let the Empire Strike Back.

Large internet service providers (ISPs) such as AT&T have historically had their way in Sacramento when it comes to telecom law and policy. Ten years ago, they secured laws that gave them $300 million to pay for increasingly obsolete technology. They also influenced policymakers to win wide-reaching deregulation and even a ban on cities building their own broadband. But the winds started to shift in 2018 after the Federal Communications Commission, then under the Trump Administration, repealed national network neutrality rules. That prompted California to pass its own net neutrality law, S.B. 822, which was introduced by State Senator Scott Wiener.

Every year since then, there has been a successful, pro-consumer effort in Sacramento. In 2018, California repealed the last of its anti-municipal broadband laws with former Assemblymember Chau’s A.B. 1999. In 2019, the California legislature restored regulation of broadband carriers at the California Public Utilities Commission by not moving forward with A.B. 1366. It also passed new rules in response to firefighters' broadband service being throttled by Verizon during a state emergency (A.B. 1699). Most recently, in response to the COVID pandemic, California passed a package of landmark bills, including Senator Lena Gonzalez’s S.B. 4 and Governor Newsom’s budget bill S.B. 156, which committed enough funding to deliver future-proof fiber infrastructure to nearly every Californian. It was the largest investment in public broadband infrastructure of any U.S. state and was the result of a two-year effort by advocates.

California has made incredible progress for consumer broadband access, and industry incumbents have opposed every single one of these proposals. Now they’re back at it again with Assemblymember Quirk-Silva’s A.B. 2749. Local efforts to promote competition in communities across California are beginning to take root, and ISPs want to stop them before it’s too late. Don’t let them get their way again. Tell your lawmakers to oppose A.B. 2749.

Ernesto Falcon