EFF's Deeplinks Blog: Noteworthy news from around the internet

Strengthen California’s Consumer Data Privacy Regulations

Fri, 12/06/2019 - 14:20

EFF and a coalition of privacy advocates have filed comments with the California Attorney General seeking strong regulations to protect consumer data privacy. The draft regulations are a good step forward, but the final regulations should go further.

The California Consumer Privacy Act of 2018 (CCPA) created new ways for the state’s residents to protect themselves from corporations that invade their privacy by harvesting and monetizing their personal information. Specifically, CCPA gives each Californian the right to know exactly what pieces of personal information a company has collected about them; the right to delete that information; and the right to opt-out of the sale of that information. CCPA is a good start, but we want more privacy protection from the California Legislature.

CCPA also requires the California Attorney General to adopt regulations by July 2020 to further the law’s purposes. In March 2019, EFF submitted comments to the California Attorney General with suggestions for CCPA regulations. In October 2019, the California Attorney General published draft regulations and again invited public comment.

In the new comments, EFF and the coalition wrote:

The undersigned group of privacy and consumer-advocacy organizations thank the Office of the Attorney General for its work on the proposed California Consumer Privacy Act regulations. The draft regulations bring a measure of clarity and practical guidance to the CCPA’s provisions entitling consumers to access, delete, and opt-out of the sale of their personal information. The draft regulations overall represent a step forward for consumer privacy, but some specific draft regulations are bad for consumers and should be eliminated. Others require revision.

The coalition made dozens of suggestions. We note two here.

First, to implement CCPA’s right to opt-out of the sale of one’s personal information, the draft regulations at Section 315(c) would require online businesses to comply with user-enabled privacy controls, such as browser plugins, that signal a consumer’s choice to opt-out of such sales. EFF suggested such an approach in our March 2019 comments. The coalition comments now seek a clarification to this draft regulation: that “do not track” browser headers, which thousands of Californians have already adopted, are among the kinds of signals that online businesses must treat as an opt-out from data sale.
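As a concrete illustration of the kind of signal the coalition has in mind, a web server can read the long-standing "DNT" request header that browsers send. The sketch below (Python, with hypothetical function names; real sites would hook this into their ad and analytics pipeline) treats "DNT: 1" as an opt-out-of-sale flag:

```python
# Sketch: honoring the "do not track" (DNT) request header as a CCPA
# opt-out-of-sale signal. Function names are hypothetical; this only
# illustrates reading the header and flagging the user's record.

def is_opt_out_of_sale(headers):
    """Treat DNT: 1 as an opt-out of the sale of personal information."""
    return headers.get("DNT", "").strip() == "1"

def handle_request(headers, user_record):
    if is_opt_out_of_sale(headers):
        # Flag the record so downstream systems never sell or share it.
        user_record["do_not_sell"] = True
    return user_record
```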

Second, the coalition urges the California Attorney General to issue clarifying regulations that bar misguided efforts announced by some members of the adtech industry to evade CCPA’s right to opt-out of sales. Adtech is one of the greatest threats to consumer data privacy, as explained in a new EFF report on third-party tracking. The broad dissemination of personal information throughout the adtech ecology is a form of “sale” plainly subject to CCPA’s right to opt-out. Regulations should now lay to rest the crabbed arguments to the contrary.

The comments were signed by EFF and 11 other privacy advocacy organizations: Access Humboldt, ACLU of California, CALPIRG, Center for Digital Democracy, Common Sense Media, Consumer Federation of America, Consumer Reports, Media Alliance, Oakland Privacy, and Privacy Rights Clearinghouse.

Read the comments here.

We Need To Save .ORG From Arbitrary Censorship By Halting the Private Equity Buy-Out

Thu, 12/05/2019 - 19:11

The .ORG top-level domain and all of the nonprofit organizations that depend on it are at risk if a private equity firm is allowed to buy control of it. EFF has joined with over 250 respected nonprofits to oppose the sale of Public Interest Registry, the (currently) nonprofit entity that operates the .ORG domain, to Ethos Capital. Internet pioneers including Esther Dyson and Tim Berners-Lee have spoken out against this secretive deal. And 12,000 Internet users and counting have added their voices to the opposition.

What’s the harm in this $1.135 billion deal? In short, it would give Ethos Capital the power to censor the speech of nonprofit organizations (NGOs) to advance commercial interests, and to extract ever-growing monopoly rents from those same nonprofits. Ethos Capital has a financial incentive to engage in censorship—and, of course, in price increases. And the contracts that .ORG operates under don’t create enough accountability or limits on Ethos’s conduct.


Domain Registries Have Censorship Power

Registries like PIR manage the Internet’s top-level domains under policies set out by ICANN, the governing body for the Internet’s domain name system. Registries have the power to suspend domain names, or even transfer them to other Internet users, subject to their contracts with ICANN. When a domain name is suspended, all of the Internet resources that use that name are disrupted, including websites, email addresses, and apps. That power lets registries exert influence over speech on the Internet in much the same way that social networks, search engines, and other well-placed intermediaries can do. And that power can be sold or bartered to other powerful groups, including repressive governments and corporate interests, giving them new powers of censorship.

Using the Internet’s chokepoints for censorship already happens far too often. For example:

  • The registry operators Donuts and Radix, who manage several hundred top-level domains, have private agreements with the Motion Picture Association of America to suspend domains based on accusations of copyright infringement from major movie studios, with no court order or right of appeal.
  • The search engine Bing, along with firewall maintainers and other intermediaries, has suppressed access to websites offering truthful information about obtaining prescription medicines from online pharmacies. They acted at the request of groups with close ties to U.S. pharmaceutical manufacturers who seek to keep drug prices high. The same groups have sought cooperation from domain registries and their governing body, ICANN.
  • The governments of Turkey and the United Arab Emirates, among others, regularly submit a flood of takedown requests to intermediaries, presumably in the hope that those intermediaries won’t examine those requests closely enough to reject the unjustified and illegal requests buried within them.
  • Saudi Arabia has relied on intermediaries like Medium, Snapchat, and Netflix to censor journalism it deems critical of the country’s totalitarian government.
  • The Domain Name Association (DNA), a trade association for the domain name industry, has proposed a broad program of Internet speech regulation, to be enforced with domain suspensions, also with no accountability or due process guarantees for Internet users.

As the new operator of .ORG, Ethos Capital would have the ability to engage in these and other forms of censorship. It could enforce any limitations on nonprofits’ speech, including selective enforcement of particular national laws. For intermediaries with power over speech, such conduct can be lucrative, if it wins the favor of a powerful industry like the U.S. movie studios or of the government of an authoritarian country where the intermediary wishes to do business. Since many NGOs are engaged in speech that seeks to hold governments and industry to account, those powerful interests have every incentive to buy the cooperation of a well-placed intermediary, including an Ethos-owned PIR.

Not Enough Safeguards

The sale of PIR to Ethos Capital erodes the safeguards against this form of censorship.

First, the .ORG TLD has a unique meaning. A new NGO website or project may be able to use a different top-level domain, but none carries the same message. A domain name ending in .ORG is the key signifier of non-commercial, public-minded organizations on the Internet. Even the new top-level domains .NGO and .ONG (also run by PIR), which would appear to be substitutes for .ORG, have seen little use.

Established NGOs are in even more of a bind. The .ORG top-level domain is 34 years old, and many of the world’s most important NGOs have used .ORG names for decades. For established NGOs, changing domain names is scarcely an option. Changing from .ORG to a .INFO or .US domain, for example, means disrupting email communications, losing search engine placement, and incurring massive expenses to change an organization’s basic online identity. Established NGOs are effectively a captive audience for the policies and prices set by PIR.

Second, the top-level domain for nonprofits should itself be run by a nonprofit. Today, PIR is a subsidiary of the Internet Society (ISOC), which also promotes Internet access worldwide and oversees the Internet’s basic technical standards. ISOC is a longstanding part of the community of Internet governance organizations. When ISOC created PIR in 2002, it touted its nonprofit status and position in the community as the reasons it should run .ORG. And those community ties help explain why, when PIR proposed building its own copyright enforcement system in 2016, outcry from the community caused it to back down. If PIR is operated for private profit, it will inevitably be less attentive to the Internet governance community.

Third, ICANN, the organization that sets policy for the domain name system, has been busy removing the legal guardrails that could protect nonprofit users of .ORG. Earlier this year, ICANN removed caps on registration fees for .ORG names, allowing PIR to raise prices at will on its captive customer base of nonprofits. And ICANN also gave PIR explicit permission to create new “protections for the rights of third parties”—often used as a justification and legal cover for censorship—without community input or accountability.

Without these safeguards, the sale of PIR to Ethos raises unacceptable risks of censorship and financial exploitation for nonprofits the world over. Yet Ethos and ISOC insist on completing the sale as quickly as possible, without addressing the community’s concerns. Their only response to the massive public outcry against the deal has been vague, unenforceable promises of good behavior.

The sale needs to be halted, and a process begun to guarantee the rights of nonprofit Internet users. You can help by signing the petition.



Mint: Late-Stage Adversarial Interoperability Demonstrates What We Had (And What We Lost)

Thu, 12/05/2019 - 13:18

In 2006, Aaron Patzer founded Mint. Patzer had grown up in the city of Evansville, Indiana—a place he described as "small, without much economic opportunity"—but had created a successful business building websites. He kept up the business through college and grad school and invested his profits in stocks and other assets, leading to a minor obsession with personal finance that saw him devoting hours every Saturday morning to manually tracking every penny he'd spent that week, transcribing his receipts into Microsoft Money and Quicken.

Patzer was frustrated with the amount of manual work it took to track his finances with these tools, which at the time weren't smart enough to automatically categorize "Chevron" under fuel or "Safeway" under groceries. So he conceived of an ingenious hack: he wrote a program that would automatically look up every business name he entered in the online version of the Yellow Pages—constraining the search using the area code in the business's phone number so it would only consider local merchants—and use the Yellow Pages' own categories to populate the "category" field in his financial tracking tools.
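Patzer's hack can be sketched in a few lines. Here, yellow_pages_lookup is a hypothetical stand-in for his automated Yellow Pages queries, replaced with a canned local table for illustration:

```python
# Illustrative sketch of Patzer's categorization hack. The
# yellow_pages_lookup function is a hypothetical stand-in for his
# automated queries against the online Yellow Pages; here it is a
# canned table keyed by merchant name and area code.

def yellow_pages_lookup(merchant, area_code):
    # The real hack issued a web query constrained by the merchant's
    # area code, so only local listings were considered.
    directory = {
        ("chevron", "415"): "Fuel",
        ("safeway", "415"): "Groceries",
    }
    return directory.get((merchant.lower(), area_code))

def categorize(transactions):
    # Fill the "category" field the way Patzer's script populated
    # Microsoft Money and Quicken categories.
    for txn in transactions:
        txn["category"] = (
            yellow_pages_lookup(txn["merchant"], txn["area_code"])
            or "Uncategorized"
        )
    return transactions
```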

It occurred to Patzer that he could do even better, which is where Mint came in. Patzer's idea was to create a service that would take all your logins and passwords for all your bank, credit union, credit card, and brokerage accounts, and use these logins and passwords to automatically scrape your financial records, and categorize them to help you manage your personal finances. Mint would also analyze your spending in order to recommend credit cards whose benefits were best tailored to your usage, saving you money and earning the company commissions.

By international standards, the USA has a lot of banks: around 12,000 when Mint was getting started (in the US, each state gets to charter its own banks, leading to an incredible, diverse proliferation of financial institutions). That meant that for Mint to work, it would have to configure its scrapers to work with thousands of different websites, each of which was subject to change without notice.

If the banks had been willing to offer an API, Mint's job would have been simpler. But despite a standard format for financial data interchange called OFX (Open Financial Exchange), few financial institutions were offering any way for their customers to extract their own financial data. The banks believed that locking in their users' data could work to their benefit, as the value of having all your financial info in one place meant that once a bank locked in a customer for savings and checking, it could sell them credit cards and brokerage services. This was exactly the theory that powered Mint, with the difference that Mint wanted to bring your data together from any financial institution, so you could shop around for the best deals on cards, banking, and brokerage, and still merge and manage all your data.
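For a sense of what OFX interchange would have offered, here is a simplified, illustrative OFX-style transaction fragment with a minimal parser sketch. The fragment is abbreviated, not a complete valid OFX document (real OFX 1.x is an SGML-based format with a full message envelope and often-omitted closing tags):

```python
# A simplified, illustrative OFX-style transaction record and a
# minimal parser sketch. This is NOT a complete valid OFX document;
# it only shows the flavor of the tag/value data the standard carries.
import re

SAMPLE = """<STMTTRN>
<TRNTYPE>DEBIT
<DTPOSTED>20061015
<TRNAMT>-42.17
<NAME>CHEVRON
</STMTTRN>"""

def parse_ofx_fields(fragment):
    # Grab <TAG>value pairs; OFX 1.x commonly omits closing tags.
    return dict(re.findall(r"<([A-Z]+)>([^<\n]+)", fragment))

txn = parse_ofx_fields(SAMPLE)
```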

At first, Mint contracted with Yodlee, a company that specialized in scraping websites of all kinds, combining multiple webmail accounts with data scraped from news sites and other services in a single unified inbox. When Mint outgrew Yodlee's services, it founded a rival called Untangly, locking a separate team in a separate facility that never communicated with Mint directly, in order to head off any claims that Untangly had misappropriated Yodlee's proprietary information and techniques—just as Phoenix Technologies had created a separate "clean room" team to re-implement the IBM PC ROMs, creating an industry of "PC clones."

Untangly created a browser plugin that Mint's most dedicated users would use when they logged into their banks. The plugin would prompt them to identify elements of each page in the bank's websites so that the scraper for that site could figure out how to parse the bank's site and extract other users' data on their behalf.

To head off the banks' countermeasures, Untangly maintained a bank of cable-modems and servers running "headless" versions of Internet Explorer (a headless browser is one that runs only in computer memory, without drawing an actual browser window onscreen), and it throttled the rate at which the scripted interactions on these browsers ran, in order to make it harder for the banks to determine which of their users were Mint scrapers acting on behalf of customers and which were flesh-and-blood customers running their own browsers on their own behalf.
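The throttling idea can be sketched as follows. The function and delay bounds are hypothetical, purely to illustrate pacing scripted browser actions so they resemble human browsing rather than a bot:

```python
# Sketch of the throttling idea: pace scripted page interactions with
# randomized, human-like delays so automated sessions are harder to
# distinguish from real customers. The actions are hypothetical
# stand-ins for headless-browser navigation steps.
import random
import time

def human_paced(actions, min_delay=2.0, max_delay=8.0, sleep=time.sleep):
    """Run scripted browser actions with randomized gaps between them."""
    results = []
    for action in actions:
        results.append(action())
        # Wait a human-plausible interval before the next step.
        sleep(random.uniform(min_delay, max_delay))
    return results
```

The sleep parameter is injected so the pacing policy can be tested (or swapped out) without real waiting.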

As the above implies, not every bank was happy that Mint was allowing its customers to liberate their data, not least because the banks' winner-take-all plan was for their walled gardens to serve as reasons for customers to use their banks for everything, in order to get the convenience of having all their financial data in one place.

Some banks sent Mint legal threats, demanding that they cease-and-desist from scraping customer data. When this happened, Mint would roll out its "nuclear option"—an error message displayed to every bank customer affected by these demands informing them that their bank was the reason they could no longer access their own financial data. These error messages would also include contact details for the relevant decision-makers and customer-service reps at the banks. Even the most belligerent bank's resolve weakened in the face of calls from furious customers who wanted to use Mint to manage their own data.

In 2009, Mint became a division of Intuit, which already had a competing product with a much larger team. With the merged teams, they were able to tackle the difficult task of writing custom scrapers for the thousands of small banks they'd been forced to sideline for want of resources.

Adversarial interoperability is the technical term for a tool or service that works with ("interoperates" with) an existing tool or service—without permission from the existing tool's maker (that's the "adversarial" part).

Mint's story is a powerful example of adversarial interoperability: rather than waiting for the banks to adopt standards for data-interchange—a potentially long wait, given the banks' commitment to forcing their customers into treating them as one-stop-shops for credit cards, savings, checking, and brokerage accounts—Mint simply created the tools to take its users' data out of the banks' vaults and put it in vaults of the users' choosing.

Adversarial interoperability was once commonplace. It's a powerful way for new upstarts to unseat the dominant companies in a market—rather than trying to convince customers to give up an existing service they rely on, an adversarial interoperator can make a tool that lets users continue to lean on the existing services, even as they chart a path to independence from those services.

But stories like Mint's are rare today, thanks to a sustained, successful campaign by the companies that owe their own existence to adversarial interoperability to shut it down, lest someone do unto them as they had done unto others.

Thanks to decades of lobbying and lawsuits, we've seen a steady expansion of copyright rules, software patents (though these are thankfully in retreat today), enforceable terms-of-service and theories about "interference with contract" and "tortious interference."

These have grown to such an imposing degree that big companies don't necessarily need to send out legal threats or launch lawsuits anymore—the graveyard of new companies killed by these threats and suits is scary enough that neither investors nor founders have much appetite for risking it.

For Mint to have launched when it did, and done as well as it did, tells us that adversarial interoperability may be down, but it's not out. With the right legal assurances, there are plenty of entrepreneurs and investors who'd happily provide users with the high-tech ladders they need to scale the walled gardens that Big Tech has imprisoned them within.

The Mint story also addresses an important open question about adversarial interoperability: if we give technologists the right to make these tools, will they work? After all, today's tech giants have entire office-parks full of talented programmers. Can a new market entrant hope to best them in the battle of wits that plays out when they try to plug some new systems into Big Tech's existing ones?

The Mint experience points out that attackers always have an advantage over defenders. For the banks to keep Mint out, they'd have to have perfect scraper-detection systems. For Mint to scrape the banks' sites, they only need to find one flaw in the banks' countermeasures.

Mint also shows how an incumbent company's own size works against it when it comes to shutting out competitors. Recall that when a bank decided to send its lawyers after Mint, Mint was able to retaliate by recruiting the bank's own customers to blast it for that decision. The more users Mint had, the more complaints it would generate—and the bigger a bank was, the more customers it had to become Mint users, and defenders of Mint's right to scrape the bank's site.

It's a neat lesson about the difference between keeping out malicious hackers versus keeping out competitors. If a "bad guy" was attacking the bank's site, it could pull out all the stops to shut the activity down: lawsuits, new procedures for users to follow, even name-and-shame campaigns against the bad actor.

But when a business attacks a rival that is doing its own customers' bidding, its ability to do so has to be weighed against the ill will it will engender with those customers, and the negative publicity this kind of activity will generate. Consider that Big Tech platforms claim billions of users—that's a huge pool of potential customers for adversarial interoperators who promise to protect those users from Big Tech's poor choices and exploitative conduct!

This is also an example of how "adversarial interoperability" can peacefully co-exist with privacy protection: it's not hard to see how a court could distinguish between a company that gets your data from a company's walled garden at your request so that you can use it, and a company that gets your data without your consent and uses it to attack you.

Mint's pro-competitive pressure made banks better, and gave users more control. But of course, today Mint is a division of Intuit, a company mired in scandal over its anticompetitive conduct and regulatory capture, which have allowed it to subvert the Free File program that should give millions of Americans access to free tax-preparation services.

Imagine if an adversarial interoperator were to enter the market today with a tool that auto-piloted its users through the big tax-prep companies' sites to the Free File tools that would actually work for them (as opposed to tricking them into expensive upgrades, often by letting them get all the way to the end of the process before revealing that something about their tax situation makes them ineligible for that specific Free File product).

Such a tool would be instantly smothered with legal threats, from "tortious interference" to hacking charges under the Computer Fraud and Abuse Act. And yet, these companies owe their size and their profits to exactly this kind of conduct.

Creating legal protections for adversarial interoperators won't solve all our problems of market concentration, regulatory capture, and privacy violations—but giving users the right to control how they interact with the big services would certainly open a space where technologists, co-ops, entrepreneurs and investors could help erode the big companies' dominance, while giving the public a better experience and a better deal.

Certbot Leaves Beta with the Release of 1.0

Thu, 12/05/2019 - 10:47

Earlier this week EFF released Certbot 1.0, the latest version of our free, open source tool that helps websites encrypt their traffic. The release of 1.0 is a significant milestone for the project and is the culmination of the work done over the past few years by EFF and hundreds of open source contributors from around the world.

Certbot was first released in 2015 to automate the process of configuring and maintaining HTTPS encryption for site administrators by obtaining and deploying certificates from Let's Encrypt. Since its initial launch, many features have been added, including beta support for Windows, automatic nginx configuration, and support for over a dozen DNS providers for domain validation.

Certbot is part of EFF's project to encrypt the web. Using HTTPS instead of unencrypted HTTP protects people from eavesdropping, content injection, and cookie stealing, which can be used to take over your online accounts. Since the release of Let's Encrypt and Certbot, the percentage of web traffic using HTTPS has increased from 40% to 80%. This is significant progress in building a web that is encrypted by default but there is more work to be done.

The release of 1.0 officially marks the end of Certbot's beta phase, during which it has helped over 2 million users maintain HTTPS access to over 20 million websites. We’re very excited to see how many more users, and more websites, Certbot will assist in 2020 and beyond.

It’s thanks to our 30,000+ members that we’re able to maintain Certbot and push for a 100% encrypted web.

Support Certbot!

Contribute to EFF's Security-enhancing tech projects

The FCC Is Opening up Some Very Important Spectrum for Broadband

Wed, 12/04/2019 - 10:25

Decisions about who gets to use the public airwaves, and how they use them, impact our lives every day. From the creation of WiFi routers to the public auctions that gave us more than two options for our cell phone providers, the Federal Communications Commission (FCC)’s decisions reshape our technological world. And they’re about to make another one.

In managing the public spectrum, aka “the airwaves,” the FCC has a responsibility to ensure that commercial uses benefit the American public. Traditionally, the FCC either assigns spectrum to certain entities with specific use conditions (for example, television, radio, and broadband are “licensed uses”) or simply designates a portion of spectrum as an open field with no specific use in mind, called “unlicensed spectrum,” which is what WiFi routers use.

The FCC is about to make two incredibly important spectrum decisions. The first we’ve written about previously, but, in short, the FCC intends to reallocate spectrum currently used for satellite television to broadband providers through a public auction. The second is reassigning spectrum located in the 5.9 GHz frequency band from being exclusively licensed to the auto industry to being an open, unlicensed use.

We support this FCC decision because unlicensed spectrum allows innovators big and small to make use of a public asset without paying license fees or obtaining advance government permission. Users gain improved wireless services, more competition, and more services making the most of an asset that belongs to all of us.

Why Is 5.9 GHz Licensed to the Auto Industry?

In 1999, the FCC allocated a portion of the public airwaves to a new type of car safety technology using Dedicated Short Range Communications (DSRC). In theory, cars equipped with DSRC devices on the 5.9 GHz band would communicate with each other and coordinate to avoid collisions. Twenty years later, very few cars actually use DSRC. In fact, so few cars are using it that a study found that its current automotive use is worth about $6.2 million, while opening up the spectrum would be worth tens of billions of dollars. In other words, a public asset that could be used for next-generation WiFi is effectively lying fallow until the FCC changes part of the license held by the auto industry into an unlicensed use.

Even though it is barely using the spectrum, what are the chances the auto industry will give up exclusive access to a multi-billion-dollar public asset it gets for free? This is why last-ditch arguments that the auto industry must maintain exclusive use over a huge amount of spectrum as a matter of public safety are hollow. Nothing the FCC is doing here prevents cars from using this spectrum, and because the band is high-frequency, a great deal of data can travel over the airwaves in question.

It isn’t the FCC’s job to stand idly by while someone essentially squats on public property and let it go to waste. Rather, the FCC’s job is to continually evaluate who is given special permission by the government and to decide if they are producing the most benefit to the public.

Unlicensing 5.9 GHz Means Faster WiFi, Improved Broadband Competition, and Better Wireless Service

Spectrum is necessary for transmitting data, and more of it means more available bandwidth. WiFi routers today have a bandwidth speed limit because you can only move as much data as you have spectrum available. The frequency range also affects how much data you can move: earlier WiFi routers that used 2.4 GHz generally transmitted hundreds of megabits per second, while today’s routers also use 5.0 GHz to deliver gigabit speeds. More spectrum in the range of 5.9 GHz, with similar properties to the bands current gigabit routers use, will mean the next generation of WiFi routers can transmit even greater amounts of data.
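The bandwidth-to-throughput relationship described above follows from Shannon's capacity formula, C = B · log2(1 + SNR): the data-rate ceiling scales linearly with the spectrum available. The sketch below uses round, illustrative numbers, not actual WiFi channel allocations:

```python
# Illustrative only: Shannon's capacity formula shows why more
# spectrum (bandwidth B, in Hz) raises the data-rate ceiling.
# The channel widths and SNR below are round example numbers,
# not real WiFi channel figures.
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    # C = B * log2(1 + SNR), in bits per second.
    return bandwidth_hz * math.log2(1 + snr_linear)

# Doubling the bandwidth at the same SNR doubles the ceiling:
narrow = shannon_capacity_bps(40e6, 100)  # 40 MHz channel
wide = shannon_capacity_bps(80e6, 100)    # 80 MHz channel
```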

Adding more high-capacity spectrum to the unlicensed space also means that smaller wireless ISPs that compete with incumbents will be given more capacity for free. Typically, small wireless ISPs (WISPs) rely on unlicensed spectrum because they do not have the multi-billion-dollar financing that AT&T and Sprint have to purchase exclusive licenses. Their lack of financing also limits their ability to immediately deploy fiber wires to the home, and unlicensed spectrum allows them to bypass those infrastructure costs until they have enough customers to fund fiber infrastructure. In essence, improving the competitiveness of small wireless players is an essential part of eventually reaching a fiber-for-all future, because smaller competitive ISPs are also more aggressive in deploying fiber to the home than the big incumbent telecommunications companies that have generally abandoned their buildouts.

Lastly, wireless broadband service in general will improve because unlicensed spectrum has dramatically reduced congestion on cell towers by offloading traffic to WiFi hotspots and routers. This offloading process is so pervasive that, these days, 59 percent of our smartphone traffic is over WiFi instead of over 4G. It’s estimated that 71 percent of 5G traffic will actually be over WiFi in its early years. Adding more capacity to offloading and higher speeds will mean less congestion on public and small business guest WiFi as well.

As it stands, the 5.9 GHz band of the public airwaves is barely serving the public at all. The FCC deciding to open it up has only benefits and is a good idea.

How a Patent on Sorting Photos Got Used to Sue a Free Software Group

Tue, 12/03/2019 - 17:45

Taking and sharing pictures with wireless devices has become a common practice. It’s hardly a recent development: the distinction between computers and cameras has shrunk, especially since 2007 when smartphone cameras became standard. Even though devices that can take and share photos wirelessly have become ubiquitous over a period spanning more than a decade, the Patent Office granted a patent on an “image-capturing device” in 2018.

A patent on something so commonplace might be comical, but unfortunately, U.S. Patent No. 9,936,086 is already doing damage to software innovation. It’s creating litigation costs for real developers. The owner of this patent is Rothschild Patent Imaging LLC, or RPI, one of a network of notorious patent trolls connected to inventor Leigh Rothschild. We've written about two of them before: Rothschild Connected Devices Innovations, and Rothschild Broadcast Distribution Systems. Now, RPI has used the ’086 patent to sue the Gnome Foundation, a non-profit that makes free software.

The patent claims a generic “image-capturing mobile device” with equally generic components: a “wireless receiver,” a “wireless transmitter,” and “a processor operably connected to the wireless receiver and the wireless transmitter.” That processor is configured to (1) receive multiple photographic images, (2) filter those images using criteria “based on a topic, theme or individual shown in the respective photographic image,” and (3) transmit the filtered photographic images to another wireless device. In other words: the patent claims a smartphone that can receive images that a user can filter by content before sending to others.
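To see just how generic the claimed steps are, here is the entire receive/filter/transmit sequence as a few lines of ordinary code. The image records and the stand-in transmit function are hypothetical, for illustration only:

```python
# Illustration of how generic the claimed steps are: receiving
# images, filtering by "topic, theme or individual," and
# transmitting the filtered subset. Field names and the
# send_to_device stand-in are hypothetical.

def filter_images(images, topic):
    # Step 2 of the claim: filter by topic/theme/individual criteria.
    return [img for img in images if img.get("topic") == topic]

def share_filtered(images, topic, send_to_device=print):
    # Steps 1 and 3: take the received images and transmit the
    # filtered subset to another device.
    for img in filter_images(images, topic):
        send_to_device(img)
```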

According to Rothschild’s complaint, all it takes to infringe its patent is to provide a product that “offers a number of ways to wirelessly share photos online such as through social media.” How in the world could a patent on something so basic and established qualify as inventive in 2018?

At least part of the answer is that the Patent Office simply failed to apply the Supreme Court’s Alice decision, which makes clear that using generic computers to automate established human tasks cannot qualify as an “invention” worthy of patent protection. Applying Alice, the Federal Circuit specifically rejected a patent on the “abstract idea of classifying and storing digital images in an organized manner” in TLI Communications.

Inexplicably, there’s no sign the Patent Office gave either decision any consideration before granting this application. Alice was decided in 2014; TLI in 2016. Rothschild filed the application that became the ’086 patent in June 2017. Before it was granted, the application received only one non-final rejection from a Patent Office examiner. That examiner did not raise any concerns about the application’s eligibility for patent protection, let alone any concerns specifically stemming from Alice or TLI.

The examiner only compared the application to one earlier reference—a published patent application from 2005. Rothschild claimed that system was irrelevant, because the filter was based on the image’s quality; in Rothschild’s “invention,” the filter was based on “subject identification” criteria, such as the topic, theme, or individual in the photo.

Rothschild didn’t describe how the patent performed the filtering step, or explain why filtering on these criteria would be a technical invention. Nor did the Patent Office ask. But under Alice, it should have. After all, humans have been organizing photos based on topic, theme, and individuals depicted for as long as humans have been organizing photos.

Because the Patent Office failed to apply Alice and granted the ’086 patent, the question of its eligibility may finally get the attention it needs in court. The Gnome Foundation has filed a motion to dismiss the case, pointing out the patent’s lack of eligibility. We hope the district court will apply Alice and TLI to this patent. But a non-profit that exists to create and spread free software never should have had to spend its limited time and resources on this patent litigation in the first place.

Sen. Cantwell Leads With New Consumer Data Privacy Bill

Tue, 12/03/2019 - 17:13

There is a lot to like about U.S. Sen. Cantwell’s new Consumer Online Privacy Rights Act (COPRA). It is an important step towards the comprehensive consumer data privacy legislation that we need to protect us from corporations that place their profits ahead of our privacy.

The bill, introduced on November 26, is co-sponsored by Sens. Schatz, Klobuchar, and Markey. It fleshes out the framework for comprehensive federal privacy legislation announced a week earlier by Sens. Cantwell, Feinstein, Brown, and Murray, who are, respectively, the ranking members of the Senate committees on Commerce, Judiciary, Banking, and Health, Education, Labor and Pensions.

This post will address COPRA’s various provisions in four groupings: EFF’s key priorities, the bill’s consumer rights, its business duties, and its scope of coverage.

EFF’s Key Priorities

COPRA satisfies two of EFF’s three key priorities for federal consumer data privacy legislation: private enforcement by consumers themselves; and no preemption of stronger state laws. COPRA makes a partial step towards EFF’s third priority: no “pay for privacy” schemes.

Private enforcement. All too often, enforcement agencies lack the resources or political will to enforce statutes that protect the public, so members of the public must be empowered to step in. Thus, we are pleased that COPRA has a strong private right of action to enforce the law. Specifically, in section 301(c), COPRA allows any individual who is subjected to a violation of the Act to bring a civil suit. They may seek damages (actual, liquidated, and punitive), equitable and declaratory relief, and reasonable attorney’s fees.

COPRA also bars enforcement of pre-dispute arbitration agreements, in section 301(d). EFF has long opposed these unfair limits on user enforcement of their legal rights in court.

Further, COPRA in section 301(a) provides for enforcement by a new Federal Trade Commission (FTC) bureau comparable in size to existing FTC bureaus. State Attorneys General and consumer protection officers may also enforce the law, per section 301(b). It is helpful to diffuse government enforcement in this manner.

No preemption of stronger state laws. COPRA expressly, in section 302(c), does not preempt state laws unless they are in direct conflict with COPRA, and a state law is not in direct conflict if it affords greater protection. This is most welcome. Federal legislation should be a floor and not a ceiling for data privacy protection. EFF has long opposed preemption by federal laws of stronger state privacy laws.

“Pay for privacy.” COPRA only partially addresses EFF’s third priority: that consumer data privacy laws should bar businesses from retaliating against consumers who exercise their privacy rights. Otherwise, businesses will make consumers pay for their privacy, by refusing to serve privacy-minded consumers at all, by charging them higher prices, or by providing them services of a lower quality. Such “pay for privacy” schemes discourage everyone from exercising their fundamental human right to data privacy, and will result in a society of income-based “privacy haves” and “privacy have nots.”

In this regard, COPRA is incomplete. On the bright side, it bars covered entities from conditioning the provision of service on the individual’s waiver of their privacy rights in section 109. But COPRA allows covered entities to charge privacy-minded consumers a higher price or provide them a lower quality of service. We urge amendment of COPRA to bar such “pay for privacy” schemes.

Consumer Rights Under COPRA

COPRA would provide individuals with numerous data privacy rights that they may assert against covered entities.

Right to opt-out of data transfer. An individual may require a covered entity to stop transferring their data to other entities. This protection, in section 105(b), is an important one. COPRA requires the FTC to establish processes for covered entities to use to facilitate opt-out requests. In doing so, the FTC shall “minimize the number of opt-out designations of a similar type that a consumer must take.” We hope these processes include browser headers and similar privacy settings, such as the “do not track” system, that allow tech users at once to signal to all online entities that they have opted-out.
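As a rough sketch of how such a browser-wide signal works, consider the (real, though never widely honored) “do not track” request header: the browser sends one header, and every site can read it without the user filing a separate opt-out at each one. The handler below is a hypothetical illustration, not a mechanism COPRA prescribes; the bill leaves the actual process to the FTC.

```python
# Hypothetical server-side check for a browser-wide opt-out signal.
# A browser with "do not track" enabled sends the header "DNT: 1"
# on every request, so no per-site opt-out request is needed.

def user_opted_out(headers):
    """Treat a DNT: 1 request header as a data-transfer opt-out."""
    return headers.get("DNT") == "1"

request_headers = {"Host": "example.com", "DNT": "1"}
transfer_allowed = not user_opted_out(request_headers)
```

The appeal of this design is exactly what the quoted FTC mandate asks for: one designation covers every covered entity the user interacts with.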

Right to opt-in to sensitive data processing. An individual shall be free from any data processing or transfer of their “sensitive” data, unless they affirmatively consent to such processing, under section 105(c). There is an exception for certain “publicly available information.”

The bill has a long list of what is considered “sensitive” data: government-issued identifiers; information about physical and mental health; credentials for financial accounts; biometrics; precise geolocation; communications content and metadata; email, phone number, or account log-in credentials; information revealing race, religion, union membership, sexual orientation, sexual behavior, or online activity over time and across websites; calendars, address books, phone and text logs, photos, or videos on a device; nude pictures; any data processed in order to identify the above data; and any other data designated by the FTC.

Of course, a great deal of information that the bill does not deem “sensitive” is in fact extraordinarily sensitive. This includes, for example, immigration status, marital status, lists of familial and social contacts, employment history, sex, and political affiliation. So COPRA’s list of sensitive data is under-inclusive. In fact, any such list will be under-inclusive, as new technologies make it ever-easier to glean highly personal facts from apparently innocuous bits of data. Thus, all covered information should be free from processing and transfer, absent opt-in consent, and a few other tightly circumscribed exceptions.

Right to access. An individual may obtain from a covered entity, in a human-readable format, the covered data about them, and the names of third parties their data was disclosed to. Affirming this right, in section 102(a), is good. But requesters should also be able to learn the names of the third parties who provided their personal data to the responding entity. To map the flow of their personal data, consumers must be able to learn both where it came from and where it went.

Right to portability. An individual may export their data from a covered entity in a “structured, interoperable, and machine-readable format.” This right to data portability, in section 105(a), is an important aspect of user autonomy and the right-to-know. It also may promote competition, by making it easier for tech users to bring their data from one business to another.
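As a toy illustration of what “structured, interoperable, and machine-readable” means in practice, a covered entity could export records as JSON rather than as a rendered web page, so another service can ingest them directly. The record fields below are invented for illustration.

```python
# Hypothetical portability export: the same records a site shows in
# its web interface, serialized to JSON so any other service can
# parse them. Field names are illustrative, not from the bill.
import json

records = [
    {"field": "email", "value": "user@example.com", "shared_with": ["AdCo"]},
    {"field": "city", "value": "San Francisco", "shared_with": []},
]

export = json.dumps(records, indent=2)
```

Any widely documented format (JSON, CSV, XML) would satisfy the same goal; what matters for competition is that a rival service can read the export without reverse-engineering it.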

Rights to delete and to correct. An individual may require a covered entity to delete or correct covered data about them, in sections 103 and 104.

Business Duties Under COPRA

COPRA would require businesses to shoulder numerous duties, even if a consumer does not exercise any of the aforementioned rights.

Duty to minimize data processing. COPRA, in section 106, would bar a covered entity from processing or transferring data “beyond what is reasonably necessary, proportionate, and limited” to certain kinds of purposes. This is “data minimization,” that is, the principle that an entity should minimize its processing of consumer data. Minimization is an important tool in the data privacy toolbox. We are glad COPRA has a minimization rule. We are also glad COPRA would apply this rule to all the ways an entity processes data (and not just, for example, to data collection or sharing).

However, COPRA should improve its minimization yardstick. Data privacy legislation should bar companies from processing data except as reasonably necessary to give the consumer what they asked for, or for a few other narrow purposes. Along these lines, COPRA allows processing to carry out the “specific” purpose “for which the covered entity has obtained affirmative express consent,” or to “complete a transaction … specifically requested by an individual.” Less helpful is COPRA’s additional allowance of processing for the purpose “described in the privacy policy made available by the covered entity.” We suggest deletion of this allowance, because most consumers will not read the privacy policy.

Duty of loyalty. COPRA, in section 101, would bar companies from processing or transferring data in a manner that is “deceptive” or “harmful.” The latter term means likely to cause: a financial, physical, or reputational injury; an intrusion on seclusion; or “other substantial injury.” This is a good step. We hope legislators will also explore “information fiduciary” obligations where the duty of loyalty would require the business to place the consumer’s data privacy rights ahead of the business’ own profits.

Duty to assess algorithmic decision-making impact. An entity must conduct an annual impact assessment if it uses algorithmic decision-making to determine: eligibility for housing, education, employment, or credit; distribution of ads for the same; or access to public accommodations. This annual assessment—as described in section 108(b)—must address, among other things, whether the system produces discriminatory results. This is good news. EFF has long sought greater transparency about algorithmic decision-making.

Duty to build privacy protection systems. A covered entity must designate a privacy officer and a data security officer. These officers must implement a comprehensive data privacy program, annually assess data risks, and facilitate ongoing compliance with COPRA, per section 202. Moreover, the CEO of a “large” covered entity must certify, based on review, the existence of adequate internal controls and reporting structures to ensure compliance. COPRA in section 2(15) defines a “large” entity as one that processes the data of 5 million people or the sensitive data of 100,000 people. These COPRA rules will help ensure that businesses build the privacy protection systems needed to safeguard consumers’ personal information.

Duty to publish a privacy policy. A covered entity must publish a privacy policy that states, among other things, the categories of data it collects, the purpose of collection, the identity of entities to which it transfers data, and the duration of retention. This language, in section 102(b), will advance transparency.

Duty to secure data. A covered entity must establish and implement reasonable data security practices, as described in section 107.

Scope of Coverage

Consumer data privacy laws must be scoped to particular data, to particular covered entities, and with particular exceptions.

Covered data. COPRA, in section 2(8)(A), protects “covered data,” defined as “information that identifies, or is linked or reasonably linkable to an individual or a consumer device, including derived data.” This term excludes de-identified data, and information lawfully obtained from government records.

We are pleased that “covered data” extends to “devices,” and that “derived” data includes “data about a household” in section 2(11). Some businesses track devices and households, without ascertaining the identity of individuals.

Unfortunately, COPRA defines “covered data” to exclude “employee data,” meaning personal data collected in the course of employment and processed solely for employment in sections 2(8)(B)(ii) and 2(12). For many people, the greatest threat to data privacy comes from their employers and not from other businesses. Some businesses use cutting-edge surveillance tools to closely scrutinize employees at computer workstations (including their keystrokes) and at assembly lines (including wristbands to monitor physical movements). Congress must protect the data privacy of workers as well as consumers.

Covered entities. COPRA, as outlined in section 2(9), applies to every entity or person subject to the FTC Act. That Act, in turn, excludes various economic sectors, such as common carriers, per 15 U.S.C. 45(a)(2). Hopefully, this COPRA limitation reflects the jurisdictional frontiers of the various congressional committees—and the ultimate federal consumer data privacy bill will apply across economic sectors.

COPRA excludes “small business” from the definition of “covered entity” under sections 2(9) & (23). EFF supports such exemptions, among other reasons because small start-ups often are engines of innovation. Two of COPRA’s three size thresholds would exclude small businesses: $25 million in gross annual revenue, or 50% of revenue from transferring personal data. But COPRA’s third size threshold would capture many small businesses: annual processing of the personal data of 100,000 people, households, or devices. Many small businesses have websites that process the IP addresses of 300 visitors per day, which adds up to more than 100,000 per year. We suggest deleting this third threshold, or raising it by an order of magnitude.

Exceptions. COPRA contains various exemptions, listed in sections 110(c) through 110(g).

Importantly, it includes a journalism exemption in section 110(e): “Nothing in this title shall apply to the publication of newsworthy information of legitimate public concern to the public by a covered entity, or to the processing or transfer of information by a covered entity for that purpose.” This exemption is properly framed by the activity of journalism, which all people and organizations have a First Amendment right to exercise, regardless of whether they are a professional journalist or a news organization.

COPRA, in section 110(d)(1)(D), exempts the processing and transfer of data as reasonably necessary “to protect against malicious, deceptive, fraudulent or illegal purposes.” Unfortunately, many businesses may interpret such language to allow them to process all manner of personal data, in order to identify patterns of user behavior that the businesses deem indicative of attempted fraud. We urge limitation of this exemption.


We thank Sen. Cantwell for introducing COPRA. It is a strong step forward in the national conversation over how government should protect us from businesses that harvest and monetize our personal information. While we will seek strengthening amendments, COPRA is an important data privacy framework for legislators and privacy advocates.

EFF Releases Certbot 1.0 to Help More Websites Encrypt Their Traffic

Tue, 12/03/2019 - 15:08
Two Million Users Already Actively Using Certbot to Keep Sites Secure

San Francisco - The Electronic Frontier Foundation (EFF) today released Certbot 1.0: a free, open source software tool to help websites encrypt their traffic and keep their sites secure.

Certbot was first released in 2015, and since then it has helped more than two million website administrators enable HTTPS by automatically deploying Let’s Encrypt certificates. Let’s Encrypt is a free certificate authority that EFF helped launch in 2015, now run for the public’s benefit through the Internet Security Research Group (ISRG).

HTTPS is a huge upgrade in security from HTTP. For many years, website owners chose to only implement HTTPS for a small number of pages, like those that accepted passwords or credit card numbers. However, in recent years, it has become clear that all web pages need protection. Pages served over HTTP are vulnerable to eavesdropping, content injection, and cookie stealing, which can be used to take over your online accounts.

“Securing your web browsing with HTTPS is an important part of protecting your information, like your passwords, web chats, and anything else you look at or interact with online,” said EFF Senior Software Architect Brad Warren. “However, Internet users can’t do this on their own—they need site administrators to configure and maintain HTTPS. That's where Certbot comes in. It automates this process to make it easy for everyone to run secure websites.”
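As a sketch of what that automation looks like in practice, here is a typical Certbot session on a server running nginx. The install command shown is for Debian/Ubuntu, and exact package names and flags vary by platform and web server, so check Certbot's own documentation for your setup.

```shell
# Install Certbot and its nginx plugin (Debian/Ubuntu example).
sudo apt install certbot python3-certbot-nginx

# Obtain a Let's Encrypt certificate and let Certbot update the
# nginx configuration to serve the site over HTTPS.
sudo certbot --nginx -d example.com -d www.example.com

# Let's Encrypt certificates expire after 90 days; confirm that
# automated renewal is working without actually renewing.
sudo certbot renew --dry-run
```

Certbot's interactive prompts handle the certificate request, domain validation, and server reconfiguration that administrators previously had to script or perform by hand.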

Certbot is part of EFF’s larger effort to encrypt the entire Internet. Along with our browser add-on, HTTPS Everywhere, Certbot aims to build a network that is more structurally private, safe, and protected against censorship. The project is encrypting traffic to over 20 million websites, and has recently added beta support for Windows-based servers. Before the release of Let’s Encrypt and Certbot, only 40% of web traffic was encrypted. Now, that number is up to 80%.

“A secure web experience is important for everyone, but for years it was prohibitively hard to do,” said Max Hunter, EFF’s Engineering Director for Encrypting the Internet. “We are thrilled that Certbot 1.0 now makes it even easier for anyone with a website to use HTTPS.”


Contact: Brad Warren, Senior Software Architect; Max Hunter, Engineering Director, Encrypting the Internet

EFF Report Exposes, Explains Big Tech’s Personal Data Trackers Lurking on Social Media, Websites, and Apps

Mon, 12/02/2019 - 08:26
User Privacy Under Relentless Attack by Trackers Following Every Click and Purchase

San Francisco—The Electronic Frontier Foundation (EFF) today released a comprehensive report that identifies and explains the hidden technical methods and business practices companies use to collect and track our personal information from the minute we turn on our devices each day.

Published on Cyber Monday, when millions of consumers are shopping online, “Behind the One-Way Mirror” takes a deep dive into the technology of corporate surveillance. The report uncovers and exposes the myriad techniques—invisible pixel images, browser fingerprinting, social widgets, mobile tracking, and face recognition—companies employ to collect information about who we are, what we like, where we go, and who our friends are. Amazon, Facebook, Google, Twitter, and hundreds of lesser known and hidden data brokers, advertisers, and marketers drive data collection and tracking across the web.

“The purpose of this paper is to demystify tracking by focusing on the fundamentals of how and why it works and explain the scope of the problem. We hope the report will educate and mobilize journalists, policy makers, and concerned consumers to find ways to disrupt the status quo and better protect our privacy,” said Bennett Cyphers, EFF staff technologist and report author.

“Behind the One-Way Mirror” focuses on third-party tracking, which is often not obvious or visible to users. Webpages contain embedded images and invisible code that come from entities other than the website owner. Most websites contain dozens of these bugs that go on to record and track your browsing activity, purchases, and clicks. Mobile apps are equally rife with tracking code, which can relay app activity, physical location, and financial data to unknown entities.
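As a rough sketch of how one such bug works, consider an invisible 1x1 “pixel” image: the page embeds an image whose URL carries identifiers, so the tracker's server logs who loaded which page whenever the browser fetches it. The hostname and parameter names below are invented for illustration, not any real tracker's scheme.

```python
# Hypothetical tracking pixel: the identifying data rides along in
# the image URL's query string, and the tracker recovers it from
# its server logs. All names here are illustrative only.
from urllib.parse import urlencode, urlparse, parse_qs

def pixel_url(tracker_host, user_id, page):
    """Build the URL for a 1x1 image that reports this page view."""
    query = urlencode({"uid": user_id, "page": page})
    return f"https://{tracker_host}/pixel.gif?{query}"

# The publisher's page would embed: <img src="..." width=1 height=1>
url = pixel_url("tracker.example", "u-42", "/checkout")

# Server side, the tracker decodes the same parameters from its logs.
params = parse_qs(urlparse(url).query)
```

Because the image is fetched from the tracker's own domain, the request can also carry the tracker's cookies, letting it link this page view to the same user's activity on every other site that embeds the pixel.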

With this information companies create behavioral profiles that can reveal our political affiliation, religious beliefs, sexual identity and activity, race and ethnicity, education level, income bracket, purchasing habits, and physical and mental health. The report shows how relentless data collection and profile building fuels the digital advertising industry that targets users with invasive ads and puts our privacy at risk.

“Today online shoppers will see web pages, ads, and their social media feeds. What they won’t see are trackers controlled by tech companies, data brokers, and advertisers that are secretly taking notes on everything they do,” said Cyphers. “Dominant companies like Facebook can deputize smaller publishers into installing their tracking code, so they can track individuals who don’t even use Facebook.”

"Behind the One-Way Mirror" offers tips for users to fight back against online tracking by installing EFF’s tracker-blocker extension Privacy Badger in their browser and changing phone settings. Online tracking is hard to avoid, but there are steps users can take to seriously cut back on the amount of data that trackers can collect and share.

“Privacy is often framed as a matter of personal responsibility, but a huge portion of the data in circulation isn’t shared willingly—it’s collected surreptitiously and with impunity. Most third-party data collection in the U.S. is unregulated,” said Cyphers. “The first step in fixing the problem is to shine a light, as this report does, on the invasive third-party tracking that, online and offline, has lurked for too long in the shadows.”


Contact: Bennett Cyphers, Staff Technologist