EFF's Deeplinks Blog: Noteworthy news from around the internet

Oakland Renters Deserve Quality Service and The Power To Choose Their ISP

Thu, 02/14/2019 - 19:44

Oakland residents, we need your stories and experience to continue the fight to stop powerful Internet Service Providers (ISPs) from limiting your ability to choose the service that’s best for you.

Submit Your Story

If you live in Oakland and have had trouble acquiring service from the ISP of your choice, EFF wants to know.


For years, renters have been denied access to the Internet Service Provider of their choice as a result of pay-to-play schemes. These schemes, promoted by the corporations that control the largest ISPs, let those corporations entice landlords into denying tenants the ability to choose a provider that shares their values or offers the plan that best fits their needs and budget.

This concern was only exacerbated when the FCC repealed the 2015 Open Internet Order. Chairman Pai and the FCC claimed that net neutrality protections were unnecessary because the free market would prevent exploitative practices by allowing customers to vote with their dollars. But with more than half the country having only one option for a high-speed Internet Service Provider, this supposed choice has never been grounded in reality. Even in cities like Oakland, where many residents ostensibly have a choice, thousands of renters are denied the power of that option by real estate trusts and management firms that close their properties to every provider except the one offering the most enticing landlord incentives.

In January of 2017, San Francisco adopted critical protections to stop these exploitative practices. As a result, San Francisco residents enjoy better, more affordable options than many of their friends and coworkers in neighboring communities.

EFF, local residents, advocacy groups, and businesses have begun working with Oakland lawmakers to make sure that the city's renters can take advantage of these same protections. If you live in Oakland and have experienced difficulty acquiring Internet service from the provider that's best for you, your City Council representatives want to know.

Designing Welcome Mats to Invite User Privacy

Thu, 02/14/2019 - 13:50

The way we design user interfaces can have a profound impact on the privacy of a user’s data. It should be easy for users to make choices that protect their data privacy. But all too often, big tech companies instead design their products to manipulate users into surrendering their data privacy. These methods are often called “Dark Patterns.”

When you purchase a new phone, tablet, or “smart” device, you expect to have to set it up with the credentials needed for it to be fully usable. For Android devices, you set up your Google account. For iOS devices, you set up your Apple ID. For your Kindle, you set up your Amazon account.

Privacy by default should be the goal. However, on many platforms the on-boarding process has been paired with particularly worrisome practices that stand in the way of this aspiration.

What are “Dark Patterns”?

Harry Brignull, a UX researcher, coined the term “Dark Patterns.” He maintains a site dedicated to documenting the different types of Dark Patterns, where he explains: “Dark Patterns are tricks used in websites and apps that make you buy or sign up for things that you didn't mean to.”

The Norwegian Consumer Council (the Forbrukerrådet or NCC) builds on this critical UX concept in a recent report that criticizes “features of interface design crafted to trick users into doing things that they might not want to do, but which benefit the business in question.”

On the heels of this report, the NCC filed a complaint against Google on behalf of a consumer. This complaint argues that Google violated the European Union’s General Data Protection Regulation (GDPR) by tricking the consumer into giving Google access to their location information. Likewise, the French data protection agency (the CNIL) recently ruled that some of Google’s consent and transparency practices violate the GDPR. The CNIL fined Google 50 million euros (equivalent to about 57 million U.S. dollars).

The NCC report emphasizes two important steps in the on-boarding process of Android-based devices: the enabling of Web & App Activity and Location History. These two services encompass a wide variety of information exchanges between different Google applications and services. Examples include collection of real-time location data on Google Maps and audio-based searches and commands via Google Assistant.

It is possible to disable these services in the “Activity Controls” section of one’s account. But Google’s on-boarding process leads users to opt in to information disclosure without realizing it, then makes these so-called “choices” about privacy difficult to undo. Because the options were never presented fairly in the first place, the burden falls on the consumer to retroactively opt out.

Of course, Google isn’t alone in using Dark Patterns to coerce users into “consenting” to different permissions. For example, in the image immediately below, Facebook Messenger’s SMS feature presents itself when you first download the application. Giving SMS permission would mean making Facebook Messenger the default texting application for your phone. Note the bright blue “OK”, as opposed to the less prominent “Not Now”.

Likewise, in the next image immediately below, Venmo’s onboarding encourages users to connect to Facebook and sync the contacts from their phones. Note how “Connect Facebook” is presented as the bolder and more apparent option, nudging users toward sharing robust profiles of information from their Facebook networks.

These are classic Dark Patterns, deploying UX design against consumer privacy and in favor of corporate profit.

What is “Opinionated Design”?


Of course, UX design can also guide users to protect their safety. “Opinionated Design” uses the same techniques as Dark Patterns, by means of persuasive visual indicators, bolder options, and compelling wording. For example, the Google Chrome security team used the design principles of “Attractiveness of Choice” and “Choice Visibility” to effectively warn some users about SSL hazards, as discussed in their report in 2015. When the safety of the user is valued by the designer and product team, they can guide the user away from particularly vulnerable situations while browsing.

The common thread between Opinionated Design and Dark Patterns is the power of the designer behind the technology to nudge the user toward actions the business would like the user to take. In the case of Google Chrome’s SSL warnings, explanations and clear guidance back to safety can help protect a person navigating the web.

These are examples of Opinionated Design:

SSL warnings are presented to the user with brief explanations of why the connection is not safe. Note how “Back to safety” is boldly presented to guide the user back from a potential attack.

Privacy by Default

Part of the solution is new legislation that requires companies to obtain opt-in consent that is easy for users to understand before they harvest and monetize users’ data. To do this, UX design must pivot from using Dark Patterns to satisfy business metrics. Among other things, it should:

  • Decouple the on-boarding process for devices and applications from the consent process.
  • Visually display equally weighted options on pages that involve consent to data collection, use, and sharing.
  • Default to the “no” option during setup, since consumers feel uneasy about privacy (see the sketch below).
  • Recognize that coercing “consent” for lucrative data bundling may satisfy a temporary metric, but that public distrust of the platform will outweigh any gains from unethical design.
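Here is a minimal sketch, in Python, of what “default to no” means in practice. The setting names and the record_consent() helper are hypothetical illustrations, not any vendor’s real API:

```python
# A minimal sketch of "privacy by default": every data-collection toggle
# starts disabled, and only an explicit, affirmative user action enables it.
# Setting names and the record_consent() helper are hypothetical.
DEFAULT_CONSENT = {
    "location_history": False,
    "web_app_activity": False,
    "contact_sync": False,
}

def record_consent(settings: dict, toggle: str, user_opted_in: bool) -> dict:
    """Pre-ticked boxes are not consent: absent a positive act, keep the default."""
    updated = dict(settings)
    if user_opted_in:
        updated[toggle] = True
    return updated

# A skipped or dismissed consent screen leaves the user opted out.
prefs = record_consent(DEFAULT_CONSENT, "location_history", user_opted_in=False)
print(prefs["location_history"])  # False
```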

We must continue this critical discussion around consent and privacy, and urge product designers and managers to build transparency into their applications and devices. Privacy doesn’t have to be painful and costly if it is integrated into UX design from the beginning, rather than stapled on at the end.

Powerful Permissions, Wimpy Warnings: Installing a Root Certificate Should be Scary

Thu, 02/14/2019 - 13:27
More lessons from "Facebook Research"

Last week, Facebook was caught using a sketchy market research app to gobble up large amounts of sensitive data about user activity after instructing users to alter the root certificate store on their phones. A day later, Google pulled a similar iOS “research program” app. Both of these programs are a clear breach of user trust that we have written about extensively.

This news also drew attention to an area where both Android and iOS could improve. Asking users to alter root certificate stores gave Facebook the ability to intercept network traffic from users’ phones even when that traffic is encrypted, making users’ otherwise secure Internet traffic and communications available to Facebook. The way both Android and iOS alert users to this possibility (the “UX flow”) could be improved dramatically.

To be clear, Android and iOS should not ban these capabilities altogether, as Apple has already done for sideloaded applications and VPNs. The ability to alter root certificate stores is valuable to researchers and power users, and should never be locked down for device owners. A root certificate allows researchers to analyze encrypted data that a phone’s applications are sending off to third parties, exposing whether they’re exfiltrating credit-card numbers or health data, or peddling other usage data to advertisers. However, Facebook’s manipulation of regular users into granting this capability shows the need for clearer UX and more obvious messaging.

Confusing prompts for adding root certificates

When regular users are manipulated into installing a root certificate on their device, it may not be clear that this allows the owner of the root certificate to read any encrypted network traffic.
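To make the stakes concrete, here is a minimal sketch using Python’s standard ssl module; the file extra_root.pem stands in for a hypothetical user-added root certificate:

```python
# A TLS client accepts any server certificate that chains to a CA in its
# trust store, so whoever controls an added root can transparently proxy
# "secure" connections. "extra_root.pem" is a hypothetical added root.
import socket
import ssl

context = ssl.create_default_context()           # trusts the system's roots
context.load_verify_locations("extra_root.pem")  # ...plus one more root

with socket.create_connection(("example.com", 443)) as sock:
    # This handshake now also succeeds against any certificate signed by the
    # extra root, including one minted on the fly by an interception proxy
    # sitting between the device and the real server.
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.getpeercert()["issuer"])
```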

On both iOS and Android, users installing a root certificate click through a process filled with vague jargon. This is the explanation users get, with inaccessible jargon bolded.

Android: “Note: The issuer of this certificate may inspect all traffic to and from the device.”

iOS: “Installing the certificate “<certificate name>” will add it to the list of trusted certificates on your iPhone. This certificate will not be trusted for websites until you enable it in Certificate Trust Settings.”

 

Android's warning before adding a root certificate is some small red text filled with jargon.

iOS's warning is much larger, but doesn't explain at all what significance this action may have to a non-technical user.

Regular users probably don’t know about the X.509 Certificate ecosystem, who certificate issuers are, what it means to “trust” a certificate, and its relationship to encrypting their data. On Android, the warning is vague about who has what capabilities: an “issuer … may … inspect all traffic”. On iOS, there’s no explanation whatsoever, even in the “Certificate Trust Settings,” about why this may be a dangerous action.

Security-compromising actions should have understandable messaging for non-technical users

The good news: it’s possible to get this sort of messaging right.

For instance, these dangers also apply in browsers, where the warnings to users are much clearer. Compare the above messaging flow for trusting root certificates on your phone to the equivalent warnings in browsers when you hit a website with a self-signed or untrusted certificate. Chrome warns in very large letters, “Your connection is not private,” and Firefox similarly announces, “Your connection is not secure.” Chrome’s messaging even lists possible types of sensitive data that may be exfiltrated: “passwords, messages, or credit cards.” Changing your browser’s root certificate store then involves multiple steps hidden behind an “Advanced” button.

Chrome's warning on websites with self-signed certificates. The messaging is clear and understandable, and changing your browser’s root certificate store then involves multiple steps hidden behind an “Advanced” button.

The prompt that appears when entering the developer console on the Facebook website.

Another good example comes from Facebook itself: when you open a browser developer console on Facebook’s website, a big red “Stop!” appears to prevent users who aren’t familiar with the console from doing something dangerous. Here, Facebook goes out of its way to warn users about the dangers of using a feature meant for researchers and developers. Facebook’s “market research” app, Android, and iOS did none of this.

The answer should not be to vilify root certificates and their capabilities in general. Tools like these are invaluable to security researchers and privacy experts. At the same time, they should not be presented to general users without abundantly clear messaging and design indicating their potential dangers.

National Emergencies: Constitutional and Statutory Restrictions on Presidential Powers

Thu, 02/14/2019 - 11:38

When a president threatens to exercise the power to declare a national emergency, our system of checks and balances faces a crucial test. With President Trump threatening such a declaration in order to build his proposed physical border wall, that test is an important one, and it could quickly implicate your right to privacy and to a transparent government.


EFF has long tangled with governmental actions rooted in presidential power. From mass telephone records collection to tapping the Internet backbone, and from Internet metadata collection to biometric tracking and social media monitoring, claims of national crisis have often enabled digital policies that have undermined civil liberties. Those policies quickly spread far beyond their initial justification. We have also seen presidential authorities misused to avoid the legislative process—and even used to try to intimidate courts and prevent them from doing their job to protect our rights.

So when the President threatens to use those same emergency authorities to try paying for a border wall after Congress has refused, we watch closely. And so should you.

National Emergencies and the Constitution

The tension created by the constitutional separation of powers among the president, Congress, and the courts during times of national emergency is not new. 

President Lincoln famously suspended habeas corpus during the American Civil War. Later, President Roosevelt authorized the detention and internment of 100,000 Japanese-Americans, a profound constitutional offense that ranks among our country’s most severe violations of our founding commitments. Finally, President Truman sought to use emergency powers to nationalize a steel mill to supply the armed forces during their “police action” in Korea.

The last of these prompted a lawsuit that has served as the touchstone for consideration of these questions ever since. The case is known as the Steel Seizure Cases or Youngstown Sheet & Tube Co. v. Sawyer (1952). In Youngstown, the Court ruled against President Truman, holding that he did not have the power to seize a privately owned steel mill.

Justice Jackson’s concurring opinion in Youngstown set forth the analytical framework that has come to define this area of law. It explains that executive power stands at its lowest ebb when confronting an explicit act of Congress denying the purported authority, as President Truman did when attempting to seize steel mills. In contrast, executive power attains maximal reach when authorized (either explicitly or by implication) by Congress, such as when Congress has authorized military action. In between, the executive branch has flexible authority within a “zone of twilight” on issues that Congress has not addressed.  

Flash forward to today. Congress has not appropriated funds to build the wall requested by the President. Given that Congress is the branch of government with the exclusive power to tax and spend, the most obvious way to characterize this refusal is as a rejection of the President’s request—placing the President at his “lowest ebb” of power under the Youngstown analysis.

Alternatively, Congress’ silence could be read to indicate that it hasn’t addressed the issue. Congress regularly grants the President an annual sum of funding for “discretionary” spending purposes, from which the administration could claim that funds have already been provided, placing the President’s acts within the “zone of twilight.” On the other hand, funds provided in discretionary budgets are designed to fill budgetary gaps in federal agencies’ regular operations. Construing congressional silence as assent would therefore be a stretch.

The choice between these two possibilities will likely be the crux of any legal challenge to the President’s attempt to direct funds absent congressional appropriation. Because Congress exclusively wields the power to appropriate funds and declined to do so here, the courts should overturn any unilateral executive branch action, as they did in Youngstown.

Statutory Powers to Declare Presidential Emergencies

In addition to his constitutional powers, several statutory powers could conceivably be invoked by the President. Let's look at two we think are most likely:

First, Congress enacted the National Emergencies Act in 1976. The Act has been invoked dozens of times in the years since, usually to prohibit transactions with foreign powers engaged in violent conflict with U.S. military forces.

On the one hand, the National Emergencies Act does not specify any particular circumstances that must be satisfied before a president can invoke its extraordinary powers. On the other hand, the legislative history preceding its passage reveals Congress’ intent to limit executive fiat by terminating previous open-ended declarations of emergency and providing mechanisms for Congress to limit future declarations.

Until either the Senate or House passes a resolution to terminate a national emergency, however, the Act requires relatively little of a president invoking its authority. It requires the President to report periodically to Congress about any funds spent under a declaration of emergency.

This reporting requirement could be read to imply presidential authority to spend funds in the event of a bona fide emergency. Courts, however, should scrutinize the legitimacy of any claimed emergency, especially when Congress has affirmatively declined to appropriate the full sum sought by the President.

Second, the Robert T. Stafford Disaster Relief and Emergency Assistance Act provides a more well-tested authority for a president to spend funds without congressional appropriation. However, it has been used only in response to natural disasters, to enable temporary responses to unforeseen events. It has never justified spending funds that Congress has refused to appropriate in response to a previous executive branch request. It also requires the Governor of an affected state to request an emergency declaration from the president, which has yet to happen along the U.S. southern border. In fact, one Governor has done the opposite.

Like the Constitution, statutory powers do not justify a presidential declaration of emergency powers to build a proposed border wall. Given Congress’ refusal to appropriate funds for the president’s proposal, the courts should guard the separation of powers, and not defer to a potential presidential declaration of emergency.

Emergency Powers Have Enabled Secrecy and Mass Surveillance

Letting a supposed national emergency serve as a pretext for extraordinary executive powers outside of congressional approval has proven to be very dangerous for digital civil liberties. As noted above, emergencies have already served as the basis for creating several mass surveillance programs that EFF has worked hard to stop. These same claims have been used to justify tracking immigrants and other marginalized people, tracking that quickly spread far beyond its original justification.

Beyond the powers themselves, the secrecy that has come along with those efforts threatens our constitutional checks and balances by truncating legislative oversight and judicial review. The decade we’ve spent trying to get a public federal court to consider the government’s unconstitutional mass surveillance programs has informed our perspective on that problem.

Put simply, there is no national emergency exception to the Constitution, or to the key statutes that constrain executive branch authority here. Similarly, national emergencies have never legitimately justified the continual monitoring of hundreds of millions of Americans without a shred of suspicion. 

As the debate over border policy continues, we urge both the political branches and the judiciary to refuse to allow false pretenses about a national emergency to rip holes in our system of checks and balances. We know all too well that such holes can too easily be used to justify a further expansion of both surveillance and secrecy.

The Final Version of the EU's Copyright Directive Is the Worst One Yet

Wed, 02/13/2019 - 20:48

Despite ringing denunciations from small EU tech businesses, giant EU entertainment companies, artists' groups, technical experts, and human rights experts, and the largest body of concerned citizens in EU history, the EU has concluded its "trilogues" on the new Copyright Directive, striking a deal that—amazingly—is worse than any in the Directive's sordid history.

Take Action

Stop Article 13

Goodbye, protections for artists and scientists

The Copyright Directive was always a grab bag of updates to EU copyright rules, which are long overdue for an overhaul given that it's been 18 years since the last set of rules was ratified. Some of its clauses gave artists and scientists much-needed protections: artists were to be protected from the worst ripoffs by entertainment companies, and scientists could use copyrighted works as raw material for various kinds of data analysis and scholarship.

Both of these clauses have now been gutted to the point of uselessness, leaving the giant entertainment companies with unchecked power to exploit creators and arbitrarily hold back scientific research.

Having dispensed with some of the most positive versions of the Directive, the trilogues have also managed to make the (unbelievably dreadful) bad components of the Directive even worse.

A dim future for every made-in-the-EU platform, service and online community

Under the final text, any online community, platform or service that has existed for three or more years, or is making €10,000,001/year or more, is responsible for ensuring that no user ever posts anything that infringes copyright, even momentarily. This is impossible, and the closest any service can come to it is spending hundreds of millions of euros to develop automated copyright filters. Those filters will subject all communications of every European to interception and arbitrary censorship if a black-box algorithm decides their text, pictures, sounds or videos are a match for a known copyrighted work. They are a gift to fraudsters and criminals, to say nothing of censors, both government and private.

These filters are unaffordable by all but the largest tech companies, all based in the USA, and the only way Europe's homegrown tech sector can avoid the obligation to deploy them is to stay under ten million euros per year in revenue, and also shut down after three years.

America's Big Tech companies would certainly prefer not to have to install these filters, but the possibility of being able to grow unchecked, without having to contend with European competitors, is a pretty good second prize (which is why some of the biggest US tech companies have secretly lobbied for filters).

Amazingly, the tiny, useless exceptions in Article 13 are too generous for the entertainment industry lobby, and so politicians have given them a gift to ease the pain: under the final text, every online community, service or platform is required to make "best efforts" to license anything their users might conceivably upload, meaning that they have to buy virtually anything any copyright holder offers to sell them, at any price, on pain of being liable for infringement if a user later uploads that work.

News that you're not allowed to discuss

Article 11, which allows news sites to decide who can link to their stories and charge for permission to do so, has also been worsened. The final text clarifies that any link that contains more than "single words or very short extracts" from a news story must be licensed, with no exceptions for noncommercial users, nonprofit projects, or even personal websites with ads or other income sources, no matter how small.

Will Members of the European Parliament dare to vote for this?

Now that the Directive has emerged from the Trilogue, it will head to the European Parliament for a vote by the whole body, either during the March 25-28 session or the April 15-18 session, with elections scheduled for May.

These elections are critical: the Members of the European Parliament are going to be fighting an election right after voting on this Directive, which is already the most unpopular legislative effort in European history, and that's before the public gets wind of these latest changes.

Let's get real: no EU political party will be able to campaign for votes on the strength of passing the Copyright Directive—but plenty of parties will be able to drum up support to throw out the parties that defied the will of voters and risked the destruction of the Internet as we know it to pour a few million Euros into the coffers of media companies and newspaper proprietors—after those companies told them not to.

There's never been a moment where your voice mattered more

Watch this space. We will be working with digital rights activists and allies across the EU to make this upcoming Parliamentary vote an issue that every Member of the European Parliament is well informed on. Amid the lobbying of Big Tech and Big Media, we’ll be explaining what the Directive means for everyday Internet users. And together, we’re going to make sure that every MEP knows that the voters of Europe are watching them and taking note of how they vote.

All it takes is for you to speak up. Over four million Internet users have signed the petition against the Directive. If you can sign a petition, you can pick up the phone and call your MEP. Tell them why you’re against the Directive, what it means for you, and what you expect your representatives to do in the forthcoming plenary vote. It really is the last chance to make your voice heard.

Take Action

Stop Article 13

Entrepreneurs Tell USPTO Director Iancu: Patent Trolls Aren’t Just “Monster Stories”

Wed, 02/13/2019 - 17:31

Patent trolls aren’t a myth. They aren’t a bedtime story. Ask a software developer—they’re likely to know someone who has been sued or otherwise threatened by one, if they haven’t been themselves.

Unfortunately, the new director of the U.S. Patent and Trademark Office (USPTO) is in a serious state of denial about patent trolls and the harm they cause technologists everywhere. Today, a number of small business owners and start-up founders submitted a letter [PDF] to USPTO Director Andrei Iancu telling him that patent trolls remain a real threat to U.S. businesses. Signatories range from mid-sized companies like Foursquare and Life360 to one-person software enterprises like Ken Cooper's. The letter explains the harm, cost, and stress that patent trolls cause businesses.

Patent trolls aren’t a thing that happens once in a while, or an exception to the rule. Over the past two decades, troll litigation has become the rule. There are different ways to define exactly what counts as a “troll,” but by one recent measurement, a staggering 85 percent of recently filed patent lawsuits in the tech sector were filed by trolls.

That’s almost 9 out of 10 lawsuits being filed by an entity with no real product or service. Because the Patent Office issues so many low-quality software patents, the vast majority of these suits are brought by entities that played no role in the development of the real-world technology they attack. Instead, trolls use vague and overbroad patents to sue the innovators who create products and services. This is how we end up with patent trolls suing people for running an online contest or making a podcast.

Three Steps Forward, Two Steps Back

The news isn’t all bad. Reformers have made substantial progress in the fight against patent trolls. A string of positive Supreme Court decisions, beginning with the 2006 eBay v. MercExchange decision and continuing through 2014’s Alice v. CLS Bank ruling, has made it feasible to fight trolls in court. Meanwhile, the America Invents Act created a useful new process for challenging bad patents right in the patent office: the inter partes review.

Supreme Court decisions have made it harder for patent trolls to rope defendants into remote, inappropriate venues like the Eastern District of Texas; easier to award fees against patent owners who abuse the system; and perhaps most importantly, the Alice decision has made it easier to knock out bogus software patents more quickly.

Those victories haven’t solved the problem, but they have slowed the onslaught of litigation by patent trolls and the lawyers who help them. To focus on a single, prolific bad actor, take the shell company ArrivalStar, which later morphed into Shipping & Transit LLC. Several years ago, the Shipping & Transit lawsuit machine was able to scare private companies and even public transit agencies into coughing up $80,000 or more for valueless “licenses.” Shipping & Transit ultimately skimmed hundreds of thousands of dollars from cash-strapped cities and millions more from companies large and small. Later, bolstered by the new reforms, small companies fought back and won. Today, Shipping & Transit is bankrupt, can’t sue anyone, and admits that patents it used to demand millions in licensing fees are worth just $1.

The victories we’ve won are why the trolls and their allies are out in force this year. They’re pushing awful legislation like the STRONGER Patents Act, which would roll back just about every reform that has given victims of trolling a fighting chance. The trolls and abusive companies that have profited off the patent system have now won a considerable prize. The man who runs the office where patents are granted has said clearly that further reforms aren’t necessary. It’s disappointing, but considering their over-the-top lobbying efforts, it isn’t surprising.

Trolls Are All Too Real

Director Iancu has gone much further than saying he’s skeptical of reform. Iancu appears to question whether patent trolls even exist. In a recent speech, he called accounts of patent trolling “scary monster stories.” Iancu clearly isn’t listening to the stories of small businesses hit by patent demands week after week. But we do hear from those businesses—over and over again. We won’t stand idle while Iancu denies basic facts about what’s going on in the U.S. patent system.

It isn’t hard to find entrepreneurs who have been hurt by patent trolls. We highlight just a few of those innovators in our “Saved by Alice” series. These business owners endured years of stress, huge costs, and sometimes bankruptcy because they were threatened by patent trolls that produce nothing. And they are the few who are brave enough to speak up; many more stay silent, afraid of being targeted with yet another expensive lawsuit.

These aren’t myths. The flood of lawsuits we witness isn’t an opinion. The cases are real and the damage they do to defendant companies is undeniable. Iancu is choosing to ignore this situation, to satisfy his audience. And the audience he’s chosen says it all. When Iancu called patent trolls “monster stories,” he was speaking to a gathering of lawyers and judges in the Eastern District of Texas—the heart of the problem. The signaling couldn’t have been more clear. Iancu is working to overturn hard-won reforms and to re-open the spigot of patent trolling dollars that flows into that skewed venue.

When Iancu hails the innovation produced by the U.S. tech sector, he’s absolutely correct. Innovation in software and tech is everywhere we look. But patent trolls are there, too, and easy to find. When it comes to patents, the magical thinking isn’t coming from reformers. Rather, it’s on full display at the exclusive conferences that Iancu is speaking at, surrounded by patent owners, patent lawyers, and patent-licensing insiders. We’re in danger of heading back to a wrongheaded mentality that “more patents equals more innovation.” That’s the real myth.

French Data Protection Authority Takes on Google

Wed, 02/13/2019 - 14:34

France’s data protection authority is first out of the gate with a big decision regarding a high-profile tech company, and every other enforcer in Europe is taking notes. On January 21, France’s CNIL fined Google 50 million euros (about 57 million U.S. dollars) for breaches of the General Data Protection Regulation (GDPR). The decision relates to Google’s intrusive ad personalization systems and its inadequate systems of notice and consent when users create accounts for Google services on Android devices.

Since the GDPR came into effect on May 25, 2018, many companies have simulated compliance with the law while manipulating users into granting consent by means of deceptive interface design and behavioral nudging. Any national data protection authority tempted to give a major company a free pass will now see its approach critically compared with the CNIL’s.

Hopefully, the CNIL’s recent decision is a harbinger of a robust enforcement approach which will deliver critical privacy protections to users.

The Complaints from Privacy Advocates

Under the GDPR, processing of personal data is only allowed where there is a “legal basis,” such as the consent of the user, and users are granted extensive rights over their data. The CNIL found Google in breach of the law’s transparency and information requirements, and as a result found invalid the so-called “consent” that Google sought to rely upon.

The CNIL’s investigation was prompted by two complaints from digital rights organizations, None of Your Business (NOYB) and La Quadrature du Net (LQDN). NOYB was established by data protection activist Max Schrems, and the group filed similar complaints against Android, Facebook, WhatsApp, and Instagram. NOYB objected to the privacy policy that Android users were asked to agree to, arguing that the consent obtained was invalid and the processing thus illegal.

LQDN’s complaint addresses the consent process around the creation of an account to access Google services. LQDN argued that Google does not have a valid legal basis for using consumer data for the personalization of content, the behavioral analysis of users, and the targeting of ads on YouTube, Gmail, and Google Search.

Invalid Transparency and Information

The GDPR places much importance on companies informing users about how their data is used and what rights they have to intervene in the processing. Article 13 specifies information that must be disclosed to the user before any processing takes place, such as the nature and purpose of collection, and how long the data will be retained. Article 12 requires that this information be conveyed “in a concise, transparent, intelligible and easily accessible form.” The aim is to ensure that users have control over what data is taken from them, and how it is used and shared.

The CNIL found that Google violated its duties of transparency and information. Specifically, Google obfuscated “essential information” about data processing purposes, data storage periods, and categories of personal information used for ads personalization. For example, the relevant information was “excessively disseminated” over multiple documents, and required users to click through five or six pages. Moreover, information was “not always clear” due to “generic and vague” verbiage. Yet the “massive and intrusive” scope and detail of the data collected by Google from its array of services and sources placed an increased obligation on the company to make its practices clear and comprehensible to users.

Invalid Consent

In its communications with the CNIL, Google asserted that the legal basis for its personalization of ads was the consent of the user. The CNIL rejected this assertion, for two reasons. First, the CNIL found that due to the breaches of Articles 12 and 13 discussed above, the consent acquired by Google was not properly informed.


Second, the GDPR requires user consent to be specific and unambiguous, and the latter requires a positive act by the user to indicate agreement. Yet Google had pre-ticked the boxes allowing it to use web and app history for behavioral targeting, a method specifically excluded by the Regulation. The CNIL cites Article 29 Working Party guidance, which requires a user to take affirmative steps to consent. In the EU, the user must opt in: consent cannot be implied on the basis that users theoretically have a way of opting out.

Unanswered Questions

While the CNIL found Google in breach of the GDPR, it left unaddressed key arguments of the complainants. NOYB homes in on the imbalance of power between Android and individual users. Android’s dominance of the market, and the absence of effective alternatives, mean that users have little option but to “consent” or be excluded from the ecosystem. Recital 42 of the GDPR states that consent “should not be regarded as freely given if the data subject has no genuine or free choice or is unable to refuse or withdraw consent without detriment.” This is one more reason why companies must be required to offer access to their services even when users reject tracking and behavioral personalization.

LQDN challenges the practice of tying: making acceptance of personalized advertising a condition of access to Google’s services. Article 7(4) states that “when assessing whether consent is freely given, utmost account shall be taken of whether... the provision of a service is conditional on consent to the processing of personal data that is not necessary for the performance of that contract.” Behavioral analysis of user data for the personalization of advertising is not necessary to deliver mail or video hosting services to users. If tying is allowed, users will be confronted by cookie walls everywhere, requiring that they agree to tracking in exchange for access to services.

The Stakes Have Been Raised in Europe

The $57 million fine highlights the increased sanctions available under the GDPR. In November, the UK’s Information Commissioner’s Office imposed the then-maximum fine of £500,000 on Facebook for breaches uncovered as part of the Cambridge Analytica investigations under the old law. $57 million is certainly manageable for a company with a turnover of over €96 billion, but the ramifications of this decision do not end with the payment of the fine.

Google is the subject of multiple other investigations in Europe, and this is unlikely to be the last finding that it violated the GDPR. Google will have to remedy its violations, change its practices, and improve user privacy. This decision also sends a shot across the bow of other companies with worse transparency and fewer scruples about deceiving users into granting consent.

Google has four months to appeal this decision to the Conseil d’État, France’s supreme court for administrative matters.

Hearing Wednesday: EFF Asks Court to Unseal Phone Tap Order That Was Among Hundreds of Questionable Wiretaps Approved By California Court

Tue, 02/12/2019 - 19:23
EFF Client Targeted As Record Number of Wiretap Orders Raised Questions About the Legality of Riverside County’s Wiretapping Process

Riverside, California—On Wednesday, February 13, at 10:00 am, the Electronic Frontier Foundation (EFF) will ask a state court to unseal a wiretap order issued against individuals with no criminal records, to learn why their phones were tapped and whether the warrant authorization process was legitimate.

The order was among hundreds of questionable wiretaps issued by a single county in 2015, which accounted for over half of all reported wiretaps from California, and over one-fifth of all state wiretaps issued nationwide. The individuals were never notified that their phones were being tapped, despite a law requiring such notice within 90 days of the wiretap’s conclusion, and were never charged with any wrongdoing.

EFF and Sheppard, Mullin, Richter & Hampton LLP represent a targeted individual whose phone was ordered tapped by Judge Helios J. Hernandez. The judge authorized a record number of wiretaps during the 2015 calendar year, not a single one of which resulted in either a trial or a conviction. After a series of stories in USA Today uncovered Riverside County’s massive surveillance campaign and questioned the legality of the surveillance, watchdogs warned that the wiretaps likely violated federal law.

EFF Criminal Defense Attorney Stephanie Lacambra will argue that the wiretap order and its justification should be subjected to public scrutiny and oversight because of the questionable circumstances surrounding its issuance. Moreover, the First Amendment right of public access to court records should also apply to wiretap orders.

What:
Hearing in the Matter of the Application of Michael A. Hestrin, District Attorney of the County of Riverside, State of California, for an Order Authorizing the Interception of Wire Communications in Wiretap No. 15-409

When:
Wednesday, February 13, at 10:00 am

Where:
Superior Court of California, Riverside County
Riverside Hall of Justice
4100 Main St.
Department 64
Riverside, CA 92501

For EFF's motion to unseal in the case:
https://www.eff.org/document/client-motion-2

For more on this case:
https://www.eff.org/cases/riverside-wiretaps

Contact:
Stephanie Lacambra
Criminal Defense Staff Attorney
stephanie@eff.org

SEC’s Action Against Decentralized Exchange Raises Constitutional Questions

Tue, 02/12/2019 - 02:00

A recent public statement from the U.S. Securities and Exchange Commission implied that those engaged in writing and publishing code might need to worry about running afoul of securities laws. In its statement about the cease and desist order against the co-founder of decentralized cryptocurrency exchange EtherDelta, the SEC indicated that someone who simply “provides an algorithm” might be found to be running a securities exchange.  In the order itself, the SEC stated that EtherDelta’s creator had violated securities laws because he “wrote and deployed” code that he “should have known” would contribute to EtherDelta’s alleged violations. EFF today sent a public letter reminding the agency that writing and publishing code is a form of protected speech under the First Amendment, and that the courts don’t take kindly to government agencies requiring people to obtain licenses before exercising their free speech rights. 

EtherDelta was founded by Zachary Coburn. The SEC charged Coburn with running an unregistered national securities exchange, and he settled and agreed to pay over $300,000. While EFF takes no position on whether other aspects of EtherDelta violated the law or SEC regulations, EFF urged the SEC to clarify its dangerously broad language regarding potential liability for merely writing and publishing code.

Setting aside whatever other issues the SEC might have had with EtherDelta, there are many reasons why software that enables decentralized currency or other exchanges may be useful to consumers. That’s why it’s important that regulatory interventions today don’t stifle that innovation by claiming they can impose liability merely for the act of writing or distributing code. 

Centralized exchanges vest power in a central intermediary, which can freeze the funds of customers, block certain customers from the platform, or block specific transactions, with no obligation to provide affected customers with an appeals process. Centralized exchanges can suffer outages that prevent customers from accessing their digital currencies. They are also a target for criminals seeking to steal customer funds, and can themselves be run by unscrupulous individuals who abuse their access to customer funds and data.

For these and other reasons, EFF has called on cryptocurrency exchanges to adopt best practices around defending user rights, including issuing regular transparency reports. It’s also not surprising that regulators are cracking down on exchanges, and indeed EFF supports regulators stepping in to hold exchanges accountable for engaging in fraud, theft, and other misleading business practices.

But there’s also a technological response to the risks of centralized exchanges, and that’s where decentralized exchanges come in.

Decentralized exchanges allow for the exchange of digital currencies using smart contracts. Requests to sell and buy cryptocurrency can be submitted to a smart contract that matches and completes these transactions. Decentralized exchanges often don’t need to hold funds for customers; rather, customers can maintain possession of their cryptocurrency, and the decentralized exchange can automatically complete exchanges without taking possession of the assets. Decentralized exchanges thus do not need to hold a honeypot of money that might attract criminals, as centralized exchanges do, and cannot themselves steal funds. Furthermore, because trades are not approved by an individual or company, they cannot easily be censored by an intermediary.
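To illustrate the idea (and only the idea), here is a toy order-matching sketch in Python. Real decentralized exchanges run logic like this on-chain, for example in an Ethereum smart contract, and all names below are hypothetical:

```python
# Buy and sell orders meet in a deterministic matching function rather than
# at a custodial intermediary; settlement is executed by the contract.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:
    trader: str  # address of the party placing the order
    amount: int  # units of the asset to trade
    price: int   # price per unit the trader will accept

def match(buy: Order, sell: Order) -> Optional[int]:
    """Return the number of units traded, or None if the prices don't cross."""
    if buy.price < sell.price:
        return None
    return min(buy.amount, sell.amount)

# A buyer willing to pay 105 meets a seller asking 100: 8 units change hands.
print(match(Order("0xBuyer", 10, 105), Order("0xSeller", 8, 100)))
```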

Decentralized exchanges are in their earliest stages of development. They aren’t widely used right now, but this is an area of rapid research and innovation.  Many cryptographers and programmers are experimenting with smart contract systems to develop new systems to help users correct the power imbalances they face when engaging in online transactions. 

When describing its action against EtherDelta, the SEC argued that EtherDelta “provided a marketplace for bringing together buyers and sellers for digital asset securities through the combined use of an order book—a website that displayed orders—and a smart contract run on the Ethereum blockchain.” The SEC said it would take a “functional approach” to determining whether something was or was not an exchange that should register with the SEC: “The activity that actually occurs between the buyers and sellers—and not the kind of technology or the terminology used by the entity operating or promoting the system—determines whether the system operates as a marketplace and meets the criteria of an exchange under Rule 3b-16(a).”

The SEC wrote:

A system uses established non-discretionary methods if it provides a trading facility or sets rules.  For example, an entity that provides an algorithm, run on a computer program or on a smart contract using blockchain technology, as a means to bring together or execute orders could be providing a trading facility. As another example, an entity that sets execution priorities, standardizes material terms for digital asset securities traded on the system, or requires orders to conform with predetermined protocols of a smart contract, could be setting rules. Additionally, if one entity arranges for other entities, either directly or indirectly, to provide the various functions of a trading system that together meet the definition of an exchange, the entity arranging the collective efforts could be considered to have established an exchange.

 The SEC’s broad language about “an entity that provides an algorithm” could include cryptographic researchers and coders who are publishing ideas or code for debate and discussion, and working to develop systems that could benefit the public. Even if the individuals never deployed the code and never actively maintained or promoted a decentralized exchange, this overly broad language implies the SEC could well expect people merely writing and publishing code to register as a national securities exchange or face liability.


This isn’t just dangerous because it could quell research; it’s unconstitutional. The free speech protections enshrined in the First Amendment and upheld through court cases across decades include the rights of individuals to publish their ideas without preemptively obtaining a license. And code itself is speech. EFF began fighting to establish this principle in 1995, and a federal court struck down Commerce Department export restrictions on encryption technologies for exactly this reason, ruling that forcing cryptographic researchers to obtain a license before publishing their work was a gross violation of the freedom of expression. Other courts have recognized this principle, and today it is not controversial that writing code implicates the same First Amendment protections as writing in any other language.

Imposing liability on coders and researchers who fail to obtain a license as a national securities exchange prior to publishing their code on GitHub or describing their methods in a white paper would similarly be an affront to the First Amendment. This would unfairly hamper the expressive rights of countless coders and researchers in this space. As we explained to the SEC, restricting the ability to merely write and publish code would fail the well-established test for imposing restraints on speech. Courts have found that such restraints are justified only in unusual and extreme circumstances, and they must survive the most exacting scrutiny.

There were many facts in the specific case of EtherDelta that may have attracted the SEC’s attention, but the act of publishing code used in a decentralized exchange should not be one of them. As we said in our letter to the SEC, it is crucial that those engaged in developing protocols, verifying transactions through mining, and writing code are not held liable for operating, or assisting with operating, a securities exchange. Imposing regulatory or criminal liability for such activity would run afoul of the First Amendment. 

Read our letter.

Note: EFF is grateful to Prof. Joseph Bonneau, Lindsay Lin, and Marta Belcher for providing feedback related to blockchain technology and policy.

Enough of the 5G Hype

Mon, 02/11/2019 - 17:03

Wireless carriers are working hard to talk up 5G (Fifth Generation) wireless as the future of broadband. But don’t be fooled: they are focusing our attention on 5G to distract us from their willful failure to invest in a proven ultrafast option for many Americans: fiber to the home, or FTTH.

A recent FCC report on competition found that the future of high-speed broadband for most Americans will be a cable monopoly. Without a plan to promote fiber to the home, that’s not likely to change. In fact, because the 5G upgrade relies on fiber infrastructure, even 5G will likely be limited to areas that already have FTTH, meaning areas that already have a competitive landscape and, therefore, better service. The rest of us get monopolistic slow lanes.

Regulators and policymakers focusing only on 5G wireless are setting us up to fail. Only aggressive promotion of competitive high-speed wireline infrastructure can ensure that the 68 million Americans who currently have access to only one high-speed ISP will have options (and, in turn, the ability to vote with their wallets if service is slow), and that the 19 million Americans with no access at all can leapfrog into the 21st century of broadband access.

So here’s a better idea: ignore the 5G hype and focus instead on why large corporations such as Google, AT&T, and Verizon are content with abandoning fiber to the home.

Let’s break it down.

What Will 5G Do?

Fifth Generation (5G) wireless represents a more efficient way to manage wireless services through “network slicing,” which prevents multiple wireless networks from interfering with each other. Under previous wireless specifications, operators generally competed for the same network resources, but “slices” can operate as their own independent networks without cannibalizing one another. This allows for tailored wireless services for the Internet of Things, autonomous vehicles, broadband Internet access, and other services that have different needs.
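As a rough illustration (not a standards-accurate model), network slicing can be thought of as per-slice admission control, where demand in one slice cannot cannibalize capacity reserved for another. The slice names and numbers below are invented:

```python
# One physical network partitioned into virtual slices, each tuned for a
# different class of service and scheduled against its own capacity.
SLICES = {
    "iot":       {"capacity_mbps": 10,    "latency_ms": 500},  # sensors: low rate
    "vehicles":  {"capacity_mbps": 100,   "latency_ms": 5},    # autonomy: low latency
    "broadband": {"capacity_mbps": 10000, "latency_ms": 30},   # home access: throughput
}

def admit(slice_name: str, in_use_mbps: int, demand_mbps: int) -> bool:
    """Check new demand against this slice's capacity alone."""
    return in_use_mbps + demand_mbps <= SLICES[slice_name]["capacity_mbps"]

print(admit("iot", 2, 5))         # True: fits within the IoT slice
print(admit("vehicles", 0, 200))  # False: rejected without touching broadband
```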

Wireless carriers can deliver 5G services in basically two ways: moderate speeds with lots of coverage, or high speeds with very limited range (around 1,000 feet from the tower). The 5G services most often touted by industry are the limited-range, high-speed type, which will exist where fiber wireline infrastructure is present.

What Is 5G Not Going to Do?

Without a comprehensive plan for fiber infrastructure, 5G will not revolutionize Internet access or speeds for rural customers. So any time the industry asserts that 5G will revolutionize rural broadband access, it is doing more than hyping the technology; it is plainly misleading people. 5G will also not provide “fiber-like capabilities,” no matter what T-Mobile asserted before the Senate Judiciary Committee as grounds to justify its merger. While 100 Mbps (essentially 4G LTE speed) is nice, it is nothing compared to the 10-gigabit fiber networks that exist today (and were rolled out three years ago by a local government in Tennessee).

5G will also not be competitive with wireline Internet services. In the early Verizon home 5G broadband test cities, where the connections were marketed as faster than your cable broadband, it turned out that speeds averaged around 300 Mbps, with some peaking at gigabit speeds. By comparison, cable networks had already deployed gigabit download networks earlier in 2018 and have plans to upgrade to 10-gigabit networks (which they comically call 10G, because why not). In other words, 5G’s peak speeds match broadband speeds that are already in the process of being topped.
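A quick back-of-the-envelope comparison of those figures, using a hypothetical 50 GB download:

```python
# Time to download 50 GB at each advertised rate (decimal units, as ISPs use).
FILE_BITS = 50 * 8 * 10**9  # 50 gigabytes expressed in bits

for label, mbps in [("5G home average (300 Mbps)", 300),
                    ("Cable gigabit (1 Gbps)", 1_000),
                    ("Fiber (10 Gbps)", 10_000)]:
    minutes = FILE_BITS / (mbps * 10**6) / 60
    print(f"{label}: {minutes:.1f} minutes")  # ~22.2, ~6.7, ~0.7
```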

Ignore the Hype and Ask Why We Are Falling Behind 

Ultimately, though, all this hype over 5G networks has done little to answer fundamental questions about the U.S. broadband market: why are the largest ISPs not aggressively deploying fiber to the home despite its proven track record of profits and success? How is it that the United States, as the FCC has recently acknowledged, is starting to slow down on fiber to the home while the rest of the world is moving faster? Fiber to the home is cheap to upgrade to even higher speeds once it is laid, making it an infrastructure investment that will be good for decades to come. And its top speeds already dwarf even the loftiest 5G hype.

However, as long as policymakers and regulators at the state and federal level remain distracted by, and even willfully promote, 5G hype, we will not see any remedy for the high-speed competition problems in America right now. The end result is that we will be stuck with high-speed broadband that is slower, more expensive, and not universally accessible.

Video: How Justus Decher Beat a Patent Troll, With Help From Alice

Fri, 02/08/2019 - 17:07

Thousands of patent lawsuits are filed each year, and most of them deal with computer technology and software. Nearly 90 percent of those high-tech patent lawsuits are filed by shell companies that offer the public no products or services whatsoever. These patent-assertion entities, also known as “patent trolls,” simply enrich their owners by extracting money from operating companies.

In recent years, the Supreme Court has limited patent venue abuses, made fee-shifting easier, and most importantly, made it easier to throw out bad software patents in its Alice v. CLS Bank decision.

Patent troll lawsuits hurt real small businesses owned by regular people, most of whom don’t have the millions of dollars that would be required to defend themselves through a patent trial. We created our “Saved by Alice” project to tell their stories—the stories of company founders and entrepreneurs, who were able to keep on innovating because of the Alice decision.

We also sat down with some of those business owners to talk to them about their stories on camera. Today, we’re releasing a short video interview with Justus Decher, who told us about how he worked for years to build up his company—only to be threatened by a patent troll called MyHealth.

[Embedded video: https://www.youtube-nocookie.com/embed/SubKe8JYRJk] Privacy info. This embed will serve content from youtube-nocookie.com

Armed with U.S. Patent No. 6,612,985, which is nothing more than a bare-bones description of the concept of remote patient monitoring, MyHealth and its lawyers went around to working tele-health companies demanding cash. To Decher, faced with a $25,000 demand, it felt like extortion.

It’s more important than ever that we hear from the people who have been hurt by rampant patent troll lawsuits. This year, patent trolls and other companies profiting from a lopsided patent system are working hard to roll back the hard-won reforms that have made the patent system better. In the end, Decher’s company was saved when a court analyzed the MyHealth patent under the rules of Alice and promptly threw it out.

Now, patent trolls and their allies are pushing an upside-down narrative that says reforms have gone too far. Bona fide trolls are on the same side of this debate as patent holders like IBM, which claims that “the troll scare is largely just noise now.” Unfortunately, they have a powerful ally in the new Director of the U.S. Patent and Trademark Office, Andrei Iancu, who has gone so far as to say that patent trolls are just “monster stories,” and that “we have over-corrected and risk throwing out the baby with the bathwater.”

That’s the excuse they’re using to try to roll back Alice, and bring the U.S. patent system back to the bad old days. Right now, the Patent Office is seeking to impose new rules on patent examiners that will encourage them to ignore the Alice decision and give out more abstract software patents. We’re asking EFF supporters to file comments asking the Patent Office to reject that new guidance. 

TAKE ACTION

Tell the patent office to stop issuing abstract software patents

The purpose of Saved by Alice is to give voice to real inventors, who are sick and tired of being threatened by trolls who have contributed nothing to their industry. These aren’t “monster stories” or a “narrative” that pro-patent lobbyists can shift around—they’re real people. We hope you take a few minutes to listen to Justus Decher’s story.