Feed aggregator

Innocent Users Have the Most to Lose in the Rush to Address Extremist Speech Online

EFF - Fri, 09/20/2019 - 16:49
Internet Companies Must Adopt Consistent Rules and Transparent Moderation Practices

Big online platforms tend to brag about their ability to filter out violent and extremist content at scale, but those same platforms refuse to provide even basic information about the substance of those removals. How do these platforms define terrorist content? What safeguards do they put in place to ensure that they don’t over-censor innocent people in the process? Again and again, social media companies are unable or unwilling to answer these questions.

A recent Senate Commerce Committee hearing regarding violent extremism online illustrated this problem. Representatives from Google, Facebook, and Twitter each made claims about their companies’ efficacy at finding and removing terrorist content, but offered very little real transparency into their moderation processes.

Facebook Head of Global Policy Management Monika Bickert claimed that more than 99% of terrorist content posted on Facebook is deleted by the platform’s automated tools, but the company has consistently failed to say how it determines what constitutes a terrorist—or what types of speech constitute terrorist speech.

This isn’t new. When it comes to extremist content, companies have been keeping users in the dark for years. EFF recently published a paper outlining the unintended consequences of this opaque approach to screening extremist content—measures intended to curb extremist speech online have repeatedly been used to censor those attempting to document human rights abuses. For example, YouTube regularly removes violent videos coming out of Syria—videos that human rights groups say could provide essential evidence for future war crimes tribunals. In his testimony to the Commerce Committee, Google Director of Information Policy Derek Slater mentioned that more than 80% of the videos the company deletes using its automated tools are down before a single person views them, but didn’t discuss what happens when the company takes down a benign video.

Unclear rules are just part of the problem. Hostile state actors have learned how to take advantage of platforms’ opaque enforcement measures in order to silence their enemies. For example, Kurdish activists have alleged that Facebook cooperates with the Turkish government’s efforts to stifle dissent. It’s essential that platforms consider the ways in which their enforcement measures can be exploited as tools of government censorship.

That’s why EFF and several other human rights organizations and experts have crafted and endorsed the Santa Clara Principles, a simple set of guidelines that social media companies should follow when they remove their users’ speech. The Principles say that platforms should:

  • provide transparent data about how many posts and accounts they remove;
  • give notice to users who’ve had something removed about what was removed, under what rules; and
  • give those users a meaningful opportunity to appeal the decision.

While Facebook, Google, and Twitter have all publicly endorsed the Santa Clara Principles, they all have a long way to go before they fully live up to them. Until then, their opaque policies and inconsistent enforcement measures will lead to innocent people being silenced—especially those whose voices we need most in the fight against violent extremism.

Facebook's Social Media Council Leaves Key Questions Unanswered

EFF - Thu, 09/19/2019 - 18:34

Facebook took a big step forward this week in its march to create an "oversight board" to help vet its more controversial takedown decisions, publishing more details about how it will work. Both Facebook and its users will be able to refer cases to the Board to request its review. Is this big step a big deal for online speech?

Maybe not, but it's worth paying attention. A handful of tech companies govern a vast amount of speech online, including the platforms we use to get our news, form social bonds, and share our perspectives. That governance means, in practice, making choices about what users can say, to whom. Too often—on their own or under pressure—the speech police make bad choices, frequently at the expense of people who already struggle to make their voices heard and who are underrepresented in the leadership of these companies.

EFF has proposed a few ways to improve the way speech is governed online, ever-vigilant to the fact that your freedoms can be threatened by governments, corporations, or by other private actors like online mobs. We must ensure that any proposed solution to one of those threats does not make the others even worse.

We have six areas of concern when it comes to this kind of social media council, which we laid out earlier this year in response to a broader proposal spearheaded largely by our friends at Article 19. How does Facebook's version stack up?

  • Independence: A subgroup of board members will be initially selected by Facebook, and they will then work with Facebook to recruit the rest (the goal is to ultimately have 40 members on the Board). Thus, Facebook will have a strong influence on the makeup of the first Board. In addition, the Board is funded through a "Trust," appointed and paid for by Facebook. This structure provides a layer of formal independence, but as a practical matter Facebook could maintain a great deal of control through its power to appoint trustees.
  • Roles: Some have argued that an oversight board should be able to shape a platform's community standards. We've worried that such a board might have no more legitimacy to govern speech than the company itself, and therefore should not be given the power to dictate new rules under the guise of independence. So we think an advisory role is more appropriate, particularly given that the Board is supposed to adhere to international human rights principles.
  • Subject matter: The Oversight Board is to interpret Facebook's policies, which will hopefully improve consistency and transparency, and may suggest improvements to the rules governing speech on Facebook. We hope the Board presses Facebook to improve its policies, just as a wide range of advocates do (and should continue to do).
  • Jurisdiction: One of the problems with corporate speech controls is that rules and expectations can vary by region. The Facebook proposal suggests that a panel for a given case will include someone from the relevant "region," but it is unclear how a group of eleven to forty Board members can adequately represent the diverse viewpoints of Facebook's global userbase.
  • Personnel: As noted, the composition of the Board remains an unknown. Facebook has said it will strive for a broad diversity of geographic, gender, political, social and religious representation and perspectives.
  • Transparency: It will certainly be a step forward if the Board's public opinions give us more insight into the rules that are supposed to govern speech at Facebook (the actual, detailed rules used internally, not the general rules made public on Facebook.com). We would like to see more information about what kinds of cases are being heard and how many requests for review the Board receives. New America has a good set of specific transparency suggestions.

In short, Facebook's proposal could improve the status quo. The transparency of the Board's decisions means that we will likely know more than ever about how Facebook is making decisions. It remains to be seen, though, whether Facebook can be consistent in its application of the rules going forward, how "independent" the Oversight Board can be, who will make up the Board, and whether the Board will take the necessary steps to understand local and subcultural norms. We and other advocates will continue to press Facebook to improve the transparency and consistency of its procedures for policing speech on its platform, as well as the substance of its rules. We hope the Oversight Board will be a mechanism to support those reforms and push Facebook towards better respect for human rights.

What it won't do, however, is fix the real underlying problem: Content moderation is extremely difficult to get right, and at the scale at which Facebook is operating, it may be impossible for one set of rules to properly govern the many communities that rely on the platform. As with any system of censorship, mistakes are inevitable. And although the ability to appeal is an important measure of harm reduction, it's not an adequate substitute for having fair policies in place and adhering to them in the first place.

EFF to Observe at United Nations General Assembly Leaders' Week Event

EFF - Thu, 09/19/2019 - 15:49

EFF has joined the advisory committee of the Christchurch Call to Eliminate Terrorist and Violent Extremist Content Online and will be represented at meetings near the United Nations General Assembly early next week. We have been involved in the process since May, when the government of New Zealand convened more than forty civil society actors in Paris for an honest discussion of the Call’s goals and drawbacks.

We are grateful to New Zealand’s government for working toward greater inclusion of civil society in the conversation around what to do about violent extremism. But we remain concerned that some of the governments and corporations involved seek to rid the internet of terrorist content regardless of the human cost. As a paper we released this summer in conjunction with Witness and Syrian Archive demonstrates, that cost is very real.

At the moment, companies are scrambling to respond to demands to remove extremist content from their platforms. In doing so, however, they risk removing other expression. That includes videos that might be used as evidence in war crimes tribunals; speech from opposition groups that share key identifiers with US-designated terrorist organizations; and, in some cases, benign imagery that happens to contain a banned symbol in the background. While companies have the right to remove extremist content, they must be transparent about their rules and what they remove, and offer users an opportunity to appeal decisions.

Our involvement in the Christchurch Call advisory committee is just one of several ways in which we’re engaged with this topic. We have also been observing the deliberations in the EU over the so-called terrorism regulation, and are watching the debate closely in the US as well. We will also continue our research into the impact of extremist speech regulations on human rights.

We have also spoken recently on the topic, at the Chaos Communications Camp in Germany, and will be speaking again soon at NetHui in Wellington, New Zealand.

Hearing Friday: Plaintiffs Challenging FOSTA Ask Court to Reinstate Lawsuit Seeking To Block Its Enforcement

EFF - Wed, 09/18/2019 - 15:54
Risk of Prosecution Has Caused Groups to Self-Censor, Platforms to Shut Out Legal Services

Washington D.C.—On Friday, Sept. 20, at 9:30 am, attorneys for five plaintiffs suing the government to block enforcement of FOSTA will ask a federal appeals court to reverse a judge’s decision to dismiss the case.

The plaintiffs—Woodhull Freedom Foundation, the Internet Archive, Human Rights Watch, and individuals Alex Andrews and Eric Koszyk—contend that FOSTA, a federal law passed in 2018 that expansively criminalizes online speech related to sex work and removes important protections for online intermediaries, violates their First Amendment rights.

Electronic Frontier Foundation (EFF) is counsel for the plaintiffs along with co-counsel Davis Wright Tremaine LLP, Walters Law Group, and Daphne Keller.

FOSTA, or the Allow States and Victims to Fight Online Sex Trafficking Act, makes it a felony to use or operate an online service with the intent to “promote or facilitate the prostitution of another person,” vague terms with wide-ranging meanings that can include speech that makes sex work easier in any way. FOSTA also expanded the scope of other federal laws on sex trafficking to include online speech, and reduced statutory immunities previously provided under Section 230 of the Communications Decency Act. The plaintiffs sued to block enforcement of the law because its overbroad language sweeps up Internet speech about sex, sex workers, and sexual freedom, including harm reduction information and speech advocating decriminalization of prostitution.

A federal judge dismissed the case, ruling that the plaintiffs lacked “standing” because they failed to prove a credible threat that they would be prosecuted for violating FOSTA. Because the court dismissed the case on procedural grounds, it did not rule on whether FOSTA is constitutional.

Attorney Robert Corn-Revere, counsel for the plaintiffs, will argue at a hearing on Sept. 20 that the plaintiffs don’t have to wait until they face prosecution before challenging a law regulating speech when, as here, the vague and overbroad prohibitions of the law are causing numerous speakers to censor themselves and their users. FOSTA specifically authorized enforcement by state prosecutors and private litigants, vastly increasing the risk of being sued under the statute and greatly exacerbating the speech-chilling effects of the law. FOSTA has also reportedly generated increased risks for sex workers and frustrated law enforcement efforts to investigate trafficking.

WHAT:
Oral argument in Woodhull Freedom Foundation v. U.S.

WHO:
Robert Corn-Revere of Davis Wright Tremaine LLP

WHEN:
Friday, Sept. 20, at 9:30 am

WHERE:
E. Barrett Prettyman U.S. Courthouse and William B. Bryant Annex
Courtroom 31
333 Constitution Avenue, NW
Washington, DC 20001

For more on this case:
https://www.eff.org/cases/woodhull-freedom-foundation-et-al-v-united-states
https://www.woodhullfoundation.org/our-work/fosta/

For more on FOSTA:
https://www.eff.org/deeplinks/2018/03/how-congress-censored-internet

Contact: David Greene, Civil Liberties Director, davidg@eff.org

Thanks For Helping Us Defend the California Consumer Privacy Act

EFF - Wed, 09/18/2019 - 14:31

The California Consumer Privacy Act will go into effect on January 1, 2020—having fended off a year of targeted efforts by technology giants who wanted to gut the bill. Most recently, industry tried to weaken its important privacy protections in the last days of the legislative session.

Californians made history last year when, after 600,000 people signed petitions in support of a ballot initiative, the California State Legislature answered their constituents’ call for a new data privacy law. It’s been a long fight to defend the CCPA against a raft of amendments that would have weakened this law and the protections it enshrines for Californians. Big technology companies backed a number of bills that each would have weakened the CCPA’s protections. Taken together, this package would have significantly undermined this historic law.

Fortunately, the worst provisions of these bills did not make it through the legislature—though it wasn’t for lack of trying. Lawmakers proposed bills that would have opened up loopholes in the law and made it easier for businesses to skirt privacy protections if they shared information with governments, changed definitions in the bill to broaden its exemptions, and made it easier for businesses to require customers to pay for their privacy rights.

These bills sailed through the Assembly but were stopped in July by the Senate Judiciary Committee, chaired by Senator Hannah-Beth Jackson. The final amendments to the CCPA that passed through the legislature last week make small changes to the law, and do not weaken its important protections.

We want to thank everyone who called or wrote to their lawmakers to protect the CCPA this year and amplified how important data privacy is to the people of California. Your voices are invaluable to our advocacy.

We also appreciate the time that lawmakers, our coalition partners, and other stakeholders devoted to discussions about these amendments. As a result of this hard work, the California State Legislature stood up for the privacy law that they passed last year.

Still, while the CCPA is important for Californians’ consumer data privacy, it needs to be stronger. EFF and other privacy organizations earlier this year advanced two bills to strengthen the CCPA, which met significant opposition from technology industry trade association groups. Most importantly, these bills would have improved enforcement by allowing consumers to bring their own privacy claims to court. We particularly thank Assemblymember Buffy Wicks, Sen. Jackson, and the California Attorney General’s Office for leading the charge to improve the CCPA in the legislature.

More than anything, this year’s CCPA fight shows that when voters speak up for their privacy, it makes a big difference with legislators. We look forward to continuing to work with legislators and our coalition partners to advance measures that improve everyone’s privacy. We also look forward to offering input on the Attorney General’s regulations for the CCPA, expected this fall. And as technology trade groups redouble their efforts to weaken state privacy laws or override them with a national law, we encourage everyone to keep pushing for strong consumer data privacy laws across the country.

Big Tech’s Disingenuous Push for a Federal Privacy Law

EFF - Wed, 09/18/2019 - 13:25

This week, the Internet Association launched a campaign asking the federal government to pass a new privacy law.

The Internet Association (IA) is a trade group funded by some of the largest tech companies in the world, including Google, Microsoft, Facebook, Amazon, and Uber. Many of its members keep their lights on by tracking users and monetizing their personal data. So why do they want a federal consumer privacy law?

Surprise! It’s not to protect your privacy. Rather, this campaign is a disingenuous ploy to undermine real progress on privacy being made around the country at the state level. IA member companies want to establish a national “privacy law” that undoes stronger state laws and lets them continue business as usual. Lawyers call this “preemption.” IA calls this “a unified, national standard” to avoid “a patchwork of state laws.” We call this a big step backwards for all of our privacy.

The question we should be asking is, “What are they afraid of?”

Stronger state laws

After years of privacy scandals, Americans across the political spectrum want better consumer privacy protections. So far, Congress has failed to act, but states have taken matters into their own hands. The Illinois Biometric Information Privacy Act (BIPA), passed in 2008, makes it illegal to collect biometric data from Illinois citizens without their express, informed, opt-in consent. Vermont requires data brokers to register with the state and report on their activities. And the California Consumer Privacy Act (CCPA), passed in 2018, gives users the right to access their personal data and opt out of its sale. In state legislatures across the country, consumer privacy bills are gaining momentum.

This terrifies big tech companies. Last quarter alone, the IA spent nearly $176,000 lobbying the California legislature, largely to weaken the CCPA before it takes effect in January 2020. Thanks to the efforts of a coalition of privacy advocates, including EFF, it failed. The IA and its allies are losing the fight against state privacy laws. So, after years of fighting any kind of privacy legislation, they’re now looking to the federal government to save them from the states. The IA has joined Technet, a group of tech CEOs, and Business Roundtable, another industry lobbying organization, in calls for a weak national “privacy” law that will preempt stronger state laws. In other words, they want to roll back all the progress states like California have made, and prevent other states from protecting consumers in the future. We must not allow them to succeed.

A private right of action

Laws with a private right of action allow ordinary people to sue companies when they break the law. This is essential to make sure the law is properly enforced. Without a private right of action, it’s up to regulators like the Federal Trade Commission or the U.S. Department of Justice to go after misbehaving companies. Even in the best of times, regulatory bodies often don’t have the resources needed to police a multi-trillion dollar industry. And regulators can fall prey to regulatory capture. If all the power of enforcement is left in the hands of a single group, an industry can lobby the government to fill that group with its own people. Federal Communications Commission chair Ajit Pai is a former Verizon lawyer, and he’s overseen massive deregulation of the telecom industry his office is supposed to keep in check.

The strongest state privacy laws include private rights of action. Illinois BIPA allows users whose biometric data is illegally collected or handled to sue the companies responsible. And CCPA lets users sue when a company’s negligence results in a breach of personal information. The IA wants to erase these laws and reduce the penalties its member companies can face for their misconduct in legal proceedings brought by ordinary consumers.

Real changes to the surveillance business model

We don’t know what the IA’s final legislative proposal will say, but its campaign website is thick with weasel words and equivocation. For example, the section on “Controls” says:

Individuals should have meaningful controls over how personal information they provide to companies is collected, used, and shared, except where that information is necessary for the basic operation of the business[.]

The “basic operation” of data brokers involves collecting and selling personal data without your consent. Does that mean you shouldn’t be able to stop them?

The rest of IA’s proposals follow the same pattern. The section on “transparency” says that users should be able to know the “categories of entities” that their data is shared with, but not the names of actual companies or people that receive it. This will make it unnecessarily difficult for people to trace how their personal information is bought and sold. The section on “access” says that users’ ability to access their data should not “unreasonably interfere with a company’s business operations.” Again, if a business depends on gathering data about people without their knowledge, will users ever be able to access their information? Sometimes, exercising your privacy rights will mean “interfering” with a company’s business.

The bottom line is that tech companies are happy for Congress to enact a privacy law—as long as it doesn’t affect their “business operations” in any way. In other words, they’d like a privacy law that doesn’t change anything at all.

The Internet Association knows which way the wind is blowing. Across the country, people are fed up with Big Tech’s empty promises and serial mishandling of personal data. They want real change, and state legislatures are listening. We must allow states to continue passing innovative new privacy laws. Any federal privacy legislation needs to build a floor, not a ceiling.
