EFF's Deeplinks Blog: Noteworthy news from around the internet

Hardwood Floors, Natural Light and the Right to Choose Your ISP

Tue, 12/11/2018 - 12:43

Your landlord is prohibited from making deals that restrict you to a single video provider, and those prohibitions should apply to your broadband service as well. Yet, across the country, tenants remain locked into a single choice. In January of 2017, San Francisco became the first city to take action toward filling in the loopholes that enable anti-competitive practices. Will 2019 see more cities adopting similar protections?

Large corporate ISPs, looking to lock out competition, have cultivated landlords' dependence on practices that exploit these loopholes in the FCC’s prohibition on exclusive access agreements, most simply by denying physical access to any provider but their preferred ISP. Building owners and real estate investment trusts may charge prohibitive "door fees," participate in ISP revenue-sharing schemes, or enter into exclusive marketing agreements. While ostensibly legal, these practices often produce exactly the lack of choice and the chilling of innovation that the FCC intended to curtail.

Proposed revenue share for building owners. Image Source: Wired



Along with EFF, residents, community groups, and local ISPs are already looking to break this corporate ISP stranglehold in neighboring Oakland. Media Alliance Executive Director Tracy Rosenberg notes, “Letting tenants use their choice of providers creates better services for all by forcing the big companies to compete. It prevents them from tying up the market by crafting exclusive deals with big landlords.” Rosenberg adds, “It can also let you put your values into action by getting Internet services from companies that agree to abide by net neutrality, or have better data privacy practices and don't sell your data or voluntarily respond to government information requests.”

“Letting tenants use their choice of providers creates better services for all by forcing the big companies to compete." - Tracy Rosenberg, Media Alliance

Oakland resident J. Oswald, who has been instrumental in rallying support for the adoption of an ordinance that builds on the template set by San Francisco, believes that lack of awareness is at the heart of why this practice has continued for so long. “Most people aren't even aware they are discreetly being ripped off. When I first read about the landlord-ISP payola, I couldn't comprehend why this hijacking was allowed to happen.” Property managers' lack of awareness of the FCC’s prohibitions also seems to play a key role in how exclusive marketing agreements become exclusive access agreements—in practice if not in name. In a piece titled The New Payola, Wired highlighted how companies like Comcast encourage this confusion by sending letters to building staff at properties where they have exclusive marketing arrangements, letters that include phrases such as “under our agreement they are not permitted to do anything on property or that is coordinated with property owner or management.” It’s not difficult to see how a property manager could read that exclusive right as extending well beyond what the FCC regulations permit.

ISPs with exclusive access to properties have little reason to offer residents competitive pricing, improved customer service, or better selection and quality of services. Choice of Communication Service Provider ordinances like San Francisco’s put the power to choose back in the hands of consumers. Should Oakland and other cities adopt a similar ordinance, landlords would be required to provide access to qualified ISPs, provided the ISP can show that at least one resident is seeking service through it. The ordinance requires that property owners be given reasonable notice before a communication service provider inspects the property or installs equipment, and it takes great care to assure proper maintenance of the owner's property, while ensuring that any refusal of access is in the interest of protecting the safety, functioning, and appearance of the property, rather than an incumbent ISP's profits.

EFF supported the adoption of San Francisco's ordinance two years ago. We stand with Oakland residents, community groups and advocates such as Sudo Mesh and Media Alliance, and privacy- and net neutrality-respecting ISPs such as Monkey Brains, in calling for an end to the broadband oligopoly created by pay-to-play schemes. Across the U.S., local members of the Electronic Frontier Alliance (EFA) are working to ensure that their neighbors' online rights and privacy are respected and supported. Visit eff.org/fight to find your local ally, and support returning the power of choice to thousands of renters, instead of lining the pockets of the largest ISPs and real estate investment trusts.

Four million Europeans' signatures opposing Article 13 have been delivered to the European Parliament

Mon, 12/10/2018 - 22:07

Lawmakers in the European Union (EU) often lament the lack of citizen engagement with the complex policy questions they wrestle with in Strasbourg and Brussels, so we assume they will be delighted to learn that more than 4,000,000 of their constituents have signed a petition opposing Article 13 of the new Copyright in the Digital Single Market Directive. They oppose it for two main reasons: because it will inevitably lead to the creation of algorithmic copyright filters that only US Big Tech companies can afford (shrinking the field of competitors and making it harder for working artists to negotiate better deals), and because these filters will censor enormous quantities of legitimate material, thanks to inevitable algorithmic errors and abuse.

Currently, the Directive is in the "trilogue" phase, where European national governments and the EU negotiate its final form behind closed doors. We're told that the final language may emerge as soon as this week, with the intention of rushing a vote before Christmas, despite the absolute shambles that the negotiations have made of the text.

On Monday, a delegation from the signatories officially presented the Trilogue negotiators with the names of the 4,000,000+ Europeans who oppose Article 13. These 4,000,000 are in esteemed company: Article 13 is also opposed by the “father of the Internet,” Vint Cerf; the creator of the Web, Tim Berners-Lee; and more than 70 of the Internet's top technical experts, not to mention Europe's largest sports leagues and film studios. Burgeoning movements opposing the measure have sprung up in Italy and Poland.

With so much opposition, it’s time for negotiators to recognize there's no hope of salvaging Article 13. Much of the new Copyright Directive is a largely inoffensive slate of much-needed technical tweaks to European copyright, which has not had a major revision since 2001. At this point, the entire Directive is in danger of going down in flames to salvage an unworkable, absurd, universally-loathed proposal. Let's hope that the Trilogue negotiators understand that and take steps to save the Directive (and Europe, and the Internet) from this terrible proposal.

Human Rights Groups to Sundar Pichai: Listen to Your Employees and Halt Project Dragonfly

Mon, 12/10/2018 - 21:17

EFF, as part of a coalition of more than sixty human rights groups led by Human Rights Watch and Amnesty International, still has questions for Sundar Pichai, Google’s CEO. Leaks and rumors continue to spread from Google about “Project Dragonfly,” a secretive plan to create a censored, trackable search tool for China. Media reports based on sources within the company have stated that the project was being readied for a rapid launch, even as it was kept secret from Google’s own security and privacy experts.

These stories undermine the vague answers we were given in previous correspondence. On the eve of Pichai being called before the House Judiciary Committee, we have reiterated our profound concern and jointly called on Google to halt Project Dragonfly completely.

Silicon Valley companies know how dangerous it can be to enter markets without considering the human rights implications of what they do. A decade ago, following Yahoo’s complicity in the arrest and detention of journalist Shi Tao, and Google’s own fumbles in creating a Great Firewall-compatible search service, companies like Microsoft, Google, and Yahoo agreed to work with independent experts in the Global Network Initiative to stave off the use of new technology to commit human rights violations. Members of the U.S. Congress concerned about Google’s and other tech companies’ cooperation with foreign governments have been supportive of this open, cautious approach.

But under Pichai’s leadership, Google appears to have ignored not just outside advice; the company has apparently ignored the advice of its own privacy and security experts. An Intercept article based on statements made by four people who worked on Project Dragonfly noted that Google’s head of operations in China “shut out members of the company’s security and privacy team from key meetings about the search engine … and tried to sideline a privacy review of the plan that sought to address potential human rights abuses.”

If that description is accurate, Google is throwing out not just human rights experts and Congress’ own advice regarding its entry into China; it’s throwing away its own engineers’ guidance.

The coalition's open letter states:

Facilitating Chinese authorities’ access to personal data, as described in media reports, would be particularly reckless. If such features were launched, there is a real risk that Google would directly assist the Chinese government in arresting or imprisoning people simply for expressing their views online, making the company complicit in human rights violations. This risk was identified by Google’s own security and privacy review team, according to former and current Google employees. Despite attempts to minimize internal scrutiny, a team tasked with assessing Dragonfly concluded that Google “would be expected to function in China as part of the ruling Communist Party’s authoritarian system of policing and surveillance”, according to a media report.

It was Google’s engineering-led management that changed its mind about entering the Chinese market a decade ago. Back then, it stepped back from tying Google’s future to that of China’s surveillance state. That took bravery, and a deep understanding of the power and responsibility the company’s team bears when wielding its technology. We hope its CEO shows similar bravery tomorrow by coming clean about Project Dragonfly to the Judiciary Committee, rather than hiding behind vague descriptions and promises that are made in public but broken in secret.

Social Justice Organizations Challenge Retention of DNA Collected from Hundreds of Thousands of Innocent Californians

Mon, 12/10/2018 - 13:02
California Arrestees’ DNA Profiles Become Part of Federal Database, Accessible to Law Enforcement Across the Country, Even for Those Not Convicted of Any Crime

San Francisco - Two social justice organizations—the Center for Genetics and Society and the Equal Justice Society—and an individual plaintiff, Pete Shanks, have filed suit against the state of California for its collection and retention of genetic profiles from people arrested but never convicted of any crime. The Electronic Frontier Foundation (EFF) and the Law Office of Michael T. Risher represent the plaintiffs. The suit argues that retention of DNA from innocent people violates the California Constitution’s privacy protections, which are meant to block overbroad collection and unlawful searches of personal data.

“One-third of people arrested for felonies in California are never convicted. The government has no legitimate interest in retaining DNA samples and profiles from people who have no felony convictions, and it’s unconstitutional for the state to hold on to such sensitive material without any finding of guilt,” said Marcy Darnovsky, Executive Director at the Center for Genetics and Society.

While California has long collected DNA from people convicted of serious felony offenses, in 2009 the state doubled down on this policy and mandated DNA collection from every single felony arrestee, including those later determined to be innocent. The intimate details that can be revealed by a person’s DNA only increase as technology develops, exposing plaintiffs to ever-heightening degrees of intrusiveness. After collection, the DNA is analyzed and uploaded to the nationwide Combined DNA Index System, or “CODIS,” which is shared with law enforcement across the U.S.

DNA identification is widely but mistakenly seen as a “fool-proof” technology. Studies and real-life cases have shown that there are myriad ways that it can implicate innocent people for crimes, ranging from crime-lab sample mix-ups and sample contamination by forensic collectors, to subjective misreading of complex mixtures containing genetic material from multiple donors, to selective presentation of the evidence to juries.

Including an individual’s DNA in CODIS increases the chance that they could wrongly become a suspect in a criminal case. And because of the deep racial disparities that plague our criminal justice system, DNA collection and retention practices disproportionately put people of color at risk of mistaken arrest and conviction.

“The overexpansion of the CODIS database and California’s failure to promptly expunge profiles of innocent arrestees exploits and reinforces systemic racial and socio-economic biases,” said Lisa Holder, Interim Legal Director at the Equal Justice Society. “We want the court to recognize that California’s DNA collection and retention practices are unfairly putting already vulnerable poor communities and people of color at even greater risk of racial profiling and law enforcement abuse.”

California allows people who were never convicted of a felony to apply to have their DNA expunged from the system. But the existing statutory process is lengthy and uncertain, and many people may not even know it exists because of inadequate notice requirements. While an estimated 750,000 individual profiles gathered over the last decade could be eligible to be removed from the database, only 1,510 expungement requests have been made, and only 1,282 were granted. The indefinite retention of hundreds of thousands of DNA profiles from people who are acquitted or never charged violates the California Constitution, which affords both a right to privacy and a right against unlawful searches and seizures that are specifically aimed at protecting people from the government’s overbroad retention of personal information.

“Our DNA contains our entire genetic makeup—private and intensely personal information that maps who we are and where we come from. The state’s failure to automatically expunge DNA samples and profiles from the hundreds of thousands of Californians who were not ultimately convicted of a crime is unconstitutional,” said EFF Staff Attorney Jamie Lee Williams. “It’s time for the state to start honoring the privacy rights guaranteed to all Californians.” 

For the full complaint in Center for Genetics and Society v. Becerra:
https://www.eff.org/document/complaint-49

Contact: Jamie Lee Williams, Staff Attorney, jamie@eff.org

TSA’s Roadmap for Airport Surveillance Moves in a Dangerous Direction

Fri, 12/07/2018 - 19:21

The Transportation Security Administration has set out an alarming vision of pervasive biometric surveillance at airports, which cuts against the right to privacy, the “right to travel,” and the right to anonymous association with others.

The FAA Reauthorization Act of 2018, which included language that we warned would provide implied Congressional endorsement to biometric screening of domestic travelers and U.S. citizens, became law in early October. The ink wasn’t even dry on that bill when the Transportation Security Administration (TSA) published its Biometrics Roadmap for Aviation Security and the Passenger Experience, detailing TSA’s plans to work with Customs and Border Protection (CBP) to roll out increased biometric collection and screening for all passengers, including Americans traveling domestically.

This roadmap appears to latch on to a perceived acceptance of biometrics as security keys while ignoring the pervasive challenges with accurately identifying individuals and the privacy risks associated with collecting massive amounts of biometric data. Furthermore, it provides no strategy for dealing with passengers who are unfairly misidentified.

Worst of all, while the roadmap explicitly mentions collaborating with airlines and other partners inside and outside the government, it is alarmingly silent on how TSA plans to protect a widely distributed honeypot of sensitive biometric information ripe for misuse by identity thieves, malicious actors, or even legitimate employees abusing their access privileges.

TSA PreCheck is Not a Blank Check

The roadmap proposes significant changes to what the government can do with data collected from more than 5 million people in the TSA PreCheck program. It also proposes new programs to collect and use biometric data from American travelers who haven’t opted into the PreCheck program.

The TSA PreCheck program has long been billed as a convenient way for travelers to cut down on security wait times and speed through airports. All a traveler has to do is sign up, pay a fee, and allow TSA to collect fingerprints for a background check. However, the roadmap outlines TSA’s plans to expand the use of those prints beyond the background check to other purposes throughout the airport, such as security at the bag drop or identity verification at security checkpoints.

TSA has already rolled this out as a pilot program. In 2017, at Atlanta’s Hartsfield-Jackson Airport and Denver International Airport, TSA used prints from the PreCheck database and a contactless fingerprint reader to verify the identity of PreCheck-approved travelers at security checkpoints at both airports. TSA now proposes to make the pilot program permanent and to widen the biometrics used to include face recognition, iris scans, and others.

Even more concerning, the roadmap outlines a strategy to capture biometrics from American travelers who haven’t enrolled in PreCheck and who never consented to any biometric data collection from TSA. Instead of giving passengers the option to opt in, TSA plans to partner and share information with other federal and state agencies like the FBI and state Departments of Motor Vehicles to get the biometric information they want.

While Congress has authorized a biometric data collection exit program for foreign visitors—supposedly to help monitor visa compliance by using biometrics to track foreigners leaving the country—the roadmap explicitly outlines plans for TSA and CBP to collect any biometrics they want from all travelers—American or foreign, international and domestic—wherever they are in the airport. That data will be stored in a widely shared database and could be used to track people outside the airport context. For example, TSA PreCheck and Clear have already begun using their technology at stadiums to “allow” visitors faster entry.

This is a big, big change. It is unprecedented for the government to collect, store, and share this kind of data, with this level of detail, with this many agencies and private partners. We know that security lines are a huge pain, but we are concerned that travelers getting used to biometric tracking in the airport context will be less concerned about tracking in other contexts and eventually throughout society at large.

Device Security and National Security Are Not the Same

The roadmap also makes the huge assumption that people will not object to this expanded collection. It states that “popular perceptions [of biometrics] have evolved to appreciate the convenience and security biometric solutions can offer in the commercial aviation sector.” In other words, it claims that travelers who use biometrics like fingerprints and face recognition to unlock their phones and laptops will be less concerned about Department of Homeland Security agencies collecting biometrics to store in government databases for unspecified, myriad uses.

The problem with this claim is that those two things are not the same.

Apple software, for one example, allows consumers to use biometrics (currently, fingerprints and faceprints) to unlock their devices. However, Apple has specifically built in privacy and security protections that prevent the biometric data from being stolen. Apple does not enable third party software to access the original biometric data. Plus, unlike federal agencies, Apple stores the original biometric information on your phone, not in a central, searchable database intended for use by multiple government and private partners over many years.

Additionally, TSA seems to be ignoring the risk that relying heavily on biometric data for identification may actually create new national security risks that the federal government is ill-equipped to handle. For example, India’s infamous Aadhaar biometric database, which was built by the Indian government to reduce corruption and expanded for use by other public and private groups, keeps getting hacked. It is not only cheap to buy the information of one of the 1.19 billion people in the database, but the hacks also allow for new information to be entered into the database. Rather than increasing security, India’s biometric database created more problems and opportunities for corruption.

Implementation Issues and Cost Overruns

Finally, this roadmap glosses over the weaknesses of facial recognition technology as a means of identifying travelers and ignores the challenges CBP has already faced rolling out its biometric exit program.

We’ve written many times before about the significant accuracy problems with current face recognition software, especially for non-white and female people. For example, earlier this summer the ACLU published a test of Amazon’s facial recognition program, comparing the official photos of 435 Members of Congress with publicly available mugshots. The ACLU found 28 false matches, even in this relatively small data set.

CBP has claimed a 98% accuracy rating in its pilot programs, even though the Office of the Inspector General could not verify those numbers. According to the FAA, 2.5 million passengers fly through U.S. airports every day, meaning that even a 2% error rate would cause tens of thousands of people to be misidentified every day.
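
To put that error rate in concrete terms, here is a quick back-of-the-envelope calculation using only the figures cited above; the 2% rate is simply the flip side of CBP's claimed 98% accuracy, not a measured failure rate for any particular deployment.

```python
# Back-of-the-envelope estimate using the figures cited above: ~2.5 million daily
# passengers (FAA) and a 2% error rate (the flip side of CBP's claimed 98%
# accuracy). This is an illustration, not an official TSA or CBP statistic.
daily_passengers = 2_500_000
error_rate = 0.02

misidentified_per_day = daily_passengers * error_rate
print(f"{misidentified_per_day:,.0f} travelers misidentified per day")  # 50,000
```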

TSA’s roadmap does not acknowledge these accuracy problems, much less outline an efficient way to allow wrongly identified travelers to complete their trips. Additionally, the roadmap does not acknowledge the need to allow travelers to opt out of the system.

But even if the claims about the advances in biometric software and technology are true, the Office of the Inspector General has also reported that CBP consistently and substantially underestimated the cost of its biometric exit program to the American taxpayer. To close some of the funding gaps, CBP would have to depend on airports and airlines to purchase the necessary biometric equipment and to provide staff to implement the program. In short, for CBP and TSA to achieve their goals, they must force American travelers to hand over their biometric data to private companies.

What’s Next

TSA should not move forward on this plan without addressing the serious security concerns and without providing a reliable, convenient way for travelers to opt out of the program. Even if biometrics provided a reliable identification system for travelers, the kind of system and database the roadmap outlines could make it more difficult for people to travel, in direct conflict with the agency’s mission “to protect the nation's transportation systems to ensure freedom of movement for people and commerce.”

Facebook’s Sexual Solicitation Policy is a Honeypot for Trolls

Fri, 12/07/2018 - 14:46

Facebook just quietly adopted a policy that could push thousands of innocent people off of the platform. The new “sexual solicitation” rules forbid pornography and other explicit sexual content (which was already functionally banned under a different policy), but they don’t stop there: they also ban “implicit sexual solicitation,” including the use of sexual slang, the solicitation of nude images, discussion of “sexual partner preference,” and even expressing interest in sex. That’s not an exaggeration: the new policy bars “vague suggestive statements, such as ‘looking for a good time tonight.’” It wouldn’t be a stretch to think that asking “Netflix and chill?” could run afoul of this policy.

The new rules come with a baffling justification, seemingly blurring the line between sexual exploitation and plain old doing it:

[P]eople use Facebook to discuss and draw attention to sexual violence and exploitation. We recognize the importance of and want to allow for this discussion. We draw the line, however, when content facilitates, encourages or coordinates sexual encounters between adults.

In other words, discussion of sexual exploitation is allowed, but discussion of consensual, adult sex is taboo. That’s a classic censorship model: speech about sexuality being permitted only when sex is presented as dangerous and shameful. It’s especially concerning since healthy, non-obscene discussion about sex—even about enjoying or wanting to have sex—has been a component of online communities for as long as the Internet has existed, and has for almost as long been the target of governmental censorship efforts.

Until now, Facebook has been a particularly important place for groups who aren’t well represented in mass media to discuss their sexual identities and practices. At the very least, users should get the final say about whether they want to see such speech in their timelines.

Overly Restrictive Rules Attract Trolls

Is Facebook now a sex-free zone? Should we be afraid of meeting potential partners on the platform or even disclosing our sexual orientations?

With such broadly sweeping rules, online trolls can take advantage of reporting mechanisms to punish groups they don’t like.

Maybe not. For many users, life on Facebook might continue as it always has. But therein lies the problem: the new rules put a substantial portion of Facebook users in danger of violation. Fundamentally, that’s not how platform moderation policies should work—with such broadly sweeping rules, online trolls can take advantage of reporting mechanisms to punish groups they don’t like.

Combined with opaque and one-sided flagging and reporting systems, overly restrictive rules can incentivize abuse from bullies and other bad actors. It’s not just individual trolls either: state actors have systematically abused Facebook’s flagging process to censor political enemies. With these new rules, organizing that type of attack just became a lot easier. A few reports can drag a user into Facebook’s labyrinthine enforcement regime, which can result in having a group page deactivated or even being banned from Facebook entirely. This process gives the user no meaningful opportunity to appeal a bad decision.

Given the rules’ focus on sexual interests and activities, it’s easy to imagine who would be the easiest targets: sex workers (including those who work lawfully), members of the LGBTQ community, and others who congregate online to discuss issues relating to sex. What makes the policy so dangerous to those communities is that it forbids the very things they gather online to discuss.

Even before the recent changes at Facebook and Tumblr, we’d seen trolls exploit similar policies to target the LGBTQ community and censor sexual health resources. Entire harassment campaigns have been organized to use payment processors’ reporting systems to cut off sex workers’ income. When online platforms adopt moderation policies and reporting processes, it’s essential that they consider how those policies and systems might be weaponized against marginalized groups.

A recent Verge article quotes a Facebook representative as saying that people sharing sensitive information in private Facebook groups will be safe, since Facebook relies on reports from users. If there are no tattle-tales in your group, the reasoning goes, then you can speak freely without fear of punishment. But that assurance rings rather hollow: in today’s world of online bullying and brigading, the question is not if your private group will be infiltrated by trolls, but when.

Did SESTA/FOSTA Inspire Facebook’s Policy Change?

The rule change comes a few months after Congress passed the Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act (SESTA/FOSTA), and it’s hard not to wonder if the policy is the direct result of the new Internet censorship laws.

Wrongheaded as it is, the new rule should come as no surprise. After all, Facebook endorsed SESTA/FOSTA.

SESTA/FOSTA opened online platforms to new criminal and civil liability at the state and federal levels for their users’ activities. While ostensibly targeted at online sex trafficking, SESTA/FOSTA also made it a crime for a platform to “promote or facilitate the prostitution of another person.” The law effectively blurred the distinction between adult, consensual sex work and sex trafficking. The bill’s supporters argued that forcing platforms to clamp down on all sex work was the only way to curb trafficking, never mind the growing chorus of trafficking experts arguing the very opposite.

As SESTA/FOSTA was debated in Congress, we repeatedly pointed out that online platforms would have little choice but to over-censor: the fear of liability would force them not just to stop at sex trafficking or even sex work, but to take much more restrictive approaches to sex and sexuality in general, even in the absence of any commercial transaction. In EFF’s ongoing legal challenge to SESTA/FOSTA, we argue that the law unconstitutionally silences lawful speech online.

While we don’t know if the Facebook policy change came as a response to SESTA/FOSTA, it is a perfect example of what we feared would happen: platforms would decide that the only way to avoid liability is to ban a vast range of discussions of sex.

Wrongheaded as it is, the new rule should come as no surprise. After all, Facebook endorsed SESTA/FOSTA. Regardless of whether one caused the other or not, both reflect the same vision of how the Internet should work—a place where certain topics simply cannot be discussed. Like SESTA/FOSTA, Facebook’s rule change might have been made to fight online sexual exploitation. But like SESTA/FOSTA, it will do nothing but push innocent people offline.

Related Cases: Woodhull Freedom Foundation et al. v. United States

In the New Fight for Online Privacy and Security, Australia Falls: What Happens Next?

Thu, 12/06/2018 - 19:53

With indecent speed, and after the barest nod to debate, the Australian Parliament has now passed the Assistance and Access Act, unopposed and unamended. The bill is a cousin to the United Kingdom’s Investigatory Powers Act, passed in 2016. The two laws vary in their details, but both now deliver a panoptic new power to their nation’s governments. Both countries now claim the right to secretly compel tech companies and individual technologists (including network administrators, sysadmins, and open source developers) to re-engineer software and hardware under their control so that it can be used to spy on their users. Engineers who refuse to comply can be penalized with fines and prison; in Australia, even counseling a technologist to oppose these orders is a crime.

We don’t know – because it is a state secret – whether the UK has already taken advantage of its powers, but this month we had some strong statements from GCHQ about what they plan to do with them. And because the “Five Eyes” coalition of intelligence-gathering countries have been coordinating this move for some time, we can expect Australia to shortly make the same demands.

Ian Levy, GCHQ’s Technical Director, recently posted on the Lawfare blog what GCHQ wants tech companies to do. Buried in a post full of justifications (do a search for “crocodile clips” to find the meat of the proposal, or read EFF’s Cindy Cohn’s analysis), Levy explained that GCHQ wants secure messaging services, like WhatsApp, Signal, Wire, and iMessage, to create deceitful user interfaces that hide who private messages are being sent to.

In the case of Apple’s iMessage, Apple would be compelled to silently add new devices to the list of devices its apps believe you own: when someone sends you a message, it will no longer just go to, say, your iPhone, your iPad, and your MacBook – it will go to those devices, and to a new addition: a spying device owned by the government.

With messaging systems like WhatsApp, the approach will be slightly different: your user interface will claim you’re in a one-on-one conversation, but behind the scenes, the company will be required to silently switch you into a group chat. Two of the people in the group chat will be you and your friend. The other will be invisible, and will be operated by the government.

The intelligence services call it “the ghost”: a stalking ghost that requires the most secure tech products available today to lie to their users, via secret orders that their designers cannot refuse without risking prosecution.
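
To see why the “ghost” works, it helps to sketch the fan-out pattern that multi-device messengers broadly follow: each message is encrypted separately to every key registered for the recipient, so whoever controls a silently added key receives a readable copy of everything. The snippet below is a deliberately simplified, hypothetical illustration built on PyNaCl's SealedBox, not Apple's or Facebook's actual code.

```python
# A simplified, hypothetical sketch of multi-device "fan-out" encryption.
# This is NOT Apple's or Facebook's real implementation; it only illustrates
# why a silently added key (the "ghost") can read every future message.
# Requires PyNaCl: pip install pynacl
from nacl.public import PrivateKey, SealedBox

# The recipient's legitimate devices each hold their own keypair.
phone = PrivateKey.generate()
laptop = PrivateKey.generate()
registered_keys = {"phone": phone.public_key, "laptop": laptop.public_key}

# The "ghost": one more key quietly appended to the key directory,
# unknown to the user and invisible in the interface.
ghost = PrivateKey.generate()
registered_keys["ghost"] = ghost.public_key

def send(plaintext: bytes, keys: dict) -> dict:
    # Fan-out: one ciphertext per registered device key.
    return {name: SealedBox(pub).encrypt(plaintext) for name, pub in keys.items()}

ciphertexts = send(b"see you at 8", registered_keys)

# Every key holder, including the silent addition, can decrypt its own copy.
print(SealedBox(ghost).decrypt(ciphertexts["ghost"]))  # b'see you at 8'
```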

So this is the first step, after this Australian bill becomes law. We can imagine Facebook and Apple and other messaging services fighting these orders as best as they can. Big tech companies are already struggling with a profound collapse in trust among their customers; the knowledge that they may be compelled to lie to those users will only add to their problems.

But what about other services that refuse to compromise their users’ security? What about the open source projects that will ask their Australian contributors to stop working on their security code, and the businesses that will choose not to employ Australian developers, or decline to open offices in that country?

There can be only one step after you’ve compelled the big companies to agree to your back-doors, and that is to criminalize those truly secure services who prefer to follow the “laws of mathematics” instead of “the laws of Australia”.

Somewhat more quietly than the passage of the AA bill, the Australian Parliament this month also voted for an expansion of the country’s already wide-ranging website blocking powers. Australia continues to work to establish another precedent: that even supposedly open and democratic states should be able to censor and filter the Internet. If the country continues to walk down this road, then it’s only a matter of time before only back-doored communication tools run by compliant multinational tech companies are permitted in Australia, and all other services and protocols face government-mandated blocking and filtering.

That world is still only a potential future. There will be opportunities for companies, lawyers, activists, technologists, and Australian voters to keep a filtered, insecure Australian Net from becoming a dystopian reality. But this month, thanks to Australia’s lawmakers on both left and right, that reality is a giant step closer.

New Documents Show That Facebook Has Never Deserved Your Trust

Thu, 12/06/2018 - 18:19

Another week, another set of reminders that, while Facebook likes to paint itself as an “optimistic” company that’s simply out to help users and connect the world, the reality is very different.  This week, those reminders include a collection of newly released documents suggesting that the company adopted a host of features and policies even though it knew those choices would harm users and undermine innovation.

Yesterday, a member of the United Kingdom’s Parliament published a trove of internal documents from Facebook, obtained as part of a lawsuit by a firm called Six4Three. The emails, memos, and slides shed new light on Facebook’s private behavior before, during, and after the events leading to the Cambridge Analytica scandal.

Here are some key points from the roughly 250 pages of documents.

Facebook Uses New Android Update To Pry Into Your Private Life In Ever-More Terrifying Ways

The documents include some of the internal discussion that led to Facebook Messenger’s sneaky logging of Android users’ phone call and text message histories. When a user discovered what Messenger was doing this past spring, it caused public outrage right on the heels of the Cambridge Analytica news. Facebook responded with a “fact check” press release insisting that Messenger had never collected such data without clear user permission.

In newly revealed documents from 2015, however, Facebook employees discuss plans to coerce users into upgrading to a new, more privacy-invasive version of Messenger “without subjecting them to an Android permissions dialog at all,” despite knowing that this kind of misrepresentation of the app’s capabilities was “a pretty high-risk thing to do from a PR perspective.”


This kind of disregard for user consent around phone number and contact information recalls earlier research and investigation exposing Facebook’s misuse of users’ two-factor authentication phone numbers for targeted advertising. Just as disturbing is the mention of using call and text message history to inform the notoriously uncanny PYMK, or People You May Know, feature for suggesting friends.

“I think we leak info to developers”

A central theme of the documents is how Facebook chose to let other developers use its user data. They suggest that Mark Zuckerberg recognized early on that access to Facebook’s data was extremely valuable to other companies, and that Facebook leadership was determined to leverage that value.

A little context: in 2010, Facebook launched version 1.0 of the Graph API, an extremely powerful—and permissive—set of tools that third-party developers could use to access data about users of their apps and their friends.
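
For a sense of what that permissiveness looked like in practice, a v1.0-era request might have resembled the sketch below. This is a hedged reconstruction for illustration only: the API version and the friend-data permissions it relied on were retired in 2015, and the access token shown is a placeholder.

```python
# Illustrative reconstruction of a Graph API v1.0-era request (hypothetical;
# v1.0 and its friends_* permissions have long since been retired, and the
# token below is a placeholder). Under v1.0, an app could pull data about a
# user's friends, who had never installed the app or consented themselves.
import requests

ACCESS_TOKEN = "USER_TOKEN_WITH_FRIENDS_PERMISSIONS"  # hypothetical

resp = requests.get(
    "https://graph.facebook.com/v1.0/me/friends",
    params={"fields": "id,name,likes", "access_token": ACCESS_TOKEN},
)
for friend in resp.json().get("data", []):
    likes = friend.get("likes", {}).get("data", [])
    print(friend.get("name"), f"({len(likes)} likes visible to the app)")
```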

Dozens of emails show how the company debated monetizing access to that data. Company executives proposed several different schemes, from charging certain developers for access per user to requiring that apps “[Facebook] doesn’t want to share data with” spend a certain amount of money per year on Facebook’s ad platform or lose access to their data.

 
NEKO is Facebook’s acronym for its mobile app-install ad system.

The needs of users themselves were a lesser concern. At one point, in a November 2012 email, an employee mentioned the “liability” risk of giving developers such open access to such powerful information.

Zuckerberg replied:

Of course, two years later, that “leak” is exactly what happened: a shady “survey” app was able to gain access to data on 50 million people, which it then sold to Cambridge Analytica.

“Whitelists” and Access to User Data

In 2015, partly in response to concerns about privacy, Facebook moved to the more restrictive Graph API version 2.0. This version made it more difficult to acquire data about a user’s friends.

However, the documents suggest that certain companies were “whitelisted” and continued to receive privileged access to user data after the API change—without notice to users or transparent criteria for which companies should be whitelisted or not.

Companies that were granted “whitelist” access to enhanced friends data after the API change included Netflix, AirBnB, Lyft, and Bumble, plus the dating service Badoo and its spin-off Hot or Not.

The vast majority of smaller apps, as well as larger companies such as Ticketmaster, were denied access.

User Data As Anticompetitive Lever

Both before and after Facebook’s API changes, the documents indicate that the company deliberately granted or withheld access to data to undermine its competitors. In an email conversation from January 2013, one employee announced the launch of Twitter’s Vine app, which used Facebook’s Friends API. The employee proposed they “shut down” Vine’s access. Mark Zuckerberg’s response?

“Yup, go for it.”

“Reciprocity”

A significant portion of the internal emails mention Facebook enforcing “data reciprocity”: that is, requiring apps that used data from Facebook to allow their users to share all of that data back to Facebook. This is ironic, given Facebook’s staunch refusal to grant reciprocal access to users’ own contacts lists after using Gmail’s contact-export feature to fuel its early growth.

In an email dated November 19, 2012, Zuckerberg outlined the company’s thinking:


Emphasis ours.

It’s no surprise that a company would prioritize what’s good for it and its profit, but it is a problem when Facebook tramples user rights and innovation to get there. And while Facebook demanded reciprocity from its developers, it withheld access from its competitors.

False User Security to Scope Out Competitors

Facebook acquired Onavo Protect, a “secure” VPN app, in fall 2013. The app was marketed as a way for users to protect their web activity from prying eyes, but it appears that Facebook used it to collect data about all the apps on a user’s phone and immediately began mining that data to gain a competitive edge. Newly released slides suggest Facebook used Onavo to measure the reach of competing social apps including Twitter, Vine, and Path, as well as measuring its penetration in emerging markets such as India.


A "highly confidential" slide showing Onavo stats for other major apps.

In August, Apple finally banned Onavo from its app store for collecting such data in violation of its Terms of Service. These documents suggest that Facebook was collecting app data, and using it to inform strategic decisions, from the very start.

Everything But Literally Selling Your Data

In response to the documents, several Facebook press statements as well as Mark Zuckerberg’s own letter on Facebook defend the company with the refrain, “We’ve never sold anyone’s data.”

That defense fails, because it doesn’t address the core issues. Sure, Facebook does not sell user data directly to advertisers. It doesn’t have to. Facebook has tried to exchange access to users and their information in other ways. In another striking example from the documents, Facebook appeared to offer Tinder privileged access during the API transition in return for use of Tinder’s trademarked term “Moments.” And, of course, Facebook keeps the lights on by selling access to specific users’ attention in the form of targeted advertising spots.

No matter how Zuckerberg slices it, your data is at the center of Facebook’s business. Based on these documents, it seems that Facebook sucked up as much data as possible through “reciprocity” agreements with other apps, and shared it with insufficient regard to consequences for users. Then, after rolling back its permissive data-sharing APIs, the company apparently used privileged access to user data either as a lever to get what it wanted from other companies or as a weapon against its competitors. You are, and always have been, Facebook’s most valuable product.

Securing The Institutions We Rely On: A Grassroots Case Study

Thu, 12/06/2018 - 16:59

Grassroots digital rights organizing has many faces, including that of hands-on hardware hacking in an Ivy League institution. Yale Privacy Lab is a member of the Electronic Frontier Alliance, a network of community and student groups advocating for digital rights in local communities. For Yale Privacy Lab, activism means taking the academic principles behind Internet security and privacy out of the classroom and into the real world, one hacking tutorial or digital self-defense workshop at a time.

Yale Privacy Lab is an initiative of Yale Law School’s Information Society Project—which concerns itself with digital freedom, policy, and regulation—and serves as the project’s practical implementation arm. We interviewed founding member Sean O’Brien and Cyber Fellow and researcher Laurin Weissinger about their work empowering the next generation of digital rights defenders and about their advice for those wishing to emulate their example.

Yale Privacy Lab has been going strong since 2017. Tell us a bit about your origin story.

Sean O’Brien: Privacy Lab grew out of workshops that we were already doing for the law school. I had been doing them in New Haven for a while for some activist groups and then got involved in the law school after some people put me in touch with the Information Society Project. They were very enthusiastic. We did some things like Software Freedom Day with the Free Software Foundation, and we hooked up with a local makerspace here, MakeHaven, which gives us a nice, fun, technical setting to be doing those kinds of events.

As Yale Privacy Lab continues to evolve and expand, what are some of your latest endeavors?

SO: We do digital self-defense workshops, and we also do some fun things like taking photos of surveillance devices and mapping those around New Haven. Our role internally is to help give advice and be a resource for law students, faculty, scholars, and anyone who is involved in the legal clinics. Our legal clinics at Yale Law School do a lot of high-profile work, so there’s substantial interest in private and secure communications and a real need for them.

Our digital self-defense workshops are similar to the kind of information EFF has in the Surveillance Self-Defense guide, but a little different. Our take on it is very much shaped by the individuals who come in. We try not to do the standard cryptoparty thing, which is to make sure you cover a certain five tools. We’ll cover whatever needs to be covered based on who’s actually at the workshop. [Editor’s note: This approach fits the guidance included in EFF’s Security Education Companion guide.]

Laurin Weissinger: We’re doing this cyber security class here at Yale Law School for JD candidates and LL.M. candidates.

In the class, everyone gets a micro computer, and we run a hacking-friendly Kali Linux. It’s all about students actually understanding how computers work, and how software works—for example, how privileges and rights are being used in a Linux or Unix system. We did things like show them how network traffic can be intercepted easily and how unencrypted traffic can be read by third parties. All of this comes down to empowering students.
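
The interception demo Weissinger describes can be reproduced with a few lines of scapy. The sketch below is a generic example of the idea, not Yale Privacy Lab's actual course material: it sniffs local traffic and prints whatever plaintext appears in unencrypted HTTP (it needs root privileges and scapy installed).

```python
# Generic sketch of the classroom demo described above (not Yale Privacy Lab's
# actual materials): sniff local traffic and print whatever plaintext shows up
# in unencrypted HTTP. Requires scapy (pip install scapy) and root privileges.
from scapy.all import sniff, TCP, Raw

def show_plaintext(pkt):
    # Traffic to port 80 carries no TLS, so request lines, headers, and cookies
    # are readable byte-for-byte by anyone on the same network path.
    if pkt.haslayer(TCP) and pkt[TCP].dport == 80 and pkt.haslayer(Raw):
        print(pkt[Raw].load[:120])

sniff(filter="tcp port 80", prn=show_plaintext, count=20)
```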

What’s good about a lot of these technologies—FreedomBox, for example—is that they are free or relatively cheap and also enable privacy-enabled learning. If something breaks, it’s not the end of the world. At the same time, it is state-of-the-art privacy-enabling technology.

Another example is the running of Tor nodes, which is very interesting for instruction, because we can show, for example, that all Tor traffic is encrypted. We can sniff it here in the network to demonstrate how it works; and at the same time, we are running relatively cheap hardware the students are familiar with.

How much institutional support does the group receive to run these projects?

SO: We are currently a volunteer-driven initiative, and have been since the start. That means we don’t get any direct funding from the school or any grants, at least at the moment. We do get support for infrastructure—things like printing and event hosting—and those things are obviously not cheap. All of that is coming from the Information Society Project, inside the law school.

We also have had the benefit of the connections through the Electronic Frontier Alliance. That’s allowed us to reach out to other Internet freedom, anti-censorship, privacy groups here on the east coast and elsewhere to get some ideas and collaborate on thinking about these sorts of things. The free and open-source software movement has been huge in that direction as well. The Software Freedom Law Center has sent folks down to do presentations for us; and they have a close connection to the FreedomBox Foundation, which is where we get the real life support for the devices we’re setting up.

Beyond that, we have the great help of the librarians at Yale and Yale Law School, specifically. Early on, we did a bunch of presentations for the law librarians and they were very concerned, as librarians tend to be, about the privacy of their patrons. So they set up a Tor browser on every single computer they have there, and they encourage patrons to use it. They came up with a training for that and all the documentation they would need for their use cases.

How did you get Yale to agree to the Tor nodes? Do you have advice for other groups on that process?

SO: In my perspective, the first thing that needs to be done is getting people to use the Tor browser. That removes some of the stigma behind Tor use in general.

From a technical standpoint, the reason we latched onto FreedomBox is that it’s the easiest way we’ve found to graphically set up a Tor relay that is also a bridge, and it sets up an onion service (a Tor hidden service) for you. It’s basically a five-minute installation once you get the hang of it. We’ve done workshops just recently where we’ll have a bunch of people in the room install this on virtual machines. So if you’re going to have a problem at an institution because they don’t want you to set up physical devices on their network, you can get people at a workshop to do this.
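
For readers curious what that graphical setup amounts to under the hood, a hand-written torrc with roughly equivalent directives might look like the sketch below. This is a generic, hypothetical example rather than FreedomBox's generated configuration; the nickname, contact address, paths, and ports are placeholders.

```
# Illustrative torrc only -- placeholders throughout, not FreedomBox's actual output.

# Run a relay that acts as a (non-exit) bridge:
BridgeRelay 1
ORPort 9001
ExtORPort auto
ServerTransportPlugin obfs4 exec /usr/bin/obfs4proxy
Nickname MyTeachingBridge
ContactInfo admin@example.org
ExitPolicy reject *:*

# Publish an onion ("hidden") service that forwards port 80 to a local web server:
HiddenServiceDir /var/lib/tor/teaching_service/
HiddenServicePort 80 127.0.0.1:8080
```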

LW: In the cybersecurity class I did speak about the criminal aspects: why would criminals move to the Tor network, and so on. We underline the fact that this is just a rational move by criminals. If you want to, for example, host a forum where illegal stuff can be bought and sold, you would use the most privacy-enhancing technology available, which is Tor. At the same time, there are also a ton of illegitimate websites on what I’ll call the clear web. We know that any technology that exists will be used for illegitimate, criminal, morally problematic reasons. If criminals are using things like the Tor network, email encryption, secure messaging, etc, it means that these technologies appear to be offering some level of protection.

Do you have any resources that could help others who are interested in leading these kinds of projects?

SO: The main resource that we’ve been using for our workshops is called Citizen FOSS (free and open-source software). It’s a play on Citizen 4, which was the handle Edward Snowden used when he was in his operational phase. What we try to do is take the actual software—and sometimes the operational concepts—that Ed Snowden used and apply them as often as possible. The guide is very long, and we remix it for workshops on an ad-hoc basis depending on what the actual participants are interested in.

What do you see as Yale Privacy Lab’s role in the wider community?

SO: From the start, we’ve been very adamant about engaging with the New Haven community. Obviously if you want to be serious about this kind of anti-surveillance, anti-censorship work you have to engage in the world physically around you. So we always try to make sure the workshops are available to the community and that the resources are all Creative Commons licensed and available to as many people as possible.

I think reaching out to locals in the area has been a big part of the success. It gives us a grounding outside of what can sometimes be a stuffy Ivy League setting. It’s great that the law school allows us to use their facilities, but we want to have environments as well that are welcoming and creative and have more of a community feel. Philosophically and politically, we aren’t trying to do things that concern Yale only, and we also understand that Yale itself has a role in things like local surveillance.

What advice do you have for groups who want to help their institutions migrate to new software?

SO: We have a very strong dedication to free and open-source software, and also we have never suggested use of software that has a monetary cost associated with it. As everyone knows, the thing about FOSS is that you’re not tying yourself to a technology that has some big licensing cost. In our case at Yale—as an institution that pays for a lot of software—you’re also not tying yourself to the institutional procurement process behind that.

The thing about free software is not just that it gives you all the freedom to remix and modify code, but also it’s better from a privacy and security standpoint because it can be audited. We let people know that we care about this thing called licensing, but we care from a privacy and security standpoint. It’s a basic truism that the availability of the source code—the ability for security experts to read it—means it’s hard to hide malware in there. I would say, if you’re interested in growing quickly, getting your stuff out there, being able to do it without a ton of people over your shoulder, use FOSS tools.

Focusing on communication needs is really important. In our case, the law school clinics are always talking to at-risk users, so convincing people at the law school and at these clinics that they need this stuff is not very hard. In some other areas, it might be more basic. It might be, “Do you want to get away from your stalker ex-boyfriend? Do you want to not get all your banking information stolen when you’re working at a cafe?” Those might be bigger selling points.

The other thing is just trying to have diplomacy. It’s important to understand that others’ concerns are based in the institutional norms that those workers are used to, rather than trying to just reflexively tell them to go screw themselves.

LW: My first tip comes from my cybersecurity perspective which is: do not run just anything. Run what is the industry standard, particularly when it comes to crypto. In most cases, that will also be open-source. Rely on projects that you know are open-source that are being audited, that are being used by the industry; which means that errors and bugs will be found with a greater likelihood than they would be in random stuff. It is not just about privacy, it’s also about your security.

Finally, get some support to run your infrastructure in a way that does not break the network of the institution you work with. Make sure you have as little impact on them as possible and you will be far more likely to get good institutional buy-in.

The most effective digital rights advocacy is tailored to the skill sets of the organizers and responsive to the needs of the communities they serve. Yale Privacy Lab demonstrates how to build a campus community around digital rights activism, and offers an excellent example to emulate elsewhere.

Are you organizing where you live? If so, we invite you to learn more about the Electronic Frontier Alliance and consider adding your group to the national network. To find grassroots groups in your area that you can collaborate with or assist, please peruse our allies page.

Amendments to Mauritius' ICT Act Pose Risks for Freedom of Expression

Thu, 12/06/2018 - 08:21

Mauritius doesn’t get a whole lot of international attention. The island nation off the southeast coast of Africa, officially the Republic of Mauritius, is a diverse country that is highly ranked for democracy and for economic and political freedom. The Economist Intelligence Unit has named it the only “full democracy” in Africa, and Freedom House’s latest Freedom in the World report calls it a free country. The country’s Constitution (Art. 12) protects freedom of expression, with exceptions in line with Article 19 of the Universal Declaration of Human Rights.

But readers of this blog know that democracies, from France to India and many places in between, often get Internet regulation terribly wrong. Recent amendments to Mauritius’ ICT Act are exemplary of that fact.

The Information and Communications Technologies Act was created in 2001 and covers a broad array of topics, from fraud to identity theft to tampering with telecommunications infrastructure. The Act also defines as an offense the use of telecommunication equipment to “send, deliver or show a message which is obscene, indecent, abusive, threatening, false or misleading, or is likely to cause distress or anxiety.”

But if that weren’t troubling enough, the latest amendment has added even more adjectives to that list, thus potentially criminalizing messages that “annoy”, “humiliate”, or even “inconvenience” the receiver or reader. As Mauritian journalist Lowena Sowkhee recently wrote:

Parents should now fear that their children can get arrested for their use of social media. It shall now be easy to be punished if we publish, share or comment on posts that may annoy another individual. There is no need for the complainant to prove that he has been distressed by the post. Distress is a mental condition, which is difficult to prove.

Furthermore, the latest amendment removes a clause in (h)(ii) that previously included the phrase “for the purpose of causing”, thus removing any need to prove intent to harm.

The government has defended the Act as “important recourse for victims of social networks misuses to report their grievances,” noting that “social media companies are facing increased international scrutiny as the number of victims of online dangers is sharply increasing.”

Sowkhee, on the other hand, calls the amendment a “clear violation of freedom of expression,” comparing it to section 66(A) of India’s Information Technology Act, which was struck down by the country’s Supreme Court as unconstitutional and as “arbitrarily, excessively and disproportionately” invasive of the right to free speech. While some might call Sowkhee’s comparison of the law to countries like Iran or China a stretch, the amendments to the ICT Act are sadly in line with the laws of countries such as Egypt, the UAE, and Jordan—none of which are democracies. If the government of Mauritius has regard for the freedom for which the country is known, it should immediately repeal these amendments.

In a Letter To The EU, European Film Companies and Sports Leagues Disavow Article 13, Say It Will Make Big Tech Stronger

Wed, 12/05/2018 - 20:42

A coalition of some of Europe's largest film companies and sports leagues have published an open letter to the European Union officials negotiating the final stage of the new Copyright Directive; in their letter, the companies condemn "Article 13," the rule requiring all but the smallest online platforms to censor their users' videos, text-messages, photos and audio if they appear to match anything in a crowdsourced copyrighted works database.

The companies say that Article 13 will give more power to Google and the other Big Tech companies it was supposed to rein in, and make it harder for entertainment companies to negotiate favorable deals with the tech sector. They demand that the negotiators finalising the Directive remove their products from Article 13's scope, unless the negotiators want to roll back the Article 13 language to its 2016 state, an essentially impossible outcome.

Support for Article 13 is evaporating. EU Member States have come out against it. Nearly 4,000,000 Europeans have signed a petition opposing it, and independent scholars and experts have objected to it from the start.

If you're European and you agree, you can join 4,000,000 others in calling for an end to this ridiculous, dangerous exercise.

Dear Tumblr: Banning "Adult Content" Won't Make Your Site Better But It Will Harm Sex-Positive Communities

Wed, 12/05/2018 - 15:20

Social media platform Tumblr has announced a ban on so-called “adult content,” a move made, it seems, in reaction to Tumblr’s app being removed from the Apple app store. But while making the app more available is in theory good for Tumblr users, in practice what’s about to happen is mass censorship of communities that have made Tumblr a positive experience for so many people in the first place.

On December 3, Tumblr CEO Jeff D’Onofrio posted a lengthy missive about a new policy, titled, apparently unironically, “A better, more positive Tumblr.” Instead of laying out a vision that is better and positive, D’Onofrio’s post lays bare the problems with the ban on so-called “adult content.” First of all, the policy is confusing and broad, leaving users in the lurch about what they can and can’t do on Tumblr. Second, according to D’Onofrio, enforcement of the policy will be reliant on automated tools, the use of which is—and always has been—rife with problems. Third, the people who will end up punished aren’t pornbots or sex traffickers but already-marginalized groups who have built sex- and body-positive communities on Tumblr. And finally, all of these things come together to show just how many ways platforms and tech companies can get in between users and their freedom of expression.

In D’Onofrio’s post, he explains that “in order to continue to fulfill [Tumblr’s] promise and place in culture, especially as it evolves, we must change,” going on to say that as part of that evolution, “adult content” will no longer be allowed on the platform. He further explains:

“We recognize Tumblr is also a place to speak freely about topics like art, sex positivity, your relationships, your sexuality, and your personal journey. We want to make sure that we continue to foster this type of diversity of expression in the community, so our new policy strives to strike a balance.”

On the face of it, this is contradictory. Saying that adult content is banned, but that “diversity of expression” related to all of those listed topics isn’t, is impossible for the average user to parse. Tumblr’s FAQ “clarifying” the definition of adult content (that is, “photos, videos, or GIFs that show real-life human genitals or female-presenting nipples, and any content—including photos, videos, GIFs and illustrations—that depicts sex acts”) only compounds the problem.

The new policies rule out almost all forms of nudity. “Female-presenting nipples” in particular is a phrase that has come under ridicule, because, among other things, it polices bodies for what they look like, based on a specific conception of gender, and a body part that only some cultures—but certainly not all!—prohibit showing in public.

On the other hand, the very next question has Tumblr claiming that “female-presenting nipples” can be shown in some contexts, and that written erotica, “political” nudity, and “art” are permitted. These are all subjective categories that leave a lot of people on uncertain ground. Just look at Facebook, which has similar rules regarding nudity. In the past few years, we’ve seen Copenhagen’s Little Mermaid statue, a famous illustration of a woman licking an ice cream cone, a classic French painting, and a 16th-century statue of the Roman god Neptune taken down by Facebook’s content moderators under the restrictive policy.

Tumblr has also decided that the way to make these subjective calls about what is “art” and what is “adult content” is by using automated tools. D’Onofrio basically admits that these tools don’t work properly, saying in his post that “We’re relying on automated tools to identify adult content and humans to help train and keep our systems in check. We know there will be mistakes”.

That is an understatement. Filters don’t work. We’ve seen this in the copyright context many times. For example, YouTube’s Content ID system works by checking newly uploaded material against a database of copyrighted material and notifying copyright holders if there’s a match. And it resulted in five copyright claims being filed against a video of white noise. Five people claimed they literally owned exclusive rights to static.
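To illustrate the kind of matching described above, here is a rough, hypothetical Python sketch—not YouTube’s actual Content ID code—that fingerprints fixed-size chunks of an upload and flags any rightsholder whose registered fingerprints overlap the upload beyond a threshold. The `fingerprint` and `claimed_by` helpers and the threshold value are invented for illustration; real systems use fuzzy, perceptual matching, which is exactly where over-broad claims like the white-noise example creep in.

    # Illustrative sketch of database-lookup filtering, not a real Content ID system.
    import hashlib

    def fingerprint(data: bytes, chunk_size: int = 4096) -> set[str]:
        """Hash each fixed-size chunk of the media stream."""
        return {
            hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)
        }

    def claimed_by(upload: bytes, registry: dict[str, set[str]],
                   threshold: float = 0.3) -> list[str]:
        """Return rightsholders whose registered fingerprints overlap the upload
        by more than `threshold`; looser thresholds mean more false claims."""
        upload_fp = fingerprint(upload)
        return [
            owner for owner, fp in registry.items()
            if len(upload_fp & fp) / max(len(fp), 1) > threshold
        ]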

And that’s just when it comes to checking for copyrighted material. It’s rather brazen of Tumblr to suggest it has the ability to develop and train a program to determine if something is porn; after all, there is literally a famous Supreme Court quote about the difficulty of defining obscenity! And so far, as any informed observer would have predicted, Tumblr’s system is failing miserably. Among the items flagged are a picture of Pomeranian puppies, selfies of fully-clothed individuals, images of raw chicken, and much much more. And, despite D’Onofrio’s statement that art, discussion of sexuality, and politics wouldn’t violate the terms, all of those categories have been hit.

When we look to groups outside the dominant culture, the problem is especially pernicious. Already, an image of a video game character on a pride flag, a selfie with the word “lesbian,” and someone talking about a family death due to AIDS have all been flagged. Tumblr may think it’s creating a “better” community, but it’s destroying what made it great in the first place.

In his post, D’Onofrio defends the policy by saying that the bottom line is that “there are no shortage of sites on the internet that feature adult content.” Indeed, the Internet is full of porn, the overwhelming majority of which caters to heterosexual men. But people created sex-positive spaces on Tumblr that don’t exist elsewhere. People created portfolios of their work, all of it, on the platform. Those spaces are going to vanish.

A business decision?

Three paragraphs into his better, more positive manifesto, D’Onofrio states “posting anything that is harmful to minors, including child pornography, is abhorrent and has no place in our community. We’ve always had and always will have a zero tolerance policy for this type of content” and asks that no one confuse that with this new policy. Child exploitation imagery is both vile and illegal, and the fact that Tumblr apparently wasn’t eliminating it shows that it needed to hire people to enforce its existing policy, not outsource the job to algorithms. So why create this new, wholesale ban?

It’s impossible to divorce the new policy from the fact that, just a month prior to the announcement, Tumblr disappeared from the Apple App Store, or from the fact that, when asked about it, a Tumblr spokesperson responded with nearly the same words that D’Onofrio used in his post.

Apple’s App Store has long acted as censor and gatekeeper to the Internet. In 2010, Steve Jobs said that the iPad offered “freedom from porn” and that there was a “moral responsibility to keep porn off the iPhone.” Apple has consistently enforced draconian rules for app developers, exerting control over how its users get to experience the Internet. The company’s rules have even had the effect of silencing the press, as in 2010 when a large-scale removal of apps containing nudity impacted several mainstream German news publications.

We don’t know if Apple is the sole reason for these new rules. Tumblr also got banned this year in Indonesia because of pornography, for example, and may just want to make itself as non-controversial as possible. And it’s notable that Tumblr’s new policy is largely in line with that of peers Facebook, Microsoft, and YouTube, all of which heavily restrict so-called “adult content.”

The end result, though, is that companies and governments are changing how users get to express themselves on the Internet. The multi-billion dollar corporate porn industry won’t go away; what will disappear are the places for people to talk frankly, openly, and safely about sex and sexuality. Groups that are pushed out of mainstream discussions or find themselves attacked in mainstream spaces are once again losing their voices.

Alameda and Contra Costa County Sheriffs Flew Drones Over Protests

Wed, 12/05/2018 - 12:16

At the International Association of Chiefs of Police Conference in October, presenters from the Orlando Police Department issued a stern warning for fellow local law enforcement officials eager to start a small unmanned aerial systems (sUAS or drones) program.

“We don’t want to use them when people are exercising freedom of speech,” Orlando Police Sgt. Jeffrey Blye told the audience during the best practices portion of his talk. “Because that will destroy your program quickly.”

That is excellent advice for police departments, but sheriffs in the San Francisco Bay Area have chosen not to follow it.

The Alameda County Sheriff’s Office had drones at the ready on the scene for many high-profile protests in Berkeley and on the University of California Berkeley campus throughout 2017. Just to the north, the Contra Costa County Sheriff deployed drones over immigrant rights rallies outside the West County Detention Facility in Richmond, California, which houses detainees for ICE.

Records obtained by EFF and the Center for Human Rights and Privacy answer some questions, and raise many more about when, how, and if these agencies should deploy drones at peaceful protests. The same goes for drone deployment at mostly peaceful protests that are interrupted by a small portion of the participants engaging in civil disobedience, violence, or property damage.

Review the public records at DocumentCloud (DocumentCloud's privacy policy applies). 

Pilots vs. Protesters in Alameda County

On the final day at the same IACP conference, where Orlando Police spoke about drone best practices, the Anti-Defamation League hosted a panel titled “Handling Extremist Events: Balancing Freedom of Speech, Peaceful Assembly, and Public Safety.” It focused largely on clashes between protesters and counterprotesters, such as events where alt-right and nationalist demonstrators have faced off with Antifa-affiliated groups.

A police chief in the audience asked the panelists whether they had used drones to surveil these protests. Assistant Chief Jeffery Carroll, who heads the Homeland Security Bureau in Washington, D.C.’s Metropolitan Police Department, said no, because the jurisdiction is a "No Drone Zone." Richmond Police Chief Alfred Durham from Virginia explained that in his state, drones generally require a warrant, except in exigent circumstances, such as an active shooter. Instead, the agencies generally relied on helicopters.

UC Berkeley’s Chief of Police Margo Bennett described a very different approach. She publicly confirmed, for the first time to our knowledge, that the Alameda County Sheriff’s Office had deployed drones at protests.  

Chief Bennett disclosed that she requested air support ahead of protests against campus speakers, such as Ben Shapiro and Milo Yiannopoulos. According to the county’s policy, the sheriff is allowed to put drones in the air once there is reason to believe a felony—such as physical violence or property destruction—is occurring.

With that tip, we compared the dates of a series of high profile protests against a bare-bones Alameda County Sheriff drone mission log obtained earlier by the Center for Human Rights and Privacy through a California Public Records Act request. Then we used that information to request the formal “Mission Report Forms” from those protests.

Prior to the IACP conference, the Alameda County Sheriff had refused to hand over these mission reports, claiming that as investigative records they were exempt. This time, they turned over the documents for 10 separate mission logs related to six days of protests, although on some days the drones never left the ground. In addition to UC Berkeley, the Alameda County Sheriff also responded to air support requests from the City of Berkeley Police Chief.  

Read the mission reports and associated emails.

August 27, 2017 - "Say No To Marxism"/"Rally Against Hate"

At the request of the Berkeley Police Department, two separate Alameda County drone teams were on hand. They flew DJI Phantom 4 drones over the zone after receiving reports of alleged assault.

September 14, 2017 - Guest Speaker Ben Shapiro at UC Berkeley

The UC Berkeley Police Chief requested drone surveillance for the planned protest against the speaker. While Alameda County deputies authorized DJI Inspire and DJI Phantom 4 drones, no flights were conducted.

September 24-27, 2017 - Milo Yiannopoulos at UC Berkeley/”Free Speech Week”

As was the case with the Shapiro event, the UC Berkeley Police Chief requested drone surveillance, and the Alameda County Sheriff logged missions for the DJI Inspire and DJI Phantom 4 drones. On the first two days, the drones never left the ground. On September 26, the drones flew after violence allegedly occurred. On September 27, the drones were on standby, but did not deploy.

(Emails indicate Berkeley Police further requested drone support on April 27, 2017, to monitor protests following the cancellation of a talk by Ann Coulter. However, there is no log or mission report corresponding to this date.)

Interestingly, the mission logs show an exclusive focus on protesters associated with left-wing, anti-racist, anti-fascist, or other groups opposing the Trump presidency. There is little mention of the alt-right, nationalist, or pro-Trump groups also involved in the altercations. The drone pilots used boilerplate language at the beginning of each mission report:

By Any Means Necessary (BAMN) and ANTIFA members were expected to take part in the protest and these groups are notoriously violent and encourage civil disobedience, property destruction and physical violence. BAMN states the following on their web page, “BAMN will employ whatever means are necessary to oppose and defeat these attacks on the democratic and egalitarian aspirations and struggles of our people.” ANTIFA is described as a movement that engages in varied protest tactics, which included property damage and physical violence.

At the IACP conference, UC Berkeley Police Chief Bennett explained that the drones were most helpful in “situational awareness.” She also said the department later confronted the protest organizers with the captured footage.

Chief Bennett described how faith leaders and non-violent protesters at one event (“people I know as activists that I know and respect”) were peacefully escorting rally participants away from the scene:

But Antifa was able to get into their leaving process, and me being able to see that from the drone perspective, seeing it from the air allowed me to have really candid conversations [over] the following days with some accountability. ‘You cannot have these people with what you’re doing, delegitimizing yourselves and creating major issues for our community.’ It was a perspective we only got from the freedom of being in the air.

Chief Bennett did not say whether she had similar conversations with conservative groups whose members were also involved in altercations.

While police are increasingly challenged with providing safe space for dissent, flying drones over protests threatens the First Amendment rights of peaceful protesters, even if a small minority may allegedly engage in criminal activities. The Alameda County Sheriff’s policy for the operation of drones offers no guidance on how to balance freedom of expression and assembly against situational awareness and evidence gathering.

In fact, the procedures do not address protests at all. Instead, deputies seem to rely on parts of the policy that say missions are authorized if there is probable cause it will capture footage relevant to a felony. While probable cause is a better requirement than no standard at all, it is a significantly lower threshold than the warrant requirements in states like Florida, Virginia, and North Dakota. Also, the policy encompasses an extremely broad variety of offenses, but in our view, police should only be allowed to aim drones at protests when the most serious crimes are suspected.

Following each drone mission, deputies are supposed to review and evaluate the evidentiary value of the footage, then purge footage of “identifiable individuals” unless there is reasonable suspicion that a crime occurred. The Alameda County Sheriff refused to release records reflecting the evaluations conducted for protest footage, claiming they were exempt as investigative records.

We also requested a copy of the annual audit of the drone program required by the general orders. The sheriff’s response: it does not have “any audits pertaining to our sUAS program.” It is alarming that the sheriff is not keeping a close eye on this intrusive program.

The Alameda County Sheriff has spent more than $120,000 on six drones, according to documents obtained by the Center for Human Rights and Privacy. These costs include more than $90,000 on two drones that are rarely deployed.

Drones Over the ICE Protests in Contra Costa County

On June 30, researchers for the Center for Human Rights and Privacy witnessed firsthand a drone monitoring peaceful immigrant rights protests outside the Contra Costa County Sheriff’s West County Detention Facility in Richmond, California. This jail houses undocumented immigrants through a deal with ICE (Sheriff David Livingston announced on July 10, 2018, that he would terminate the ICE contract). A subsequent public records request for mission logs confirmed that the sheriff authorized drone flights for rallies on June 26 and June 30, 2018, at the detention facility. The fact that the drone was monitoring First Amendment activities was not documented in the log, although it was noted that the drones were used for "facility safety and security."

The use of a DJI drone at the June 30 Abolish ICE protest was previously reported by Darwin BondGraham of the East Bay Express. He noted that an internal ICE memo released in 2017 claimed with “moderate confidence” that the drone manufacturer was “providing U.S. critical infrastructure and law enforcement data to the Chinese government.” The company disputed the claim. The U.S. Army grounded its DJI drones in 2017 over security vulnerability concerns. Contra Costa County operates three of these drones, purchased at a cost of more than $19,000.

The Center for Human Rights and Privacy filed a public records request for photos and videos captured by the drones, but the agency rejected the request, citing the investigative record exemption under the California Public Records Act.

Contra Costa County’s policy is almost identical to Alameda County’s, particularly in failing to address the privacy and free speech implications of flying drones over demonstrations and political gatherings.

Community Control

This police use of drones to spy on First Amendment activity is just one reason why many community members oppose drone programs. This is why city councils and the public as a whole, and not law enforcement officials, must have the ultimate power to decide whether or not police agencies acquire drones. And if city councils approve drone programs, they must insist on robust policies to limit how police use drones.

This year, cities in San Francisco's East Bay area have taken up policies to ensure that law enforcement doesn’t acquire surveillance technology without such a public process. In Berkeley and Oakland, city councils passed measures sometimes referred to as “Community Control Over Police Surveillance” (CCOPS) ordinances or “Surveillance Equipment Regulation Ordinances” (SERO). In Silicon Valley, Santa Clara County adopted such a law, too, which applies to that county’s sheriff.

It’s time for East Bay county governments to follow suit. Sheriffs should not be allowed to fly drones over protests—or at all—without an accountable process approved by the communities they serve. Otherwise, as the Orlando Police Department cautioned, these programs ought to be destroyed.

This report was co-authored with Mike Katz-Lacabe, director of research for the Center for Human Rights and Privacy.

Related Cases: Drone Flights in the U.S.