EFF's Deeplinks Blog: Noteworthy news from around the internet

Help Us Save the Internet: EFF Seeks a Tech Projects Director

Fri, 02/28/2020 - 16:23

There’s a very rare opportunity available right now for an engineering director to join EFF’s leadership team, and we’re asking our community to help us find the perfect candidate.

We’re doing an open hire for our Technology Projects Director role. The role leads a 16-person team that uses its creativity and skill to create a more secure, private, and censorship-resistant Internet. In addition, this director joins our senior leadership team, helping EFF figure out which positions to take, which projects to invest resources in, and what the strategic direction of the organization should be. That’s because EFF believes that technical expertise should be embedded at every level of our organizational decision-making.

This role has two huge benefits: the ability to use your work to change the world and a chance to work at a place that cares deeply about the people doing the work.

What do we mean by change the world? This role is uniquely positioned to leave a big legacy on the planet. EFF’s Technology Projects team is a high-performing team whose day-to-day management decisions are handled by two directors who will support this position. The team is responsible for projects such as:

  • Privacy Badger, EFF’s wildly popular tracker blocker
  • HTTPS Everywhere, used by millions to make their web browsing more secure
  • Certbot, a Let’s Encrypt client helping website owners globally receive and install SSL certificates to secure their sites
  • Panopticlick, a research and education project that sheds light on the seedy underbelly of browser fingerprinting
  • STARTTLS Everywhere, a project to foster STARTTLS adoption and make email delivery more secure
  • EFF’s Threat Lab, which has published blockbuster investigations into abusive practices by technology companies, leading to major policy changes

Perhaps even more importantly, this role will help us dream up the next chapter. Given the pressing threats of government and corporate power to everyday users, how can EFF best use its resources to defend users? What new software projects should we launch? Where can we best influence technology policy? We’re looking for people with creativity, vision, and the right amount of optimism and humility to help us answer those questions. And we promise: it’ll be fun.

EFF is more than just a place to work. It’s a community. Our workplace culture is steeped in an ethos that prioritizes kindness to our colleagues, and we deliberately seek applicants with different perspectives, identities, and experiences to build an inclusive workplace. We believe people should be able to work here while also having family commitments and outside interests, and our benefits and our culture reflect that. Some people join EFF having been burned by unfeeling tech companies or corporate law jobs, and coming to EFF can feel a lot like coming home – a place where everyone genuinely wants you to be successful, where we do our best to leave drama at the door even as passion is welcomed, and where we offer flexibility and opportunities for personal and professional development.  

We want to encourage lots of different people to apply for this role, but we won’t achieve that without the help of our community. Please help EFF by sending a note about this role to folks that you believe might be a good fit, and sharing this announcement on social media.

And if you’re thinking of applying but have a few questions, please drop me a note at rainey@eff.org, and either I or one of my colleagues will get back to you.

Read details of the role and apply by visiting our hiring portal.  

Schools Are Pushing the Boundaries of Surveillance Technologies

Thu, 02/27/2020 - 12:54

A school district in New York recently adopted facial recognition technology to monitor students, making it one of a growing number of schools across the country committing mass privacy violations against kids in the name of “safety.” The invasive use of surveillance technologies in schools has grown exponentially, often without oversight or recourse for concerned students or their parents.

Not only that, but schools are experimenting with the very same surveillance technologies that totalitarian governments use to surveil their citizens and abuse their rights everywhere: online, offline, and on their phones. What does that mean? We are surveilling our students as if they were dissidents under an authoritarian regime.

Schools must stop using these invasive technologies. Americans are already overwhelmingly uneasy with governments’ and corporations’ constant infringements on our personal privacy. Privacy invasions on students in educational environments should be no exception.

Schools are operating as testbeds for mass surveillance, with no evidence that it works and no way to opt out

In recent years, school districts across the country have ramped up efforts to surveil students. In physical spaces, some schools are installing microphones equipped with algorithms that often misinterpret coughs and higher-pitched voices as “aggression.” School districts’ ongoing experiments with facial recognition technology mirror other law enforcement and government programs around the world.

Schools are also watching students online and on their phones. Social media monitoring company Social Sentinel offers software to monitor students’ social media accounts, not unlike what the Department of Homeland Security regularly does to immigrants and Americans. School and workplace surveillance vendors like Bark, Qustodio, and AirWatch encourage schools and families to install spyware on children’s phones: software that can work similarly to surreptitiously installed stalkerware or government-sponsored malware campaigns. Qustodio, one of many companies marketing to both schools and parents, earnestly encourages parents to “monitor your kid’s Internet use NSA-style.”

Many of these technologies compromise not only students’ privacy, but also their security. Web filtering services Securly and ContentKeeper (which partners with the notorious student surveillance service Gaggle) compel students to install root certificates so administrators can comprehensively monitor students’ Internet activity, effectively compromising the security protocols that encrypt over 80% of web sessions worldwide. Just last year, the government of Kazakhstan tried to deploy a similar program to surveil its citizens. Administrators, and potentially third parties (if this filtering is done via “the cloud”), can decrypt and gain complete access to students’ web browsing sessions: what they’re reading, what they’re typing, and even sensitive information like passwords.
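To make the stakes concrete, here is a minimal sketch, using only Python's standard library, of how one might check which authority issued the certificate a network presents for a well-known site. The hostname and the interpretation are illustrative assumptions, not a vetted interception detector.

```python
# Hedged sketch: inspect the certificate presented for a well-known site.
# On an uninstrumented network, the issuer is a public CA; if a filtering
# vendor's root CA appears instead, TLS traffic is being decrypted in transit.
import socket
import ssl

def get_issuer(hostname: str, port: int = 443) -> dict:
    """Return the issuer fields of the leaf certificate for hostname."""
    context = ssl.create_default_context()
    # If the filtering CA has been force-installed in the OS trust store,
    # verification still succeeds; that is exactly the problem at hand.
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return dict(field[0] for field in cert["issuer"])

if __name__ == "__main__":
    issuer = get_issuer("example.com")
    print("Certificate issuer:", issuer.get("organizationName", "unknown"))
```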

Schools refer to these technologies as “student safety” measures, but this label doesn’t change the fact that these are surveillance technologies. Surveillance is surveillance is surveillance.

Pervasive and invasive surveillance is not an effective “safety” measure

What’s more, there is no evidence that surveilling students leads to better safety outcomes in general. In fact, the few studies that exist show that more cameras inside school buildings decrease students’ perceptions of safety, equity, and support. Additionally, more stringent surveillance ecosystems could have a chilling effect on students’ development. These effects, in turn, tend to impact minority groups and women disproportionately.

For instance, social media filters employed by some districts send automated alerts to school administrators and, in some cases, local police. The breadth of filtering typically employed by school districts can be staggering. Some flag terms and sites relating to drugs and violence, as well as terms about mental and sexual health. Students may feel less safe researching help online under these surveillance measures.

Schools with larger populations of students of color are also more likely to adopt stricter surveillance policies. The spread of pervasive surveillance also threatens to further entrench the disturbing school-to-prison pipeline, in which students of color are especially vulnerable to discriminatory over-policing. And deploying pervasive, high-tech surveillance is expensive, which means administrators are investing in oppressive panopticons instead of more empirically supported measures, like investing in education or improving mental health support for students. These funds would be better spent on fixing crumbling school infrastructure and on students themselves than on spying on them through unproven technological panopticons whose harms will fall hardest on students who are already vulnerable to systemic injustice.

Surveillance technologies are often marketed to prevent bullying (both online and offline) and self-harm, but it’s not clear how they purport to do so. Companies that market their surveillance technologies as useful for preventing school shootings are especially guilty of trying to sell digital snake oil. It is not possible to predict, let alone prevent, these tragedies with surveillance and tracking tools like facial recognition. One vendor of facial recognition technology for schools even admits that it can’t prevent school shootings.

Surveillance does not equal safety; it undermines student trust in their learning environments, isn’t effective at keeping them safe, and reinforces systemic injustice. Schools need to slam the brakes and think about what kind of dystopia they’re creating for their students. 

Reform or Expire

Wed, 02/26/2020 - 19:47

Earlier today, the House Committee on the Judiciary was scheduled to mark up the USA FREEDOM Reauthorization Act of 2020, a bill meant to reform and reauthorize Section 215 of the USA PATRIOT Act, as well as some other provisions of FISA, before they are due to expire on March 15, 2020. At the last minute, this markup was postponed without warning and without a new date, throwing the process into chaos.

It is time to enact real reforms to the government’s use of national security authorities, beginning with the obvious, overdue step of prohibiting the intelligence community from using Section 215 to collect the call records of innocent Americans on an ongoing basis. Just yesterday, the New York Times reported that the Privacy and Civil Liberties Oversight Board (PCLOB) found that between 2015 and 2019, the CDR program cost taxpayers $100 million but yielded only one significant investigation.

Especially when compared to the previously introduced Safeguarding Americans’ Private Records Act, the bill scheduled to be marked up today was far from perfect. But the USA FREEDOM Reauthorization Act of 2020 included some obvious and needed reforms, including ending the authority to conduct the CDR program.

EFF, along with other civil society advocates, has long been clear about the need for significant reform of Section 215. In fact, EFF has been suing over the NSA’s CDR program since 2006—long before the NSA admitted the program existed. In 2015, after Edward Snowden’s revelations in 2013 and after a federal appeals court ruled that the government’s interpretation of Section 215 was “unprecedented and unwarranted,” Congress passed the USA FREEDOM Act to amend Section 215 to stop mass surveillance of Americans’ telephone records. Although the law allowed the NSA to continue to use Section 215 to collect Americans’ call records, it substantially narrowed the program’s operation.

This first set of reforms was important, but it is now clear that it did not go far enough.

In 2018, the NSA announced that it received large numbers of CDRs it should not have had access to under the USA FREEDOM Act, and that these “technical irregularities” began in 2015. Despite this, the NSA encountered yet another “overcollection” incident just months later. After these two incidents, the NSA voluntarily shut the program down.

In 2019, both the House Committee on the Judiciary and the Senate Committee on the Judiciary called witnesses from the NSA, the FBI, and the DOJ to discuss the current use of Section 215 authorities. The witnesses told both Committees they were requesting the renewal of the legal authorization for the CDR program (which they had voluntarily shut down) because it might be useful one day.

And this is just a small selection of the problems that pervade Section 215. Although the law has become synonymous with the NSA’s collection of call records, it actually has a much wider scope. In addition to authorizing ongoing collection of telephone records, Section 215’s “business records” authority allows the government to obtain a secret order from the Foreign Intelligence Surveillance Court (FISC) requiring third parties to hand over any records or other “tangible thing” if deemed “relevant” to an international terrorism, counterespionage, or foreign intelligence investigation.

In the hearings last year, witnesses confirmed that the 215 “business records” provision may allow the government to collect sensitive information, like medical records, location data, or even possibly footage from a Ring camera. Both committees appeared rightfully skeptical, and the reform bills introduced in Congress this year attempt to curb some of the worst potential abuses of this far-ranging authority.

It is past time for reform. Congress has already extended these authorities without reform once, without debate and without consideration of any meaningful privacy and civil liberties safeguards. If Congress attempts to extend these authorities again without significant reform, we urge members to vote no and to allow the authorities to sunset entirely.

How Ring Could Really Protect Its Users: Encrypt Footage End-To-End

Wed, 02/26/2020 - 17:59

Last week, we responded to recent changes Amazon’s surveillance doorbell company Ring made to the security and privacy of its devices. In our response, we made a number of suggestions for what Ring could do to address the privacy and security concerns of its customers and the larger community. One of our suggestions was for Ring to implement measures that require warrants to be issued directly to device owners in order for law enforcement to gain access to footage. This post elaborates on that suggestion by introducing a technical scheme that would protect both Ring's customers and the wider community by employing end-to-end encryption between doorbells and user devices.

Introduction: The Cloud and User Notification

In traditional surveillance systems, law enforcement had to approach the owners of footage directly in order to gain access to it. In so doing, law enforcement informed owners of the fact that their footage was being requested and the scope of the request. This also served as de facto rate limiting of surveillance requests: a certain amount of real-world legwork had to be done to gain access to private footage. Even then, access to the footage was most likely granted once, and subsequent requests would have to be made for more material.

With the advent of cloud storage, access to raw footage moved from individual, private surveillance systems to the cloud provider. Once footage is on the cloud, law enforcement can go straight to the cloud provider with a warrant for it, without informing the user. Storing footage on the cloud also makes it available to the provider’s employees, who can access it without the user’s permission.

End-to-End Encryption Can Protect Footage and Feeds

End-to-end encryption (E2EE) allows devices to communicate with one another directly with the assurance of security and authenticity. This means that a user can encrypt data in a way that only the direct recipient can decrypt, and no one else—including the cloud storage provider that she uploads to and the manufacturer of the device she uses—can see what was sent. All they'll see is undecipherable "ciphertext."
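Here is a minimal sketch of that property using the PyNaCl library's "sealed box" construction. The example is ours, not any vendor's code, and a production system would add key management around it.

```python
# Minimal E2EE sketch with PyNaCl (pip install pynacl): anyone can encrypt
# to the recipient's public key, but only her private key can decrypt.
from nacl.public import PrivateKey, SealedBox

recipient_key = PrivateKey.generate()          # stays on the user's device

# A sender (or the user's own camera) encrypts to the *public* key.
sealed = SealedBox(recipient_key.public_key).encrypt(b"sensitive footage chunk")

# The cloud provider stores `sealed`, but sees only undecipherable ciphertext.
plaintext = SealedBox(recipient_key).decrypt(sealed)
assert plaintext == b"sensitive footage chunk"
```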

Usually, end-to-end encryption happens between two or more devices owned by different people, and is implemented in communication apps like Signal and WhatsApp. E2EE can also happen between two devices owned by the same person to share sensitive data. Backup services SpiderOak One and Tresorit use E2EE to back up files to the cloud securely, and password managers like Dashlane and LastPass use it to protect the passwords they store in the cloud. The backed-up files or passwords will be retrievable by multiple devices the user owns, but not by any other device. Not only does this protect the communication from the employees of these services, it also means that data breaches like the one LastPass experienced in 2015 do not result in any compromise of the sensitive encrypted data.

Ring has already experienced its share of data breaches and hacks in recent months, and responded by blaming its users and downplaying the dangers. The data breach divulged username and password information for 3,600 customers, which put these users' footage in direct reach of hackers and the shadiest of data miners. Employees of Ring were found spying on customers through their doorbell cameras. Ring's history of lax security has made it the subject of a number of lawsuits, and a salient target for future hacks. To turn the tide and show that it's serious about security, the absolute best thing Ring could do is employ E2EE in its video feeds and AWS-based storage.

Not only would employing E2EE protect users against their footage being divulged by a hack on user accounts or the AWS cloud, it would also implement just the kind of measure EFF calls for in ensuring law enforcement is required to request data directly from device owners. In E2EE schemes, the keys for the encrypted data are stored on users’ devices directly, and are not held by the service provider. If a member of law enforcement wishes to obtain footage from a user’s camera, they would have to ask the device owner to hand that footage over. This means that Ring would no longer provide law enforcement with a national video and audio surveillance system, since footage would have to be requested and delivered on an individual basis. It would be a return to the benefits of the de facto rate limiting that traditional surveillance systems provided, while retaining the convenience on which Ring’s success hinges.

Moreover, this wouldn't necessarily be a very difficult system to implement. E2EE video feeds have been implemented in open-source encrypted communication platforms like Signal already, and encrypting stored video files with end-to-end can easily be done with inclusion of libraries made for just this purpose. In fact, some services seem to specialize in helping businesses with exactly this transition, intending to facilitate compliance with privacy legislation like HIPAA and GDPR.

Implementation Scheme Suggestion

Readers not interested in specific technical suggestions can safely skip this section. The TL;DR: an E2EE scheme for home security systems can and should be implemented.

So, how would such a system be implemented? In this implementation suggestion, specific details such as the choice of algorithm or key size are omitted; best practices should be followed for these. The intention is not to provide a spec, but rather to give a broad overview of the various pieces of infrastructure and how they could communicate with the assurances E2EE provides.

Keybase provides a good template for how to ensure key material is not lost through user mismanagement: share key material between multiple devices, and use physical artifacts like paper copies in combination with digital devices to provide a guarantee of secure key redundancy.

The doorbell device, upon first activation, would generate a new doorbell keypair. The public key for the doorbell can then be shared, via a direct connection in a shared trusted network setting, with the smart device app where the feed and videos will be viewed.

Likewise, the first smart device app connecting to the doorbell will generate a keypair, and communicate its public key to the doorbell in the same shared trusted network setting. The user would also be prompted to back up their key with a paper copy, and tips on best practices in physical security could be communicated.
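A minimal sketch of that paper-copy step, assuming PyNaCl keypairs and a plain hex encoding; a real product would likely render the secret as a word list for easier transcription.

```python
# Hedged sketch: serialize the app's private key for a paper backup, then
# restore the identical keypair on a replacement device.
from nacl.encoding import HexEncoder
from nacl.public import PrivateKey

app_key = PrivateKey.generate()

# The paper copy is the 32-byte secret rendered printably (hex here).
paper_copy = app_key.encode(encoder=HexEncoder).decode()
print("Write this down and store it somewhere safe:", paper_copy)

# Later, typing the paper copy back in reproduces the same keypair.
restored = PrivateKey(paper_copy.encode(), encoder=HexEncoder)
assert bytes(restored.public_key) == bytes(app_key.public_key)
```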

If the doorbell has a speaker, it can at this point read off the digest of the doorbell public key concatenated with the digest of the app public key it received. This is the equivalent of safety numbers in Signal, and for the sake of usability the digest could be mapped onto the diceware word list to generate a series of words, as sketched below. This should be verified by the user on the smart device app. Otherwise, trust in the public keys will have to be derived from the trusted network setting in which they were exchanged.
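A rough sketch of how those verification words could be computed. The tiny word list, the eight-word length, and the simple modulo indexing are all illustrative simplifications; a real implementation would use the full diceware list and unbiased indexing.

```python
# Hedged sketch: both sides hash the two public keys in a fixed order and
# render the digest as words, so a human can compare them by ear.
import hashlib

WORDS = ["alpha", "bravo", "cedar", "delta", "ember", "frost",
         "gamma", "harbor", "indigo", "juniper", "kilo", "lunar",
         "meadow", "nickel", "orbit", "piano"]

def safety_words(doorbell_pub: bytes, app_pub: bytes, count: int = 8) -> list:
    digest = hashlib.sha256(doorbell_pub + app_pub).digest()
    return [WORDS[byte % len(WORDS)] for byte in digest[:count]]

# The doorbell reads its list aloud; the app displays its own computation.
# If they match, both sides hold the same pair of public keys.
print(" ".join(safety_words(b"doorbell-public-key", b"app-public-key")))
```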

If a secondary smart device is added to the account, it should be linked with the primary smart device. Since the secondary device does not yet trust the primary device, secure key discovery has to be performed. The primary device should generate a symmetric key and display it to the user in the form of a code. It should then send a copy of the app keypair, encrypted with the symmetric key, to the secondary device, which will prompt the user to enter the code to decrypt and start using the keypair (see the sketch below). Alternatively, the secondary device could derive the keypair from the paper copy.
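A sketch of that linking step with PyNaCl. Using the raw symmetric key itself as the user-entered code is a simplification for illustration; a production design would derive a short code through a PAKE or similar protocol.

```python
# Hedged sketch: the primary device wraps the app's private key with a
# one-time symmetric key and shows that key to the user as a code; the
# secondary device unwraps the keypair once the user types the code in.
from nacl.public import PrivateKey
from nacl.secret import SecretBox
from nacl.utils import random

app_key = PrivateKey.generate()                  # the keypair to share

# Primary device: generate and display the one-time linking code.
link_key = random(SecretBox.KEY_SIZE)
code_shown_to_user = link_key.hex()
wrapped = SecretBox(link_key).encrypt(bytes(app_key))   # sent to new device

# Secondary device: the typed-in code decrypts the keypair.
unwrapped = SecretBox(bytes.fromhex(code_shown_to_user)).decrypt(wrapped)
assert bytes(PrivateKey(unwrapped).public_key) == bytes(app_key.public_key)
```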

Any additional devices added to the account can follow the same process to receive the app keypair.

At this point, trust is established between the doorbell and all connected devices’ apps.

Upon activation, the doorbell would begin recording video (and possibly audio). The video should be encrypted with a random, newly generated symmetric key. This symmetric key should then be encrypted to the app public key, signed with the doorbell private key, and saved as a separate file. Both files should then be stored on the cloud. Upon accessing the video, the app decrypts the symmetric key with its private app key, and uses that to decrypt and view the video. To share the video, the app can decrypt the symmetric key and share that with the server or whoever is requesting access.
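Here is a compact sketch of that flow in PyNaCl. All names are illustrative, and a real system would add chunking, nonce and key rotation, and metadata protection; the point is only that each piece uses a standard primitive.

```python
# Hedged sketch of the storage scheme: encrypt the clip with a fresh
# symmetric key, encrypt that key to the app's public key, and sign it
# with the doorbell's key so the app can verify the clip's origin.
from nacl.public import PrivateKey, SealedBox
from nacl.secret import SecretBox
from nacl.signing import SigningKey
from nacl.utils import random

doorbell_key = SigningKey.generate()       # doorbell keypair (signing half)
app_key = PrivateKey.generate()            # app keypair

def encrypt_clip(video: bytes):
    clip_key = random(SecretBox.KEY_SIZE)              # per-clip random key
    encrypted_video = SecretBox(clip_key).encrypt(video)
    wrapped_key = SealedBox(app_key.public_key).encrypt(clip_key)
    signed_key = doorbell_key.sign(wrapped_key)        # proves origin
    return encrypted_video, signed_key                 # both go to the cloud

def decrypt_clip(encrypted_video: bytes, signed_key: bytes) -> bytes:
    wrapped_key = doorbell_key.verify_key.verify(signed_key)  # raises if forged
    clip_key = SealedBox(app_key).decrypt(wrapped_key)
    return SecretBox(clip_key).decrypt(encrypted_video)

clip, key_file = encrypt_clip(b"raw H.264 frames...")
assert decrypt_clip(clip, key_file) == b"raw H.264 frames..."
```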

Live video can also be provided to devices by encrypting it to the app public key directly and signing it with the doorbell private key. Likewise, if a two-way communication is desired, the app can encrypt any audio sent back to the doorbell with the doorbell public key.

Why Ring Won’t Want To Implement This...

Ring brands itself as a security-focused company, despite its digital security record. It handles the footage of millions of customers. Given the benefits to its customers, use of E2EE would seem like a natural next step. We hope Ring takes this step for its customers—it would certainly be a welcome turn for a company plagued by recent bad press following its insecure practices. It would show a real willingness to protect its customers and their data. But unfortunately, Ring currently has a direct incentive not to implement this strong security measure.

In the spring of 2018, Ring began a partnership program with police departments across the U.S. This program has expanded dramatically since its introduction, to over 900 departments. Ring has carefully cultivated these relationships, with the expectation that troves of information from Ring’s system will be available to law enforcement. Additionally, these relationships are largely secretive, with agreements requiring that confidentiality be maintained.

Ring has also expressed interest in implementing facial recognition for footage. In our post last week, we expressed serious concerns about this technology, including that it exacerbates racial bias and over-policing. In order to perform identification using Amazon’s facial recognition infrastructure, Ring would need unencrypted access to user footage.

Privacy advocates and customers face an uphill battle to convince Ring to implement these features. In the past, Ring has been slow to address user security and privacy concerns. Its incentives currently favor maintaining and expanding the partnerships it has built, utilizing Amazon’s infrastructure to process the footage it possesses. It will take a significant reprioritization of its customers over its partnerships for Ring to take the next step forward.

...And A Competitor Just Might

Luckily, Ring isn't the only game in town. The field of smart home-security systems is filled with competitors, such as Google's Nest. These competitors, for whatever reason, haven't been as willing or able to build out a mass surveillance system for police use. This leaves them unencumbered by the agreements and expectations Ring has tied itself down with. A competitor in the field could implement a system that provided E2EE guarantees to its customers, protecting their feeds and footage in a very comprehensive way—from nosy employees, malicious hackers, and police agents all too eager to have this mass of data at their fingertips.

Whoever ends up implementing this forward-thinking system would signal that they are ready to take the sensitive data of their customers seriously. This would be a big step forward for the privacy and security of not just device owners, but also the community as a whole.

Empty Promises Won’t Save the .ORG Takeover

Wed, 02/26/2020 - 17:45

The Internet Society’s (ISOC) November announcement that it intended to sell the Public Interest Registry (PIR, the organization that oversees the .ORG domain name registry) to a private equity firm sent shockwaves through the global NGO sector. The announcement came just after a change to the .ORG registry agreement—the agreement that outlines how the registry operator must run the domain—that gives PIR significantly more power to raise registration fees and implement new measures to censor organizations’ speech.

It didn’t take long for the global NGO sector to put two and two together: take a new agreement that gives the registry owner power to hurt NGOs; combine it with a new owner whose primary obligation is to its investors, not its users; and you have a recipe for danger for nonprofits and NGOs all over the world that rely on .ORG. Since November, over 800 organizations and 24,000 individuals from all over the world have signed an open letter urging ISOC to stop the sale of PIR. Members of Congress, UN Special Rapporteurs, and US state charity regulators [pdf] have raised warning flags about the sale.

Take Action: Stand up for .ORG


Ethos Capital—the mysterious private equity firm trying to buy PIR—has heard the outcry. Ethos and PIR attempted last week to convene a secret meeting with NGO sector stakeholders to build support for the sale, and then abruptly canceled it. Ethos finally responded last Friday with an announcement that its promises to limit price increases and establish a “stewardship council” would be written into the registry agreement. But Ethos’s weak commitments don’t really address the NGO community’s concerns. Rather, they appear to be window-dressing: PIR would choose the initial council members itself and be able to veto future nominations, ensuring that the council remains in line with PIR’s management. And the limit on price increases still allows Ethos to double the fee over the next eight years, with no restrictions at all after that.

In a letter published today in the Nonprofit Times, EFF Executive Director Cindy Cohn and NTEN CEO Amy Sample Ward explain why the proposed measures wouldn’t solve the many problems with the sale of .ORG:

The proposed “Stewardship Council” would fail to protect the interests of the NGO community. First, the council is not independent. The Public Interest Registry (PIR) board’s ability to veto nominated members would ensure that the council will not include members willing to challenge Ethos’ decisions. PIR’s handpicked members are likely to retain their seats indefinitely. The NGO community must have a real say in the direction of the .ORG registry, not a nominal rubber stamp exercised by people who owe their position to PIR.

Moreover, your proposal gives the Stewardship Council authority only to veto changes to existing PIR policies concerning 1) use or disclosure of registration data or other personal data of .ORG domain name registrants and users, and, 2) appropriate limitations and safeguards regarding censorship of free expression in the .ORG domain name space. While these areas are important, they do not represent the universe of issues where PIR’s interest might diverge from the interest of .ORG domain registrants.

And even within these areas, PIR would “reserve the right at all times in [its] sole judgment to take actions consistent with PIR’s Anti-Abuse Policy and to ensure compliance with applicable laws, policies and regulations.” In other words, the Stewardship Council would be powerless to intervene so long as PIR maintained that a given act of censorship was consistent with its Anti-Abuse Policy or required by some government.

Cohn and Ward go on to explain that the limits on fee increases also fail to provide comfort to NGOs worried about price hikes: if Ethos raises prices as allowed by the proposed rules, the price of .ORG registrations would more than double over eight years. After those eight years, there would be no limits whatsoever, and the stewardship council would be expressly barred from trying to limit future increases. “The relevant question is not what registrants can afford; it’s why a for-profit company should siphon ever more value from the NGO sector year after year. The cost of operating a registry has gone down since 2002, not up.”
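For readers who want the arithmetic behind that claim: assuming the reported cap of 10% average annual increases and a wholesale fee of roughly $10, eight years of maximal increases compound as follows.

```latex
% Compounding the reported 10% annual cap over eight years
% (the ~\$10 baseline is an approximation of the current wholesale fee):
\[
  \$10 \times 1.10^{8} \approx \$21.44
\]
% More than double the current fee, with no cap at all after year eight.
```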

The sale of the .ORG registry affects every nonprofit and NGO you care about. If you believe that .ORG should listen to its users, not to investors, then please join us in demanding a stop to the sale.

Take Action: Stand up for .ORG

EFF Files Comments Criticizing Proposed CCPA Regulations

Tue, 02/25/2020 - 17:17

Today, EFF joined a coalition of privacy advocates in filing comments with the California Attorney General regarding its ongoing rulemaking process for the California Consumer Privacy Act (CCPA). The CCPA was passed in 2018, and took effect on January 1, 2020. Later this year, the Attorney General (AG) will finalize regulations that dictate how exactly the law will be enforced.

Last time we weighed in, we called the AG’s initial proposed regulations a “good step forward” but encouraged them to go further. Now, we are disappointed that the latest proposed regulations are, compared to the AG’s initial proposal, largely a step backwards for privacy.

To start, the modified regulations improperly reduce the scope of the CCPA by trying to carve out certain identifiers (such as IP addresses) from the definition of “personal information.” This classifies potentially sensitive information as outside the law’s reach—and denies Californians the right to access, delete, or opt out of the sale of that information.

Furthermore, the new regulations make it harder for consumers to exercise their right to opt out of the sale of their personal information. The proposed opt-out icon, which businesses will be required to display on their websites, is confusing; independent research has shown that many users don’t understand what it means. Worse, the new regulations provide that user-friendly, automatic controls like Do Not Track (DNT) cannot be used to opt out of data sale. Today, millions of users around the world use DNT to signal their clear intent to opt out of the collection, misuse, sharing, and sale of their data. Until now, few companies have chosen to honor that intent, but the CCPA gives user requests to opt-out of data sale the force of law. The AG should make sure that businesses treat well-established signals like DNT as an opt-out from sale of their data.
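To illustrate how little engineering honoring the signal would require, here is a hypothetical server-side sketch using only Python's standard library; the handler and its policy response are our assumptions, not anything the regulations prescribe.

```python
# Hedged sketch: a toy HTTP handler that treats the browser's "DNT: 1"
# header as a request to opt out of data sale for the session.
from http.server import BaseHTTPRequestHandler, HTTPServer

class DNTAwareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Browsers send "DNT: 1" when the user enables Do Not Track.
        opted_out = self.headers.get("DNT") == "1"
        body = (b"Opt-out honored: data sale disabled for this session.\n"
                if opted_out else b"No opt-out signal received.\n")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), DNTAwareHandler).serve_forever()
```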

Our coalition letter details a number of other changes to the original draft regulations that reduce consumer protections. We urge the Attorney General to reconsider these changes and make sure CCPA does what it’s supposed to: protect Californians’ privacy. 

The other seven privacy organizations that joined the coalition comments are ACLU, Campaign for a Commercial Free Childhood, Common Sense Media, Consumer Federation of America, Media Alliance, Oakland Privacy, and Privacy Rights Clearinghouse.

Apple, Tell Us More About Your App Store Takedowns

Mon, 02/24/2020 - 16:24

EFF and 10 human rights organizations called out Apple for enabling China's censorship and surveillance regime through overly broad content restrictions on the App Store in China, and for its decision to move iCloud backups and encryption keys into China. In a letter to Philip Schiller, Apple senior vice president and App Store lead, the groups asked for more transparency about App Store takedowns and requested a meeting with Apple executives to discuss the company's decisions and ways Apple can rectify harms to the Apple users most affected by the removals.

Apple removed thousands of applications in China, including news apps by Quartz and the New York Times, foreign software services like Google Earth, and network applications like Tor and other VPN apps. Last year, Apple capitulated to state pressure to remove HKmap.live, a crowdsourced map application being used by Hong Kong protestors. 

According to Apple’s transparency report, hundreds of applications were taken down in the first half of 2019 due to “pornography or illegal content.” In this case, “illegal content” spans the breadth of China’s own draconian Internet security laws. In the letter, EFF and its partners asked that if Apple removes apps due to violations of local law, the company pressure governments to be specific, transparent, and consistent in their requirements, and work to provide app developers and the general public with written documentation of the specific law violated, as well as the authority that is requiring the app’s removal. If Apple wishes to claim it is obliged to remove content in line with local law, it needs to prove that the law is actually behind the takedown, as opposed to the company caving to informal pressure and its own commercial incentives to stay in the good books of the Chinese leadership.

The locked-down design of the App Store gives Apple unilateral control over its contents; unlike Android users, iOS users can’t sideload applications without first jailbreaking their phones entirely. Apple justifies its DRM-laden walled garden by saying it sets stronger security and privacy standards for applications by reviewing them. However, by centralizing that power, Apple has given governments a lever to impose their own measures of control over billions of users.

Court Report Provides New Details About How Federal Law Enforcement in Seattle Obtain Private Information Without Warrants

Mon, 02/24/2020 - 12:20

Federal law enforcement in Seattle sought an average of one court order a day for the disclosure of people’s sensitive information, such as calling history, in the first half of 2019, according to a report released this year.

The report, the first of its kind by the U.S. District Court for the Western District of Washington, shows that officials sought 182 applications and orders for electronic surveillance between January and June 2019. These types of surveillance orders do not require law enforcement to get a warrant and are directed to third parties like phone companies, email providers, and other online services to demand private and revealing information about their users.

Although the report does not provide specifics on the services or individuals targeted by the surveillance orders, it does detail how federal law enforcement in the region are using various forms of surveillance. The report is a result of a lawsuit by EFF’s client The Stranger.

EFF’s review of the report shows that officials sought the largest share of the orders (89 of them) under the Pen Register Act, a law that allows investigators to collect details about who people communicate with via phone and Internet services. Another 69 orders sought information under the Stored Communications Act, which can be used to compel disclosure of user account information and other non-content records from a communications service provider.

Investigators also sought 24 so-called “hybrid” orders in which the applications rely on both the Pen Register Act and Stored Communications Act to obtain information. Although the report does not provide details about what records the hybrid orders sought, these requests are inherently questionable because they are not explicitly authorized by federal law.

Around 2005, creative lawyers at the Department of Justice devised the concept of a hybrid order as a novel legal theory to track people’s real-time locations without a warrant, though many federal magistrate judges ultimately rejected it. Moreover, the legal basis for using hybrid orders to obtain real-time location data is undercut by the Supreme Court’s decision in Carpenter v. United States, which required police to obtain a warrant before obtaining historical cell-site location data.

Although we do not know for certain what sort of information the hybrid orders in the Seattle report sought, it’s possible that they were used to map a target’s communications network by recording data about who they contact and obtaining basic information about those individuals, an invasive practice sometimes called contact chaining.

Another interesting facet of the newly released report is that, by EFF’s count, 46 of the orders law enforcement sought were in support of criminal investigations under the Computer Fraud and Abuse Act (CFAA). The report states that one of the SCA orders used as part of a CFAA investigation targeted 39 accounts, and another targeted 44 accounts.

EFF has long advocated for fixing the CFAA because its undefined terms and onerous penalties can be used to target behavior that law enforcement and civil litigants simply do not like, rather than malicious hacking. So the prevalence of CFAA investigations piqued our interest, and we look forward to seeing whether a similar trend appears in future reports released by the court.

The information disclosed in the report represents a crucial first step toward greater transparency about these types of non-warrant surveillance orders. Prior to EFF representing The Stranger in a 2017 lawsuit that sought to make these orders public, law enforcement was filing the requests in secret and no one, not even the court, could say how many applications had been filed.

As a result of The Stranger’s suit, federal prosecutors and the court agreed to start a two-year pilot program that changed how the court docketed these surveillance applications and orders so that they could be tracked. The agreement also required the court to publish a report every six months that provided basic information about the applications.

EFF thanks the court and federal prosecutors for creating greater transparency, and we look forward to comparing this first report with the others that will be coming soon. We would also like to thank The Stranger for its efforts, as well as our co-counsel Geoffrey M. Godfrey, Nathan T. Alexander, and David H. Tseng of Dorsey & Whitney LLP’s Seattle, Washington office.

Related Cases: The Stranger Unsealing