
Podcast Episode: Who Should Control Online Speech?

EFF - Tue, 11/30/2021 - 03:00
Episode 103 of EFF’s How to Fix the Internet

The bots that try to moderate speech online are doing a terrible job, and the humans in charge of the biggest tech companies aren’t doing any better. The internet’s promise was as a space where everyone could have their say. But today, just a few platforms decide what billions of people see and say online. 

Join EFF’s Cindy Cohn and Danny O’Brien as they talk to Stanford’s Daphne Keller about why the current approach to content moderation is failing, and how a better online conversation is possible. 

Click below to listen to the episode now, or choose your podcast player:


More than ever before, societies and governments are requiring a small handful of companies, including Google, Facebook, and Twitter, to control the speech that they host online. But that comes with a great cost in both directions -- marginalized communities are too often silenced and powerful voices pushing misinformation are too often amplified.

Keller talks with us about some ideas on how to get us out of this trap and back to a more distributed internet, where communities and people decide what kind of content moderation we should see—rather than tech billionaires who track us for profit or top-down dictates from governments. 

When the same image appears in a terrorist recruitment context, but also appears in counter speech, the machines can't tell the difference.

You can also find the MP3 of this episode on the Internet Archive.

In this episode you’ll learn about: 

  • Why giant platforms do a poor job of moderating content and likely always will
  • What competitive compatibility (ComCom) is, and how it’s a vital part of the solution to our content moderation puzzle, though it raises some issues we’ll need to solve too
  • Why machine learning algorithms won’t be able to figure out who or what a “terrorist” is, and who it’s likely to catch instead
  • What the debate over “amplification” of speech is, and whether it’s any different from our debate over speech itself
  • Why international voices need to be included in discussion about content moderation—and the problems that occur when they’re not
  • How we could shift towards “bottom-up” content moderation rather than a concentration of power 

Daphne Keller directs the Program on Platform Regulation at Stanford’s Cyber Policy Center. She’s a former Associate General Counsel at Google, where she worked on groundbreaking litigation and legislation around internet platform liability. You can find her on Twitter @daphnehk. Keller’s most recent paper is “Amplification and its Discontents,” which talks about the consequences of governments getting into the business of regulating online speech and the algorithms that spread it.

If you have any feedback on this episode, please email

Below, you’ll find legal resources – including links to important cases, books, and briefs discussed in the podcast – as well as a full transcript of the audio.


Legal resources:

  • Content Moderation
  • AI/Algorithms
  • Takedown and Must-Carry Laws
  • Adversarial Interoperability

Transcript of Episode 103: Putting People in Control of Online Speech

Daphne: Even if you try to deploy automated systems to figure out which speech is allowed and disallowed under that law, bots and automation and AI and other robot magic, they fail in big ways consistently.

Cindy: That’s Daphne Keller, and she’s our guest today. Daphne works out of the Stanford Center for Internet and Society and is one of the best thinkers about the complexities of today’s social media landscape and the consequences of these corporate decisions.

Danny: Welcome to How to Fix the Internet with the Electronic Frontier Foundation, the podcast that explores some of the biggest problems we face online right now: problems whose source and solution is often buried in the obscure twists of technological development, societal change, and the subtle details of internet law.

Cindy: Hi everyone I'm Cindy Cohn and I'm the Executive Director of the Electronic Frontier Foundation. 

Danny: And I’m Danny O’Brien, special advisor to the Electronic Frontier Foundation.

Cindy: I'm so excited to talk to Daphne Keller because she's worked for many years as a lawyer defending online speech. She knows all about how platforms like Facebook, TikTok, and Twitter crack down on controversial discussions and how they so often get it wrong. 

Hi Daphne, thank you for coming. 

Daphne: First, thank you so much for having me here. I am super excited. 

Cindy: So tell me: how did the internet become a place where just a few platforms get to decide what billions of people get to see and not see, and why do they do it so badly?

Daphne: If you rewind twenty, twenty-five years, you have an internet of widely distributed nodes of speech. There wasn't a point of centralized control, and many people saw that as a very good thing. At the same time the internet was used by a relatively privileged slice of society, and so what we've seen change since then, first, is that more and more of society has moved online. So that's one big shift: the world moved online—the world and all its problems. The other big shift is really consolidation of power and control on the internet. Even 15 years ago much more of what was happening online was on individual blogs distributed across webpages, and now so much of our communication, where we go to learn things, is controlled by a pretty small handful of companies, including my former employer Google, and Facebook and Twitter. And that's a huge shift, particularly since we as a society are asking those companies to control speech more and more, and maybe not grappling with what the consequences will be of our asking them to do that.

Danny: Our model of how content moderation should work, where you have people looking at the comments that somebody has made and then picking and choosing, was really developed in an era where you assumed that the person making the decision was a little bit closer to you—that it was the person running your neighborhood discussion forum, or just editing comments on their own blog.

Daphne: The sheer scale of moderation on a Facebook for example means that they have to adopt the most reductive, non-nuanced rules they can in order to communicate them to a distributed global workforce. And that distributed global workforce inevitably is going to interpret things differently and have inconsistent outcomes. And then having the central decision-maker sitting in Palo Alto or Mountain View in the US subject to a lot of pressure from say, whoever sits in the White House, or from advertisers, means that there's both a huge room for error in content moderation, and inevitably policies will be adopted that 50% of the population thinks are the wrong policies. 

Danny: So when we see platform leaders like Mark Zuckerberg go before the American Congress and answer questions from senators, one of the things that I hear them say again and again is: we have algorithms that sort through our feeds, we're developing AI that can identify nuances in human communication. Why does it appear that they've failed so badly to create a bot that reads every post, picks and chooses which are the bad ones, and throws them off?

Daphne: Of course the starting point is that we don't agree on what the good ones are and what the bad ones are. But even if we could agree, even if you're talking about a bot that's supposed to enforce a speech law, a speech law which is something democratically enacted, and presumably has the most consensus behind it and the crispest definition, they fail in big ways consistently. You know, they set out to take down ISIS and instead they take down the Syrian Archive, which exists to document war crimes for a future prosecution. The machines make mistakes a lot, and those mistakes are not evenly distributed. We have an increasing body of research showing disparate impact, for example on speakers of African-American English, and so there are just a number of errors that hit not just on free expression values but also on equality values. There's a whole bunch of societal concerns that are impacted when we try to have private companies deploy machines to police our speech.

Danny: What kind of errors do we see machine learning making particularly in the example of like tackling terrorist content? 

Daphne: So I think the answers are slightly different depending which technologies we're talking about. A lot of the technologies that get deployed to detect things like terrorist content are really about duplicate detection. And the problems with those systems are that they can't take context into account. So when the same image appears in a terrorist recruitment context but also appears in counter speech the machines can't tell the difference.

Danny: And when you say counter-speech, you are referring to the many ways that people speak out against hate speech.

Daphne: They're not good at understanding things like hate speech because the ways in which humans are terrible to each other using language evolve so rapidly, and so do the ways that people try to respond to that, undermine it, and reclaim terminology. I would also add that most of the companies we're talking about are in the business of selling things like targeted advertisements, and so they very much want to promote a narrative that they have technology that can understand content, that can understand what you want, that can understand what this video is and how it matches with this advertisement, and so forth.

Cindy: I think you're getting at one of the underlying problems we have, which is the lack of transparency by these companies and the lack of due process when they do take-downs. Those seem to me to be pretty major pieces of why the companies not only get it wrong but then double down on getting it wrong. There have also been proposals to put in strict rules in places like Europe so that if a platform takes something down, it has to be transparent and offer the user an opportunity to appeal. Let’s talk about that piece.

Daphne: So those are all great developments, but I'm a contrarian. So now that I've got what I've been asking for for years, I have problems with it. My biggest problem, really, has to do with competition. Because I think the kinds of more cumbersome processes that we absolutely should ask for from the biggest platforms can themselves become a huge competitive advantage for the incumbents, if they are things that the incumbents can afford to do and smaller platforms can't. And so the question of who should get what obligations is a really hard one, and I don't think I have the answer. Like I think you need some economists thinking about it, talking to content moderation experts. But I think if we invest too hard in saying every platform has to have the maximum possible due process and the best possible transparency, we actually run into a conflict with competition goals, and we need to think harder about how to navigate those two things.

Cindy: Oh I think that's a tremendously important point. It's always a balancing thing especially around regulation of online activities, because we want to protect the open source folks and the people who are just getting started or somebody who has a new idea. At the same time, with great power comes great responsibility, and we want to make sure that the big guys are really doing the right thing, and we also really do want the little guys to do the right thing too. I don't want to let them entirely off the hook but finding that scale is going to be tremendously important.  

Danny: One of the concerns that is expressed is less about the particular content of speech, and more about how false speech or hateful speech tends to spread more quickly than truthful or calming speech. So you see a bunch of laws or technical proposals around the world trying to mess around with that aspect. To give something specific: there's been pressure on group chats like WhatsApp in India and Brazil and other countries to limit how easy it is to forward messages, or to have some way for the government to see messages that are being forwarded a great deal. Is that the kind of regulatory tweak that you're happy with, or is that going too far?

Daphne: Well I think there may be two things to distinguish here: one is when WhatsApp limits how many people you can share a message with or add to a group. They don't know what the message is, because it is encrypted, and so they're imposing this purely quantitative limit on how widely people can share things. What we see more and more in the US discussion is a focus on telling platforms that they should look at what content is, and then change what they recommend or what they prioritize in a newsfeed based on what the person is saying. For example, there's been a lot of discussion in the past couple of years about whether YouTube's recommendation algorithm is radicalizing. You know, if you search for vegetarian recipes, will it push you to vegan recipes—or much more sinister versions of that problem. I think it's extremely productive for platforms themselves to look at that question, to say, hey wait, what is our amplification algorithm doing? Are there things we want to tweak so that we are not constantly rewarding our users' worst instincts? What I see that troubles me, and that I wrote a paper on recently called Amplification and its Discontents, is this growing idea that this is also a good thing for governments to do. That we can have the law say: hey platforms, amplify this, and don't amplify that. This is an appealing idea to a lot of people because they think maybe platforms aren't responsible for what their users say, but they are responsible for what they themselves choose to amplify with an algorithm.

All the problems that we see with content moderation are the exact same problems we would see if we applied the same obligations to what they amplify. The point isn't that you can never regulate any of these things; we do in fact regulate them. US law says if platforms see child sexual abuse material, for example, they have to take it down. We have a notice and takedown system for copyright. It's not that we live in a world where laws can never have platforms take things down, but those laws run into this very well-known set of problems: over-removal, disparate impact, invasion of privacy, and so forth. And you get those exact same problems with amplification laws.

Danny: We’ve spent some time talking about the problems with moderation and competition, and we know there are legal and regulatory options around what goes on social media that are being applied now and figured out for the future. Daphne, can we move on to how it’s being regulated now?

Daphne: Right now we're going from zero government guidelines on how any of this happens to government guidelines so detailed that they take 25 pages to read and understand, plus there will be additional regulatory guidance later. I think we may come to regret that: going from having zero experience with trying to set these rules to making up what sounds right in the abstract based on the little that we know now, with inadequate transparency and inadequate basis to really make these judgment calls. I think we're likely to make a lot of mistakes, but put them in laws that are really hard to change.

Cindy: Where on the other hand, you don't want to stand for no change, because the current situation isn't all that great either. This is a place where we perhaps need a balance between the way the Europeans think about things, which is often more highly regulatory, and the American let-the-companies-do-what-they-want strategy. Like we kind of need to chart a middle path.

Danny: Yeah, and I think this raises another issue, which is that every country is struggling with this problem, which means that every country is thinking of passing rules about what should happen to speech. But it's the nature of the internet, and one of its advantages, or it should be, that everyone can talk to one another. What happens when speech in one country is being listened to in another, with two different jurisdictional rules? Is that a resolvable problem?

Daphne: So there are a couple of versions of that problem. The one that we've had for years is: what if I say something that's legal to say in the United States but illegal to say in Canada or Austria or Brazil? And so we've had a trickle of cases, and more recently some more important ones, with courts trying to answer that question and mostly saying, yeah, I do have the power to order global take-downs, but don't worry, I'll only do it when it's really appropriate to do that. And I think we don't have a good answer. We have some bad answers coming out of those cases, like, hell yeah, I can take down whatever I want around the world, but part of the reason we don't have a good answer is because this isn't something courts should be resolving. The newer thing that's coming is kind of mind-blowing, you guys: we're going to have situations where one country says you must take this down, and the other country says you cannot take that down, you'll be breaking the law if you do.

Danny: Oh, and I think it's kind of counterintuitive sometimes to see who is making those claims. So for instance, I remember there being a huge furor in the United States when Donald Trump was taken off Twitter by Twitter. And in Europe it was fascinating, because most of the politicians there who were quite critical of Donald Trump were all expressing some concern that a big tech company could silence a politician, even though it was a politician they opposed. And I think the traditional idea of Europe is that they would not want the kind of content that Donald Trump emits on something like Twitter.

Cindy: I think this is one of the areas where it's not just national; the kind of global split that's happening in our society plays out in some really funny ways, because there are, as you said, these what we call must-carry laws. There was one in Florida as well, and EFF participated in getting an injunction against that one, at least. Must-carry laws are what we call a set of laws that require social media companies to keep something up, and give them penalties if they take something down. This is a direct flip of some of the things that people are talking about around hate speech and other things that require companies to take things down and penalize them if they don't.

Daphne: I don't want to geek out on the law too much here, but it feels to me like a moment when a lot of settled First Amendment doctrine could become shiftable very quickly, given things that we're hearing, for example, from Clarence Thomas who issued a concurrence in another case saying, Hey, I don't like the current state of affairs and maybe these platforms should have to carry things they don't want to.

Cindy: I would be remiss if I didn't point out that I think this is completely true as a policy matter. It's also the case as a First Amendment matter that this distinction between regulating the speech and regulating the amplification is something that the Supreme Court has looked at a lot of times and basically said it's the same thing. I think the fact that it's causing the same problems shows that this isn't just some First Amendment doctrine hanging out there in the air. The lack of a distinction in the law between whether you can say it and whether it can be amplified comes because they really do cause the same kinds of societal problems that free speech doctrine is trying to make sure don't happen in our world.

Danny: I was talking to a couple of Kenyan activists last week. And one of the things that they noted is that while the EU and the United States are fighting over what kind of amplification controls are lawful and would work, they're facing the situation where any law about amplification in their own country is going to silence the political opposition, because of course politics is all about amplification. Politics, good politics, is about taking the voice of a minority and making sure that everybody knows that something bad is happening to them. So I think that sometimes we get a little bit stuck in debating things from an EU angle or a US legal angle, and we forget about the rest of the world.

Daphne: I think we systematically make mistakes if we don't have voices from the rest of the world in the room to say, hey wait, this is how this is going to play out in Egypt, or this is how we've seen this work in Colombia. In the same way that, to take it back to content moderation generally, in-house content moderation teams make a bunch of really predictable mistakes if they're not diverse. If they are a bunch of college-educated white people making a lot of money and living in the Bay Area, there are issues they will not spot, and you need people with more diverse backgrounds and experience to recognize and plan around them.

Danny: Also, by contrast, if they're incredibly underpaid people who are doing this in a call center, having to hit ridiculous numbers and being traumatized by having to filter through the worst garbage on the internet, I think that's a problem too.

Cindy: My conclusion from this conversation so far is that having a couple of large platforms try to regulate and control all the speech in the world is basically destined to fail, and destined to fail in a whole bunch of different directions. But the focus of our podcast is not merely to name all the things broken with modern internet policy, but to draw attention to practical and even idealistic solutions. Let's turn to that.

Cindy: So you have dived deep into what we at EFF call adversarial interoperability or ComCom. This is the idea that users can have systems that operate across platforms, so for example you could use a social network of your choosing to communicate with your friends on Facebook without you having to join Facebook yourself. How do you think about this possible answer as a way to kind of make Facebook not the decider of everybody's speech?  

Daphne: I love it and I want it to work, and I see a bunch of problems with it. But I mean, part of why I love it is because I'm old and I love the distributed internet, where there weren't these chokehold points of power over online discourse. And so I love the idea of getting back to something more like that.

Cindy: Yeah. 

Daphne: You know, as a First Amendment lawyer, I see it as a way forward in a neighborhood that is full of constitutional dead ends. You know, we don't have a bunch of solutions to choose from that involve the government coming in and telling platforms what to do with more speech. Especially the kinds of speech that people consider harmful or dangerous, but that are definitely protected by the First Amendment, so the government can't pass laws about it. So getting away from solutions that involve top-down dictates about speech, towards solutions that involve bottom-up choices by speakers and by listeners and by communities about what kind of content moderation they want to see, seems really promising.

Cindy:  What does that look like from a practical perspective? 

Daphne: There are a bunch of models of this. You can envision it as what they call a federated system, like the Mastodon social network, where each node has its own rules. Or you can say, oh, you know, that goes too far, I do want someone in the middle who is able to honor copyright takedown requests or police child sexual abuse material, to be a point of control for things that society decides should be controlled.

You know, then you do something like what I've called magic APIs or what my Stanford colleague Francis Fukuyama has called middleware, where the idea is Facebook is still operating, but you can choose not to have their ranking or their content moderation rules, or maybe even their user interface and you can opt to have the version, from ESPN that prioritizes sports or from a Black Lives Matter affiliated group that prioritizes racial justice issues.

So you bring in competition in the content moderation layer, while leaving this underlying treasure trove of everything we've ever done and said on the internet sitting with today's incumbents.

Danny: What are some of your concerns about this approach? 

Daphne: I have four big practical problems. The first is: does the technology really work? Can you really have APIs that make all of this organization of massive amounts of data happen instantaneously, in distributed ways? The second is about money and who gets paid. And the last two are things I do know more about: one is about content moderation costs and one is about privacy. I unpack all of this in a recent short piece in the Journal of Democracy if people want to nerd out on this. But the content moderation costs piece is, you're never going to have all of these little distributed content moderators each have Chechen speakers and Arabic speakers and Spanish speakers and Japanese speakers. You know, so there's just a redundancy problem, where if all of them have to have all of the language capabilities to assess all of the content, that becomes inefficient. Or you're never going to have somebody who is enough of an expert in, say, American extremist groups to know what a Hawaiian shirt means this month versus what it meant last month.

Cindy: Yeah.

Daphne: Can I just raise one more problem with competitive compatibility or adversarial interoperability? And I raise this because I've just been in a lot of conversations with smart people who I respect who really get stuck on this problem, which is aren't you just creating a bunch of echo chambers where people will further self isolate and listen to the lies or the hate speech. Doesn't this further undermine our ability to have any kind of shared consensus reality and a functioning democracy? 

Cindy: I think that some of the early predictions about this haven't really come to pass in the way we were concerned about. I also think there are a lot of fears that are not really grounded in empirical evidence about where people get their information and how they share it, and that evidence needs to be brought into play here before we decide that we're just stuck with Facebook, and that our only real goal is to shake our fist at Mark Zuckerberg or write laws that will make sure he protects the speech I like and takes down the speech I don't like, because other people are too stupid to know the difference.

Daphne: If we want to avoid this echo chamber problem, is it worth the trade-off of preserving these incredibly concentrated systems of power over speech? Do we think nothing's going to go wrong with that? Do we think we have a good future with greatly concentrated power over speech by companies that are vulnerable to pressure from, say, governments that control access to lucrative markets, like China, which has gotten American companies to take down lawful speech? Companies that are vulnerable to commercial pressures from their advertisers, which are always going to be at best majoritarian. Companies that faced a lot of pressure from the previous administration and will face it from this and future administrations to do what politicians want. The worst case scenario to me of having continued, extremely concentrated power over speech looks really scary, and so as I weigh the trade-offs, that weighs very heavily. But it kind of goes to questions you almost want to ask a historian or a sociologist or a political scientist or Max Weber.

Danny: When I talk to my friends, or my wider circle of friends on the internet, it really feels like things are just about to veer into an argument at every point. I see this in Facebook comments, where someone will say something fairly innocuous, and we're all friends, but then it will spiral out of control. And I think about how rare that is when I'm talking to my friends in real life. There are enough cues there that people know, if we talk about this, then so-and-so is going to go on a big tirade. And I think the fix is a combination of coming up with new technologies, new ways of dealing with stuff on the internet, and also, as you say, better research, better understanding about what makes things spiral off in that way. And the best thing we can fix really is to change the incentives, because I think one of the reasons we've hit what we're hitting right now is that we have a handful of companies and they all have very similar incentives to do the same kind of thing.

Daphne: Yeah I think that is absolutely valid. I start my internet law class at Stanford every year by having people read Larry Lessig. He lays out this premise that what truly shapes people's behavior is not just laws, as lawyers tend to assume. It's a mix of four things, what he calls Norms, the social norms that you're talking about, markets, economic pressure, and architecture, by which he means software and the way that systems are designed to make things possible or impossible or easy or hard. What we might think of as product design on Facebook or Twitter today. And I think those of us who are lawyers and sit in the legal silo tend to hear ideas that only use one of those levers. They use the lever of changing the law, or maybe they add a changing technology, but it's very rare to see more systemic thinking that looks at all four of those levers, and how they have worked in combination to create problems that we've seen, like there are not enough social norms to keep us from being terrible to each other on the internet but also how those levers might be useful in proposals and ideas to fix things going forward.

Cindy: We need to create the conditions in which people can try a bunch of different ideas, and we as a society can try to figure out which ones are working and which ones aren't. We have some good examples. We know that Reddit, for instance, made some great strides in turning that place into something that has a lot more accountability. Part of what is exciting to me about ComCom and this middleware idea is not that they have the answer, but that they may open up the door to a bunch of things, some of which are going to be not good, but a couple of which might help us point the way forward towards a better internet that serves us. We may need to think about the next set of places where we go to speak as maybe not needing to be quite as profitable. I think we're doing this in the media space right now, where we're recognizing that maybe we don't need one or two giant media chains to present all the information to us. Maybe it's okay to have a local newspaper or a local blog that gives us the local news and that provides a reasonable living for the people who are doing it, but isn't going to attract Wall Street money and investment. I think that one of the keys to this is to move away from this idea that five big platforms make this tremendous amount of money. Let's spread that money around by giving other people a chance to offer services.

Daphne: I mean VCs may not like it but as a consumer I love it.

Cindy: And one of the ideas about fixing the internet around content moderation, hate speech, and these must-carry laws is really to try to create more spaces where people can speak that are a little smaller, and shrink the content moderation problem down to a size where we may still have problems, but they're not so pervasive.

Daphne: And on sites where social norms matter more. You know, that lever, the thing that stops you from saying horrible racist things in a bar or at church or to your girlfriend or at the dinner table: if that norms element of public discourse becomes more important online, by shrinking things down into manageable communities where you know the people around you, that might be an important way forward.

Danny: Yeah, I'm not an ass in social interactions not because there's a law against being an ass but because there's this huge social pressure and there's a way of conveying that social pressure in the real world and I think we can do that. 

Cindy: Thank you so much for all that insight Daphne and for breaking down some of these difficult problems into kind of manageable chunks we can begin to address directly. 

Daphne: Thank you so much for having me.

Danny: So Cindy, having heard all of that from Daphne, are you more or less optimistic about social media companies making good decisions about what we see online? 

Cindy: So I think if we're talking about today's social media companies and the giant platforms making good decisions, I'm probably just as pessimistic as I was when we started, if not more so. You know, Daphne really brought home how many of the problems we're facing in content moderation and online speech these days are the result of the consolidation of power and control of the internet in the hands of a few tech giants, and how the business models of these giants play into this in ways that are not good.

Danny: Yeah. And I think that the menu, the palette of potential solutions in this situation, is not great either. Like, I think the other thing that came up is, you watch governments all around the world recognize this as a problem, try to come in and fix the companies rather than fix the ecosystem. And then you end up with these very clumsy rules. Like, I thought the must-carry laws, where you go to a handful of companies and say you absolutely have to keep this content up, were such a weird fix when you start thinking about it.

Cindy: Yeah. And of course it's just as weird and problematic as "you must take this down, immediately." Neither of these directions is a good one. The other thing that I really liked was how she talked about the problems with this idea that AI and bots could solve the problem.

Danny: And I think part of the challenge here is that we have this big blob of problems, right? Lots of articles written about, oh, the terrible world of social media, and we need an instant one-off solution and Mark Zuckerberg is the person to do it. And I think that the very nature of conversation, the very nature of sociality, is that it is small scale, right? It is at the level of a local cafe.

Cindy: And of course, it leads us to the fixing part that we liked a lot, which is this idea that we try to figure out how do we redistribute the internet and redistribute these places so that we have a lot more local cafes or even town squares.

The other insight I really appreciated is kind of taking us back to, you know, the foundational thinking that our friend Larry Lessig did about how we have to think not just about law as a fix, and not just about code, how you build this thing, as a fix, but about all four things: law, code, social norms, and markets, as levers that we have to try to make things better online.

Danny: Yeah. And I think it comes back to this idea that we have, like, this big stockpile of all the world's conversations and we have to crack it open and redirect it to these smaller experiments. And I think that comes back to this idea of interoperability, right? There's been such an attempt, a reasonable commercial attempt, by these companies to create what the venture capitalists call a moat, right? Like this space between you and your potential competition. Well, we have to breach those moats, and breaching them involves either regulation or just people building the right tools: having interoperability between the past of social media giants and the future of millions and millions of individual social media places.

Cindy: Thank you to Daphne Keller for joining us today. 

Danny: And thank you for joining us. If you have any feedback on this episode, please email us. We read every email.

Music for the show is by Nat Keefe and Reed Mathis of BeatMower. 

“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. 

I’m Danny O’Brien.

 And I’m Cindy Cohn. Thank you for listening, until next time. 

Dream Job Alert: Media Relations Director for EFF

EFF - Mon, 11/29/2021 - 14:55

We’ve got an amazing opportunity for a senior media relations person to join the EFF team.

Right now, we are hiring for a Media Relations Director role, a leadership role that oversees and directs EFF’s press strategy and engagement. Join EFF and help explain to journalists and the world why civil liberties matter so much for the future of technology and society. Apply today.

We are open to many different types of candidates for this role. We are especially interested in meeting people who have had experience working in journalism themselves. We are also interested in people who have handled press strategy for other nonprofit advocacy or civil liberties organizations. But more than any particular experience, we’re looking for someone who is a great communicator, has terrific organizational skills, can manage a team, and who loves to solve problems and tell stories. And, we value diversity in background and life experiences. 

EFF is committed to supporting our employees. That’s why we’ve got competitive salaries, incredible benefits (including student loan reimbursement and fantastic healthcare), and ample policies for paid time off and holidays. We want this work to be sustainable and fun—so that you’ll be part of our organization for a long time.

Please check out our job description and apply today! And if you are a senior media relations professional or working journalist who has a question about this role, please email us.

Even if this job isn’t the right fit for you, please take a moment to spread the word on social media.

Our Patent Review System is Ten Years Old. It’s Time to Make It Stronger.

EFF - Mon, 11/29/2021 - 13:25

The U.S. Patent and Trademark Office (USPTO) grants more than 300,000 patents each year.  Some of those patent grants represent genuine new inventions, but many of them don’t. On average, patent examiners have about 18 hours to spend on each application. That’s not enough time to get it right. 

Thousands of patents get issued each year that never should have been issued in the first place. This is a particular problem in software, which is a bad fit for the patent system. That’s why it’s so critical that we have a robust patent review system. It gives companies that get threatened over patents the opportunity to get a second, more in-depth review of a patent—without spending the millions of dollars that a jury trial can cost. 

Our patent review system is ten years old now, and patent trolls and other aggressive patent holders have learned to game the system. Unfortunately, the USPTO has let them get away with it. A recently introduced bill, the Restoring the America Invents Act (S. 2891) will close some of the loopholes that patent owners have used to dodge or weaken reviews. 

Inter Partes Review

Congress recognized the need for such a system when it passed the 2011 America Invents Act, and created a review system called “inter partes review,” or IPR. The IPR process lets a particular department of the patent office, the Patent Trial and Appeal Board (PTAB), hold a quasi-judicial process in which they take a second look to decide if a patent really should have been granted in the first place. 

The IPR system isn’t perfect, but the process has been a big improvement over the patent office’s previous review systems. Over the 10 years it’s been in operation, the PTAB has reviewed thousands of patents. In the majority of cases that have gone to a final decision, PTAB judges have decided to cancel all or some of the claims in question. 

It’s important to put this in context. The thousands of canceled patents are just a tiny fraction of the number that the government is giving away. In the most recent fiscal year, 265 patents had one or more claims canceled, according to USPTO statistics. That’s less than 0.1% of the 340,000 patents that were granted in the same period, and a minute fraction of the 3.8 million patent monopolies that the patent office believes are active. 
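Those proportions are easy to verify with a few lines of arithmetic (a sketch using the approximate USPTO figures quoted above):

```python
# Approximate figures quoted above from USPTO statistics.
patents_canceled = 265        # patents with one or more claims canceled via IPR, most recent fiscal year
patents_granted = 340_000     # patents granted in the same period
active_patents = 3_800_000    # patents the USPTO believes are currently active

share_of_grants = patents_canceled / patents_granted * 100
share_of_active = patents_canceled / active_patents * 100

print(f"{share_of_grants:.3f}% of patents granted")  # well under 0.1%
print(f"{share_of_active:.4f}% of active patents")
```

The result, roughly 0.08% of a single year's grants, is what the paragraph above means by a "tiny fraction."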

The IPR system isn’t perfect, but overall it has been a win for the public. It’s no surprise that some patent owners don’t like it.  With more administrative tools to get at the truth, more patents are found to be invalid.

Closing Four Loopholes

First, the bill would close a big loophole in the process that has come to be known as “discretionary denial.” Basically, this is when a panel of PTAB judges refuses to even consider the merits of an IPR petition. The most common excuse for a discretionary denial is that there’s related court litigation on the same patents that is coming up soon. There’s nothing in the law that says PTAB needs to consider this, but it’s been happening more and more in recent years. 

This loophole even got an official stamp of approval in a PTAB proceeding called Fintiv. So-called Fintiv denials now represent nearly 40% of all denied IPR petitions. One particular federal judge has even marketed his court as a good place to go for patent owners who would like to get a Fintiv denial, as well as other benefits. 

EFF has spoken out against this problem, asking for Congress to stop patent owners’ gamesmanship of the IPR process. This bill would do just that. The Restoring the AIA Act states simply, “a petition that meets the requirements of this chapter shall be instituted.” It’s time to close this loophole, before patent trolls make it even bigger. 

Second, the bill creates new rules regarding the role of the Director of the USPTO. That’s important since a recent Supreme Court ruling (U.S. v. Arthrex) gave the Director the power to review, and even overturn, the results of IPR proceedings. The Restoring the AIA Act would make it a requirement that the Director issue a written decision when she chooses to use that power. 

While this new power of the Director may not be used frequently, it’s important that there be a written record. Just like a judicial decision, a Director review could decide whether a patent stands or falls—and whether or not accused infringers must pay royalties or cease making a product. It’s basic good government that there should be a written record of such a decision. 

Third, the bill will allow government agencies to file for IPRs. It doesn’t happen often, but government agencies do get accused of  patent infringement. These are important cases, since any damages or royalties will be paid to patent owners with public money. Accused government agencies should have the opportunity to ask for a PTAB review, just as a private company or individual would. 

That’s what EFF and some of our allies advocated for when this issue came up at the Supreme Court, in a case called Return Mail v. U.S. Postal Service. Unfortunately, the high court held otherwise in a 6-3 decision. We don’t think barring government agencies from the IPR process is what Congress intended, and this bill would make that clear. 

Finally, this bill will make sure that the patents that get knocked down by IPR, stay down. It prevents the patent office from issuing patents that are “not patentably distinct” from patents that have been canceled in the IPR process. 

The inter partes review process that Congress created 10 years ago is one of a few changes to IP law that has actually served the public interest. It’s no surprise that it’s made enemies over the years, some of whom have fought hard to dismantle the process altogether. Fortunately, so far, they’ve failed. By closing these loopholes and making the process even stronger, Congress can make clear that the patent system works for all of the public—not just a small group of large patent owners. 

Coalition Against Stalkerware Celebrates Two Years of Work to Keep Technology Safe for All

EFF - Thu, 11/25/2021 - 03:13

In this guest post by the Coalition Against Stalkerware marking its second anniversary, the international alliance takes a look back on its achievements while seeing a lot of challenges ahead.

Two years ago, in November 2019, the Coalition Against Stalkerware was founded by 10 organizations. Today, there are more than 40 members with experts working in different relevant areas including victim support and perpetrator work, digital rights advocacy, IT security, academia, security research and law enforcement. 

Stalkerware makes it possible to intrude into a person’s private life and is a tool for abuse in cases of domestic violence and stalking. By installing these applications on a person’s device, abusers can get access to someone’s messages, photos, social media, geolocation, audio or camera recordings (in some cases, this can be done in real-time). Such programs run hidden in the background, without a victim’s knowledge or consent.

This year, the Coalition welcomed new supporters, such as INTERPOL, and new members, among them the CyberPeace Institute; Gendarmerie Nationale; the Gradus Project; Kandoo; Luchadoras; the Florida Institute for Cybersecurity Research; National Center for Victims of Crime (US); North Carolina A&T State University’s Center of Excellence for Cybersecurity Research, Education, and Outreach; Refuge UK; Sexual Violence Law Center (US); and The Tor Project. 

Fulfilling one of the founding missions, the Coalition’s partners in July launched a new technical training on stalkerware aimed at helping increase capacity-building among nonprofit organizations that work with survivors and victims, as well as law enforcement agencies and other relevant parties. In addition, the Coalition has put together a revised page with advice for survivors who suspect they may have stalkerware on their device.

Other key activities during the year include: 

  • In October, Coalition members Wesnet, Australia’s national umbrella organization for domestic violence services, US-based National Network to End Domestic Violence (NNEDV), and global privacy company Kaspersky teamed up with INTERPOL to provide more than 210 police officers with knowledge to investigate digital stalking on the basis of the Coalition’s technical training on stalkerware
  • Also last month, the EU-wide DeStalk project—in which WWP EN and Kaspersky are project partners, and Martijn Grooten, Coalition coordinator, and Hauke Gierow from G DATA are Advisory Board members—launched an e-learning course for public officials of regional authorities and workers of victim support services and perpetrator programs on how to tackle cyberviolence and stalkerware. DeStalk is supported by the Rights, Equality and Citizenship (REC) Program of the European Commission.
  • In October, Coalition members Refuge and Avast published an online tool that helps detect abuse of Internet-of-Things (IoT) devices and provides tips on how to secure them. IoT is increasingly used for harassment and control in abusive relationships.  
  • In January 2021, the Stalking Prevention, Awareness, and Resource Center marked the 17th annual Stalking Awareness Month, an annual call to action in the United States to recognize and respond to the crime of stalking. Hundreds of organizations across the country hosted workshops, promoted awareness and encouraged responders to promote victim safety and offender accountability.

Beyond that, members conducted a series of new research:

  • NNEDV’s Tech Abuse in the Pandemic and Beyond report (2021) found that the most common types of tech abuse—harassment, limiting access to technology, and surveillance—increased during the pandemic. Phones, social media, and messaging were the technologies most commonly misused as a tactic of tech abuse.
  • Malwarebytes published their Demographics of Cybercrime report (2021), a global study of consumer cybercrime impacts, showcasing the disproportionate impact of cybercrime on vulnerable populations.
  • Kaspersky presented its Digital Stalking in Relationships report (2021), a global survey of more than 21,000 participants in 21 countries about their attitudes towards privacy and digital stalking in intimate relationships. The survey found that a significant share of people (30%) see no problem at all and find it acceptable to monitor their partner without consent. Additionally, 24% of respondents reported having been stalked by means of technology at least once. Partners advising on the research were Centre Hubertine Auclert, NNEDV, Refuge, Wesnet and WWP EN.

Data from member organizations show the following picture on the issue of cyberviolence and stalkerware:

  • The Centre Hubertine Auclert conducted research on technology-facilitated domestic violence (2018) and found that 9 out of 10 women victims of domestic violence are also victims of cyberviolence.
  • WESNET, with the assistance of Dr. Delanie Woodlock and researchers from Curtin University, published the Second National Survey of Technology Abuse and Domestic Violence in Australia (2020). The survey asks frontline workers what kinds of abuse tactics and other forms of violence against women they are seeing in their day-to-day work with survivors of domestic and family violence. The survey shows that 99.3% of Australian domestic violence workers say they have clients experiencing technology abuse. 18% of workers see spyware “often” and 35% see it “sometimes.” Tracking and monitoring of women and stalking, often via technological means, by perpetrators rose 244% between 2015 and 2020. Stalking is a known factor associated with an increased risk of lethal and near-lethal harm. 
  • Following the Coalition Against Stalkerware’s detection criteria on stalkerware, Kaspersky analyzed its statistics, revealing how many of its users were affected by stalkerware in the first 10 months of the year. From January to October 2021, almost 28,000 mobile users were affected by this threat. During the same period, there were more than 3,100 cases in the EU and more than 2,300 users affected in North America. According to Kaspersky figures, Russia, Brazil, and the United States remain the most affected countries worldwide so far. Likewise, in Europe the picture has not changed: Germany, Italy, and the United Kingdom (UK) are the three most-affected countries, in that order. When looking only at the EU, France replaces the UK in third place.
  • Malwarebytes, in comparing stalkerware activity before and well into the COVID-19 pandemic, found that the threat of stalkerware continues to rise. From October 2020 to September 2021, Malwarebytes recorded more than 62,000 detections of applications with stalkerware capabilities on Android devices. These detections represent a 52% increase compared to the same 12-month period the year before, which accounted for roughly 41,000 detections.
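As a rough check on the Malwarebytes year-over-year comparison, the rounded detection counts quoted above reproduce the reported increase (a sketch; the exact counts behind the published 52% figure are not given here, so the rounded numbers land slightly lower):

```python
# Rounded Android detection counts quoted above from Malwarebytes.
detections_recent = 62_000  # Oct 2020 - Sep 2021 (reported as "more than 62,000")
detections_prior = 41_000   # same 12-month period a year earlier ("roughly 41,000")

increase_pct = (detections_recent - detections_prior) / detections_prior * 100
print(f"year-over-year increase: about {increase_pct:.0f}%")
```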
Quotes from the Coalition’s members:
  • "Stalkerware is only part of a whole ecosystem of tech-enabled abuse, but it is one of the most frightening tools and it leaves survivors especially vulnerable to physical stalking, coercive control, and escalating violence. The Coalition Against Stalkerware has been instrumental in changing the way the tech industry treats these tools and helping survivors, and the people who support them, to detect the threat. No one industry can stop stalkerware all by itself. The Coalition's interdisciplinary approach has been essential to its success. When academics, tech companies, and domestic violence service providers work together, we can quantify the problem, raise awareness, and push for both technical and policy solutions in way that none of us can by ourselves." - Eva Galperin, Electronic Frontier Foundation Cybersecurity Director and Coalition co-founder.
  • “Stalking is a prevalent, traumatic, and dangerous crime that impacts over 1 in 6 women and 1 in 17 men in the United States. Technology is used—or misused—to stalk in the majority of stalking cases. Too often, victims and responders have limited resources and little to no insight on what kind of technology is being used, how to safety plan around it, and/or how to collect evidence on it. We are so grateful to be part of the Coalition to better educate responders at recognizing and responding to stalking. This work is truly helping to keep victims safe and hold offenders accountable." - Jennifer Landhuis, SPARC Director
  • “Two years ago, the public and law enforcement understood little about the threat of stalkerware. Apps that non-consensually spy on users were readily available online, promoted on social media feeds, and off the radar of national governments. By combining varied expertise across several countries, the Coalition Against Stalkerware has defined and raised significant awareness about the threat of stalkerware. We are proud to be one of several antivirus vendors working together to offer more comprehensive stalkerware protection for all users. We are equally proud of our nonprofit members who have produced and offered tailored device trainings and guidance to law enforcement and targeted individuals alike. The commitment of every member of the Coalition Against Stalkerware has shaped this reality in ways we couldn’t even imagine just two years ago.” - David Ruiz, Online Privacy Advocate, Malwarebytes
  • “It’s amazing how impactful collective action can be, especially if you work with engaged people. One small step after another, and at the end, all together, it will make a big change. What makes me think in this positive way is reading about the intention of the European Parliament and European Commission to propose a law to combat violence against women that will include prevention, protection and effective prosecution, both online and offline, by the end of 2021. The Coalition still has a long way ahead, but I believe we’re going in the right direction.” - Kristina Shingareva, Head of External Relations at Kaspersky. 
About Coalition Against Stalkerware

The Coalition Against Stalkerware (“CAS” or “Coalition”) is a group dedicated to addressing abuse, stalking, and harassment via the creation and use of stalkerware. Launched in November 2019 by ten founding partners—Avira, Electronic Frontier Foundation, the European Network for the Work with Perpetrators of Domestic Violence, G DATA Cyber Defense, Kaspersky, Malwarebytes, The National Network to End Domestic Violence, NortonLifeLock, Operation Safe Escape, and WEISSER RING—the Coalition has grown into a global network of more than forty partners. It looks to bring together a diverse array of organizations working in domestic violence survivor support and perpetrator intervention, digital rights advocacy, IT security and academic research to actively address the criminal behavior perpetrated through stalkerware and raise public awareness about this important issue. Due to the high societal relevance for users all over the globe, with new variants of stalkerware emerging periodically, the Coalition Against Stalkerware is open to new partners and calls for cooperation. To find out more about the Coalition Against Stalkerware, please visit the official website.


UN Human Rights Committee Criticizes Germany’s NetzDG for Letting Social Media Platforms Police Online Speech

EFF - Tue, 11/23/2021 - 15:56

A UN human rights committee examining the status of civil and political rights in Germany took aim at the country’s Network Enforcement Act, or NetzDG, criticizing the hate speech law in a recent report for enlisting social media companies to carry out government censorship, with no judicial oversight of content removal.

The United Nations Human Rights Committee, which oversees the implementation of the United Nations International Covenant on Civil and Political Rights (ICCPR), expressed concerns, as we and others have, that the regulation forces tech companies to behave as the internet police, with power to decide what is free speech and what is hate speech. NetzDG requires large platforms to remove content that appears “manifestly illegal” within 24 hours of having been alerted to it, which will likely lead to takedowns of lawful speech as platforms err on the side of censorship to avoid penalties. The absence of court oversight of content removal was deemed especially alarming, as it limits “access to redress in cases where the nature of content is disputed.”

“The Committee is concerned that these provisions and their application could have a chilling effect on online expression,” according to a November 11 Human Rights Committee report on Germany. The report is the committee’s concluding observations of its independent assessment of Germany’s compliance with its human rights obligations under the ICCPR treaty.

It’s important that the UN body is raising alarms over NetzDG. We’ve seen other countries, including those under authoritarian rule, take inspiration from the regulation, including Turkey. A recent study reports that at least thirteen countries—including Venezuela, Australia, Russia, India, Kenya, the Philippines, and Malaysia—have proposed or enacted laws based on the regulatory structure of NetzDG since it entered into force, with the regulations in many cases taking a more privacy-invasive and censorial form.

To quote imprisoned Egyptian technologist Alaa Abd El Fattah, “a setback for human rights in a place where democracy has deep roots is certain to be used as an excuse for even worse violations in societies where rights are more fragile.”

The proliferation of copycat laws is disturbing not only because of what it means for freedom of expression around the world, but also because NetzDG isn’t even working to curb online abuse and hate speech in Germany. Harassment and abuse by far-right groups aimed at female candidates ahead of Germany’s election showed just how ineffective the regulation is at eliminating toxic content and misinformation. At the same time, the existence of the law and its many imitations provides less of an incentive for companies to work to protect lawful speech when faced with government demands.

And in general, holding companies liable for the user speech they host has the chilling effect on freedom of expression the UN body is concerned about. With the threat of penalties and shutdowns hanging over their heads, companies will be prone to over-remove content, sweeping up legitimate speech and silencing voices. Even if massive platforms like Facebook and YouTube can afford to pay any penalties assessed against them, many other companies cannot and the threat of costly liability will discourage new companies from entering the market. As a result, internet users have fewer choices and big tech platforms garner greater monopoly power.

The UN Committee recommended Germany take steps to prevent the chilling effects NetzDG is already having on online expression. Germany should ensure that any restrictions to online expression under NetzDG meet the requirements of Article 19(3) of the ICCPR. This means that restrictions under the law should be proportional and necessary “for respect of the rights or reputations of others; or for the protection of national security or of public order (ordre public), or of public health or morals.” Moreover, the Committee recommended that Germany consider revisiting NetzDG “to provide for judicial oversight and access to redress in cases where the nature of online material is disputed.”

Germany should adopt these recommendations as a first step to protect freedom of expression within its borders. Germans deserve it. We’ll wait.    


Indonesian Court Allows Internet Blocking During Unrest, Tightening Law Enforcement Control Over Users’ Communications and Data

EFF - Tue, 11/23/2021 - 15:27

Indonesia’s Constitutional Court dealt another blow to the free expression and online privacy rights of the country’s 191 million internet users, ruling that the government can lawfully block internet access during periods of social unrest. The October decision is the latest chapter in Indonesia’s crackdown on tech platforms, and its continuing efforts to force compliance with draconian rules controlling content and access to users’ data. The court’s long-awaited ruling came in a 2019 lawsuit brought by the Indonesian NGO SAFEnet and others challenging Article 40.2b of the Electronic Information and Transactions (EIT) Law, after the government restricted Internet access during independence protests and demonstrations in Papua. The group had hoped for a ruling reining in government blocking, which interferes with Indonesians’ rights to voice their opinions and speak out against oppression. Damar Juniarto, SAFEnet Executive Director, told EFF:  

We are disappointed with the Constitutional Court’s decision. We have concerns that the Indonesian government will implement more Internet restrictions based on this decision that are in violation of, or do not address, human rights law and standards. 

SAFEnet and Human Rights Watch have been sounding the alarm about threats to digital rights in Indonesia ever since the government last year passed, without public consultation, Ministerial Regulation #5 (“MR 5/2020”), a human rights-invasive law governing online content and user data and imposing drastic penalties on companies that fail to comply. 

From Data Localization to Other Government Mandates

In 2012, Indonesia adopted a data localization mandate requiring all websites and applications that provide online services to store data within Indonesia’s territorial jurisdiction. The mandate’s goal was to help Indonesian law enforcement officials force private electronic systems operators (ESOs)—anyone that operates “electronic systems” for users within Indonesia, including operators incorporated abroad—to provide data during an investigation. The 2012 regulation was largely not enforced, while a 2019 follow-up initiative (M71 regulation) limited the data localization mandate to those processing government data from public bodies. 

Since the adoption of MR5, Indonesia’s data localization initiative shifted its approach: private sector data can once again be stored abroad, but the regulation requires Private ESOs to appoint an official local contact in Indonesia responsible for ensuring compliance with data and system requests. Private ESOs will be obligated to register with the government if they wish to continue providing services in the country, and, once registered, will be subject to penalties for failing to comply with MR5’s requirements. Penalties range from a first warning to temporary blocking, full blocking, and finally revocation of its registration. Indonesia has mandated broad access to electronic systems for law enforcement and oversight and proactive monitoring of online intermediaries, including private messaging services and online games providers.

Proactive Monitoring Mandate

EFF has warned that, by compelling private platform operators to ensure that they do not host or facilitate prohibited content, MR5 forces them to become an arm of the government’s censorship regime, monitoring their users’ social media posts, emails, and other communications (Article 9 (3)). 

MR5 governs all private sector ESOs accessible in Indonesia, such as social media services, content-sharing platforms, digital marketplaces, search engines, financial and data processing services, communications services providing messaging, cloud service providers, video calling, and online games. The definition of prohibited information or content includes vague concepts such as content causing “community anxiety” or “disturbance in public order,” and grants the Indonesian Ministry of Communication and Information Technology (Kominfo) unfettered authority to define these terms (Article 9(5)). 

Along with SAFENET and Human Rights Watch, we pointed out earlier this year that the phrase “prohibited” is open to interpretation and debate. For example, what is meant by “public disturbance”? What is the standard or measure for a public disturbance, and who has the authority to determine what qualifies? What if the public feels that peaceful demonstrations and protests are a fundamental right, not “disturbing the society”?

Article 9(3)(b) of the Ministerial Regulation also prohibits any system from facilitating either “access to prohibited Electronic information and/or documents” or informing people how to do that. Under Article 9(4)(c) of the regulation, prohibited Electronic information or documents could be any information or document that explains how to use or how to get access to the Tor browser, virtual private networks (VPNs), or even materials showing how to bypass censorship. Adding insult to injury, companies failing to comply will be subject to draconian penalties ranging from temporary or full blocking of their service to revocation of their authorization to provide online services within Indonesia (Article 9(6)). Moreover, under Article 13, private sector ESOs are also required to take down and block any prohibited information and documents (Article 9(4)). 

Even worse, secure private messaging apps (such as WhatsApp, Signal, or iMessage) are also obliged to comply with Article 9. Private messaging services that offer end-to-end encryption do not know the content of users’ messages. Thus, MR5 effectively seeks to ban all end-to-end encryption and thus the ability for anyone in Indonesia to message or text someone without the threat of the provider or government listening in. Moreover, MR5 requires these providers, as it does with others, to determine if content is “prohibited.” 

The new regulation interferes with rights to free expression and privacy, and requires platform providers to carry out these abuses. This is why, together with SAFENET, Human Rights Watch, and others, we called upon Kominfo to repeal MR 5/2020. In a joint statement, we said  MR5 runs contrary to Article 12 of the Universal Declaration of Human Rights and Article 17 of the International Covenant on Civil and Political Rights. 

Mandatory Registration for Platforms 

MR5 requires private platforms to register with the government—in this case Kominfo—and obtain an identification (ID) certificate to provide online services to people within Indonesia. This gives the government much more direct power over companies’ policies and operations: after all, the certificate can be revoked if a company later refuses the government’s demands. Even platforms that have no infrastructure inside Indonesia must register and obtain a certificate before people in Indonesia can access their services or content. Those that fail to register will be blocked within Indonesia. The original deadline to register was May 24, 2021, but was later extended by six months. As of today, the government has neither further extended the deadline nor enforced the mandatory registration provision.

The idea that websites and apps need to obtain an ID certificate from the government is a powerful and far-reaching form of state control because every interaction between authorities and companies, and every decision that might please or displease authorities, takes place against a backdrop of potential withdrawal of the ID and being blocked in the country.

ID Certificate and Appointment of Local Contact

The Indonesian government has many expectations for companies that register for these IDs, including cooperation with orders for data about users. In some cases, that even includes granting the government direct access to their systems.

For example, MR5 compels private platforms that register to grant access to their “systems” and data to ensure effectiveness in the “monitoring and law enforcement process.” If a registered platform disobeys this requirement—for example, by failing to provide direct access to its systems, like computer servers (Article 7(c))—it can be punished in ways similar to the penalties for failing to flag “prohibited” content, from a written warning to temporary blocking to full blocking and final revocation of its registration. 

Article 25 of MR5 forces companies to appoint at least one contact person domiciled in the territory of Indonesia to be responsible for facilitating Kominfo or Institution requests for access to systems and data. Laws forcing companies to appoint a local representative already exist, for example, in Turkey and India.

Both the ID requirement and the forced appointment of a local contact person are powerful coercive measures that give governments new leverage for informal pressure and arbitrary orders. As we noted in our post in February, with a representative on the ground, platforms will find it much harder to resist arbitrary orders, and they risk domestic legal action against that person, including potential arrest and criminal charges.

Human Rights Watch’s Asia Division has similarly worried that,

[w]hile the establishment of local representatives for tech companies can help them navigate and better understand the different contexts in which they operate, this is dependent on the existence of a legal environment in which it is possible to challenge unfair removal or access requests before independent courts. MR5 provides no mechanism for appeal to the courts, and the presence of staff on the ground makes it much harder for companies to resist overbroad or unlawful requests.

Remote Direct Access to Systems 

Direct access mechanisms are arrangements in which law enforcement has a “direct connection to telecommunications networks in order to obtain digital communications content and data (both mobile and internet), often without prior notice, or judicial authorization, and without the involvement and knowledge of the Telco or ISP that owns or runs the network.” Direct access to personal data interferes with the right to privacy, freedom of expression, and other human rights. The United Nations High Commissioner for Human Rights stated that direct access is “particularly prone to abuse and tends to circumvent key procedural safeguards.” The Industry Telecom Dialogue has explained that some governments require such access as a condition for operating in their country: 

some governments may require direct access into companies’ infrastructure for the purpose of intercepting communications and/or accessing communications-related data. This can leave the company without any operational or technical control of its technology. While in countries with independent judicial systems actual interception using such direct access may require a court order, in most cases independent oversight of proportionate and necessary use of such access is missing.

MR5 expands this direct access approach and applies it to all private ESOs, including cloud computing services. It does not say exactly what type of access to “systems” (servers and infrastructure) private platforms may be requested to provide, though access to information technology systems (Art. 1)—which include communication or computer systems, hardware, and software—is explicitly called out as a possible subject of an order, over and above requests to turn over particular data. 

When it comes to access to systems for oversight purposes, MR5 compels providers to grant access either by letting the government in, handing over what the government is asking for, or giving the government the results of an audit (Art. 29(1) and Art. 29(4)). When it comes to access to systems for criminal law enforcement purposes, MR5 fails to explicitly include an audit result as a valid option. Overall, direct access to systems is an alarming provision.

Access to Data and System

Under MR5, broad remote direct access mandates compel any provider or private ESO to grant access to “data” and “systems” to Kominfo or another government institution for “oversight” purposes (administrative monitoring or regulatory administrative compliance) (Art. 21). They are also required to grant access to law enforcement officials for criminal investigations, prosecutions, or trials for crimes carried out within Indonesian territory (Art. 32 and 33).  Law enforcement is required to obtain a court order to access ESO systems when investigating crimes that carry prison sentences of two to five years. But there’s no such requirement for crimes that carry heavier sentences of over five years imprisonment (Art. 33). 

MR5 also requires private ESOs that process and/or store data or systems to grant cross-border direct access requests about Indonesian citizens or business entities established within Indonesia, even if that information is processed and stored outside the country (Art. 34). The cross-border obligation to disclose data applies to crimes carrying penalties of two to five years imprisonment (Art. 32), while the obligation to grant access to the providers’ systems applies to investigations or prosecutions of crimes that carry sentences of over five years imprisonment (Art. 35). Unlike Mutual Legal Assistance Treaty (MLAT) agreements, MR5 fails to include a “dual criminality” requirement, meaning Indonesian police could seize data from foreign providers while investigating activity that is not a crime in the foreign country but is a crime in Indonesia. While practical challenges currently exist in cross-border access to data, these challenges can be addressed through: 

  • The express codification of a dual privacy regime that meets the standards of both the requesting and the host state. Dual data privacy protection will help ensure that as nations seek to harmonize their respective privacy standards, they do so on the basis of the highest privacy standards. Absent a dual privacy protection rule, nations may be tempted to harmonize at the lowest common denominator.
  • Improved training for law enforcement to draft requests that meet such standards, and other practical measures.

Cross-border data demands for the content of users’ communications imposed on companies like Google, Twitter, and Facebook may create a conflict of law between Indonesia and countries like the European Union or the United States. The EU’s General Data Protection Regulation (GDPR) does not allow companies to disclose data voluntarily without a domestic legal basis. US law also forbids companies from disclosing communications content without an MLAT process, which requires first obtaining a warrant issued by a US judge. While we understand that Indonesia does not have an MLAT with the United States, the process for resolving conflicts of law needs considerable work. The Indonesian government should not expect companies to stride deliberately into legal paradoxes, where complying with a regulation in one country would lead them not only to violate the law in another country but also to violate international human rights law and standards. The principle of dual criminality should also be taken into account when a cross-border request is made.

Access to “Electronic Data”

Access to “electronic data” for oversight purposes can be ordered by Kominfo or other competent government institutions (Art. 26). When such access is requested for criminal investigations, it can be ordered by a law enforcement official (Art. 38(1)). 

In both cases, MR5 explicitly states that remote access should be granted using a link created by the private platform, or in any other way agreed between Kominfo or Institutions and the platform, or between the platform and law enforcement. In many cases, private ESOs can satisfy these requests by negotiating a compliance plan with the requester, which may avoid actually giving Indonesian government officials direct access to companies’ servers, at least most of the time (Article 28(1), Article 38(1)). Notably, MR5 does not require requests to include any factual background of the investigation or any grounds establishing investigative relevance and necessity.

Law enforcement officials can also get access to very broad categories of data, like subscriber identities (“electronic system user information”), traffic data, content, and “specific personal data.” This last category can include sensitive data such as health or biometric data, political opinions, religious or philosophical beliefs, trade union membership, and genetic data. Law enforcement can get access to it, without a court order, for investigations of crimes that carry sentences of over five years imprisonment. Court orders are only required for crimes carrying penalties of two to five years imprisonment.

Gag Orders

Kominfo or government institution orders to access “systems” for oversight purposes (Art. 30) and for criminal law enforcement (Art. 40) are expected to be “limited” and “confidential,” but must be responded to quickly—within five calendar days of receipt of the order (Art. 31 and 41)—a very short period that does not allow providers to assess the legality, necessity, and proportionality of the request. 

Confidentiality provisions such as those featured in MR5 have also been problematic in the past, and sidestep surveillance transparency, as well as the right of individuals to challenge surveillance measures. While investigative secrecy may be necessary, it can also shield problematic practices that pose a threat to human rights.  This is why providers should be able to challenge gag orders, and get authorities to provide a reasoned opinion as to why confidentiality is necessary.

Civil society has strongly advocated for the public’s right to know and understand how police and other government agencies obtain customer data from service providers. Service providers should be able to publicly disclose aggregate statistics about the nature, purpose, and disposition of government requests in each jurisdiction, and to notify targets as soon as possible, unless doing so would endanger the investigation. 

Technical Assistance Mandates

Technical assistance mandates such as those set out in MR5 have, in the past, been leveraged in attempts to erode encryption or gain direct access to providers’ networks. Article 29, too, uses similar language; the government entities requesting access to a “system” may also request “technical assistance” from the private ESOs, which they are expected to provide. The government is planning to issue technical guidelines regulating the procedures for data retrieval and access to the system by December 2021.

Cloud Computing in Case of Emergency 

Article 42 compels cloud service providers to allow access to electronic systems or data (voice, images, text, photos, maps, and emails) by law enforcement in cases of emergency. While other laws and treaties (even less-protective treaty mechanisms for streamlining this kind of international access, like those in the Council of Europe’s Second Additional Protocol to the Budapest Convention) have narrowly defined emergencies as imminent threats to people’s physical safety, Article 42(2) defines emergency more broadly to include terrorism, human trafficking, child pornography, and organized crime, in addition to physical injury and life-threatening situations. These categories may implicate life-threatening emergencies, like a terrorist bomb plot or a child in current danger of ongoing sexual exploitation. But where there is no imminent threat to safety, Article 42 should not apply. 


MR5 runs afoul of Article 12 of the Universal Declaration of Human Rights and Article 17 of the International Covenant on Civil and Political Rights (ICCPR). MR5 is a regulation adopted by the Executive Branch. It lacks detailed procedural safeguards, and its wording is overly broad, giving unfettered discretion to the authorities to request a wide range of user data and access to systems. 

Under international human rights law, restrictions on the right to privacy are only permissible if they meet the test that applies to Article 19 of the ICCPR. This position has been clearly set out by the UN Special Rapporteur on Promotion of Human Rights while Countering Terrorism, the UN Human Rights Committee, and the UN Commission on Human Rights. 

To protect human rights in a democratic society, data access laws should make clear that authorities can access personal information and communications only under the most exceptional circumstances, and only as long as access is prescribed by enacted legislation, after public debate and scrutiny by legislators. Further, such laws must be clear, precise, and non-discriminatory, and data requests should always be necessary, proportionate, and adequate. User data should only be accessed for a specific legitimate aim, authorized by an independent judicial authority that is impartial, and supported by sufficient due process guarantees such as transparency, user notification, public oversight, and the right to an effective remedy. The law should spell out a clear evidentiary basis for accessing the data, ensuring that providers obtain enough factual background to assess compliance with human rights standards and protected privileges. Confidentiality should be the exception, not the rule, invoked only where strictly necessary to achieve important public interest objectives and in a manner that respects the legitimate interests and fundamental rights of individuals. Moreover, in the case of cross-border requests, the law should ensure respect for the principle of dual criminality, as most MLATs do.

MLATs have traditionally provided the primary framework for government cooperation on cross-border criminal investigations. MLATs are typically bilateral agreements, negotiated between two countries. While specific details may vary across different MLATs, most share the same core features: a mechanism for requesting assistance to access data stored in a hosting country; a Central Authority that assesses and responds to assistance requests from foreign countries; and lawful authority for the Central Authority to obtain data on behalf of the requesting country. Generally speaking, in responding to foreign requests for assistance, the Central Authority will rely on domestic search powers (and be bound by accompanying national privacy protections) to obtain the data in question.

MR5's draconian requirements hand the Indonesian government a dangerous level of control and power over online free expression and users’ personal data, making it a tool for censorship and human rights abuses. The regulation copies many provisions used by authoritarian regimes to compel platforms to bend to government demands to break encryption, hand over people’s private communications, and access personal information without procedural safeguards and proportionality requirements against arbitrary interference. The Indonesian people deserve better. Their privacy and security are at risk unless MR5 is repealed. EFF is committed to working with SAFENET in urging Kominfo to roll back this unacceptable regulation. The 13 Necessary and Proportionate Principles can provide a blueprint for States to consider safeguards when it comes to law enforcement access to data.

Podcast Episode: The Revolution Will Be Open Source

EFF - Tue, 11/23/2021 - 03:35
Episode 102 of EFF’s How to Fix the Internet

The open source movement focuses on collaboration and empowerment of users. It plays a critical role in building a better digital future, but the movement is changing as more people from around the world join and bring their diverse interests with them. Join EFF’s Cindy Cohn and Danny O’Brien as they talk to EFF board member and Open Tech Strategies partner James Vasile about the challenges that growth is creating and the opportunities it presents to make open source, and the internet, even better.

Click below to listen to the episode now, or choose your podcast player:


To James Vasile, an ideal world is one where all technology is customizable by the people who use it and where people have control over the devices they use. The open source dream is that all software should be licensed to be free to run, modify, distribute, and copy without penalty.

The open source movement is growing, and that growth is creating pressures. Some stem from too many projects and not enough resources. Others arise because, as more people worldwide join in, they bring different dreams for open source. Balancing initial ideas and new ones can be a real challenge, but it can also be healthy for a growing movement.

All tech should be customizable by the people who use it. Because as soon as things are proprietary, you lose control over the world around you.

You can also find the MP3 of this episode on the Internet Archive.

In this episode, you’ll learn about

  • Some of the roots and founding principles of the open source and free software communities.
  • How the open source and free software communities are changing and adapting as more people get involved and bring their own ideals with them.
  • How licenses affect the open source community, including how communities are working to bring additional values like protecting labor and protecting against abusive uses to these licenses.
  • Policy changes that could help support the open source community and its developers, and how those could ultimately help support transparency and civil liberties.
  • How critical open source is to the decentralization of the web and more.

James Vasile is an EFF board member and a partner at Open Tech Strategies, a company that offers advice and services to organizations that make strategic use of free and open source software. James’s work centers on improving access to technology and reducing centralized control over the infrastructure of our daily lives. You can find him on Twitter @jamesvasile.

If you have any feedback on this episode, please email

Below, you’ll find legal resources - including important cases, books, and briefs discussed in the podcast - and a full transcript of the audio.

Resources

Transcript of Episode 102: The Revolution Will Be Open Source

James: I feel like in an ideal world, everything would be customizable in this way, would be stuff that I can take and do new things with, right? All tech should be customizable by the people who use it. Because as soon as things are proprietary, as soon as you lose the ability to do that, you lose control over the world around you.

Cindy: That's James Vasile. And he's our guest today on How to Fix the Internet. James has been building technology and community for many years. And he's going to share his insights with us. He's going to tell us all about how free software and open source is growing, it's changing, and it's getting better.

Danny: We’re going to unpack how more and more people in the tech space are joining the free software movement, but with bigger crowds do come some growing pains.

Cindy: I am Cindy Cohn, EFF's Executive Director.

Danny: And I'm Danny O'Brien, and welcome to How to Fix the Internet, a podcast of the Electronic Frontier Foundation.

Cindy: Today we're going to talk about open source communities and how they have been changing as more people get involved. We're going to talk about the roots of the movement, some of the challenges presented by the growth of the community and then we get to my favorite part, how we fix the internet. 

James Vasile is here with us. He's on our board of directors at the Electronic Frontier Foundation, and he's been working in the open source community for decades. James, you consult through Open Tech Strategies as well as the Decentralized Social Networking Protocol. Welcome to How to Fix the Internet.

James: Hi, thanks for having me.

Cindy: So you're well positioned to know what's going on in the open source world. Can you give us a sense of the health of the community right now?

James: I mean, in some senses, things are going amazingly well. The way in which free and open source software has become an indispensable part of every project, every stack, every sector. Anything touched by technology depends on open source today. And increasingly, everything in the world is touched by technology, which is to say that open source is everywhere. 

Free software is at the heart of every device that people are buying on a daily basis, using on a daily basis, relying on. Whether they can see it or not, it's there and our tech lives are just powered by free software collaboration, very broadly speaking. So, that's pretty cool right? 

And that's amazing. So from that point of view, we're doing really well. 

Unfortunately, there are other aspects in which that growth has created some problems. We have lots and lots of free software projects that are under-resourced, that are not receiving the help that they need to succeed and be sustainable. And at the same time, we have a bunch of crucial infrastructure that depends on that software. 

And that becomes a problem. How do we sustain free software at this scale? How do we make it reliable as it grows and as the movement changes character? Adding people is such a big deal.

Danny: Yeah. 

When it first started the free and open source movement was powered by idealism and ideology. What are those founding principles that it was built on? And are they still there? Have they been eaten away by the practicalities of supporting the whole of the internet?

James:   Yeah, that's a really good question. I mean, free software as it exists based on these initial notions laid down by Richard Stallman, that's a wing of the community that is very identifiable, but is not growing as fast as the free and open source software movement writ large. And one of the things that happens when you add people to a movement is you don't always get the same people joining for the same reasons as initially the first people started it.

You add people who are joining in for reasons of practicality, you add people who are joining in just because peer production of software works.

This is a really good way to go about linking arms with other producers and making really cool stuff. So there's some people who are just here for the cool stuff. And that's okay too. 

And as we add more and more people, as we convince the next generation of free and open source software developers, I think we're finding that people are getting further and further away from these initial ideals.

Danny: What are those initial ideals? What are the things that if you were going to give an open source  newbie, the potted guide to what open source means, how would you describe those principles?

James: Yeah, I mean, we describe them very broadly as the freedom to run the software, the freedom to modify the software, the freedom to distribute those modifications, the freedom to copy the software, the freedom to learn about the software. And that notion that you can open it up, look inside and learn about it. And that that is a thing you should be able to do by right and that you should be able to then share those things with everyone else in the community. 

And that when you pass on that sharing, you are also passing on those rights is baked into some of the earliest licenses that this community has used. Like the licenses published by the Free Software Foundation, the GNU family of licenses had this ideal, this notion that not only do I have the right to look, the right to copy, the right to modify, the right to distribute, but that when I hand the software to the next person, they should get all of those rights as well. 

So that's what this community was founded upon. But very early on, from those initial free software ideals, we also had the rise of the open source wing of the movement, which was very much trying to make free software palatable to commercial interests and to expand the pool of contributors to this world. There was a weakening of the ideals, where people decided that in order to appeal to new audiences, we needed licenses, and projects that use those licenses, that don't require the passing on of rights.

So you can hand it to somebody but not give them all of those rights along with it. You could hand somebody a piece of software and while you yourself enjoyed the ability to study it, to copy it, to modify it, maybe you don't give those rights to the next person that you hand the software to. 

And that was the first big shift.

Danny: Right. Why are those initial rights so important, do you think?

James: I mean, do you remember in the early days of the web when-

Danny:  I do.

James: ... you could view source? You know? And a bunch of people have talked about this. This isn't an idea I came up with, but I think Anil Dash talks about this a lot, this notion that we all learned about the internet by going to the top of our browser and clicking view source. And that's how I learned, that's how I learned HTML. 

And the notion that you could make the ecosystem legible all the way down to anyone who approaches is extremely powerful for bringing more people the ability to affect their environment. Without that ability to look inside and tinker with it and make it your own, you don't have a lot of entry points into software if you're not already a professional developer. And so this really just expands the universe of people and gives them the ability to take control of the software that they are using and make real substantive changes to it.

So those are the original principles. And then have those principles changed again in the modern era? I mean, I think they are slowly changing. I mean, also all of the conversation we have been having so far has been very much localized to the United States and Europe. 

So in the United States, you have a lot of techno libertarianism in the free software world. 

But then if you look in South America, you will find a lot of communities that are much more explicitly political on the left side of the spectrum and building software as a way to empower communities as opposed to a way to empower individuals. That diversity is increasing as free software moves to more places and moves to new places.

And in order to accept those people into the community, you can't just demand that they shed all of their old identity and values and adopt yours wholesale.

Instead, what happens is as more people join in, they start pulling the community in their direction, which is what every successful movement has ever done. Every time you talk to anyone who has ever been part of a movement that has gained in popularity, you will hear a bunch of people complaining about how their original ideals have been diluted, but it turns out that that shift, that evolution is just part of growth. 

It's actually a good thing. It's a sign that things are working. It's a sign that what you are doing is going to be more acceptable to more people over time. It is how you maintain that growth and sustain it over time. So I'm actually really excited about that diversity.

Cindy: Yeah.

To your point about this growing community, we've seen proposals to embed human rights principles into new standard form open source licenses. It's long been a dream and I think it was some of the original dream to use these licenses as a lever to force a more ethical use of technology. How do you see that working and not working as this movement grows?

James: Yeah. Man, I love all the folks working to try to figure out what is the next step. So there's a bunch of people who want to address labor issues in these licenses. So there's a 996 license that is meant to address harsh labor conditions in China. There's ethical source licenses that are designed to address what we use the software for. 

If you're going to use this software, make sure that you are not promoting war or oil, that you're protecting against climate change. There's a variety of licenses for different areas of concern. And that notion that we can stop allowing anyone to use our software for whatever purpose, but instead put some guide rails around it to say, "Okay, we're going to come together as a community to make software, but we are going to only allow that use to track in certain ways and we're going to exclude what we consider to be unethical use of software."

I love the notion that people are thinking about that. And there's a couple debates going on about that right now. One is, is that stuff open source? And honestly, I don't care. There are people who care a lot about protecting the term open source as a brand. And I guess, that's important to some degree, but the question of whether this particular thing is open source or not open source is not actually a thing I lose a lot of sleep over. 

But the real question is, how would you make any of that work? How would that work as a practical matter? 

Could you, as a group of people, get together and decide that you are going to contribute, pool your effort and make something really valuable, but not have it get used by say the Defence Department or get used by pharmaceutical companies or oil companies or whoever it is that you believe is acting unethically and you don't want to benefit from your labor? 

That starts to get really interesting. That is people getting together to make technology with very particular political aims. And from my point of view, the point of technological collaboration is to uplift communities, to help communities achieve their social goals. It's not just to make cool tech. And if you believe that this technology is supposed to be enabling and empowering, then folks who are trying to figure out how to do it in ways that drive change towards the good, that makes a lot of sense to me.

I love that experimentation. I don't know where it comes out practically.

Every major company, every enterprise company has taken the position that they will not use these ethical source licenses. 

You can go make them, you can go make the tech, but we just won't use it. In the absence of that corporate investment, is there a sustainability model? Is there a way to grow that niche, grow that market in the same way that we've grown the corporate invested software? And I don't know the answer to that question. I think it's probably too early to say.

Cindy: Yeah. I have to say, I find this stuff really fascinating as well. But as a lawyer who spends a lot of time around licenses and copyrights, trying to make them as small as possible so that we create more space for innovation, I know the street finds its own uses for things when they're not locked down. There's a tension at the bottom of this: trying to use licensing and contractual terms to limit what you can do with something is a double-edged sword, because it can go the other way as well.

And so in general, we try to create as much open space as possible. And this movement towards licenses that are trying to push towards ethical tech is really interesting to me. And it'll be interesting to see if the copyright licensing framework can really hold that. And I have my doubts, but the whole idea of copyleft to begin with was a bit of a strange thing for people who came up in a traditional copyright background. And it's succeeded so far. 

So I also don't like to predict. But as someone who spends a lot of time thinking about how end user license agreements could be made smaller, so they're not so limiting and so dumb about people's privacy, and about shrinking them in other contractual and licensing contexts, this approach is really a whole different direction. And I admit to a little bit of uneasiness about it, because unintended consequences are often the nasty tail that comes up and slaps you in the face when you're trying to do something good.

Danny: I think there's always this challenge where you look at copyright as a tool to achieve a certain aim.

And definitely one of the things we've experienced at EFF is that people are always using intellectual property as a way of achieving something in the digital space because it's so powerful, because it's been written and armed with so much power. And I think there's always a risk attached to that, partly because you're trying to bend intellectual property law to achieve different aims. 

But the other thing is once you start depending on it, you have to make it stronger. If it doesn't work, you have this instinct to go, "Oh, we just need to enforce this even more drastically on the internet." And I think that's a really risky temptation to be drawn into.

James:  It is. I tend to think that because of some quirks in our history, we over-indexed on licensing activity. We built this thing and we described it as, "Oh, look, the licenses. They're doing all this work." And from my point of view, it turns out it was actually just communities doing this work, and the licenses were convenient and helpful. And there has been a little bit of enforcement activity, but the amount of enforcement activity relative to the size of the worldwide free and open source software community is tiny. 

The licenses are good for setting expectations. They almost never get enforced. And that's useful to keep in mind because it turns out that what's keeping folks inside the tent, what's keeping folks contributing and working together is not really about the license because they know they're never going to enforce the license.

Most projects don't have the resources to do that. Most projects, even if they had the financial resources to do that would not want to spend their time doing that.

Cindy: I think that's such a great insight... that the community is so much more important than whatever legal scheme is around it. That insight helps point to how we can continue to make things better.

Danny: “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation's Program in Public Understanding of Science, enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. 

We're seeing people all around the world with very different values coming in and beginning to use open source. How is the movement changing? And then what are some of your insights about sustaining it?

James:  Man, we are at this moment of doing many different experiments to try to figure out what does sustainability look like? 

And into that space has stepped a bunch of efforts.

I guess, the most prominent one, the one that gets the most attention is probably Tidelift. And Tidelift is doing, from my point of view, a great job. They have a program where they say to companies, "Look, you're using a bunch of open source software. We will help you figure out what you're using. And then we will give you a number and you will give that to us and we will then support all of these developers who you are relying on, who you don't know and you don't even really know you're relying on these people yet, but you don't want those people to stop doing their work. You don't want them to burn out. You need to support them like you would any other vendor that you want to be reliable."

And that's been pretty good. They've gotten a bunch of companies to agree that yes, the need is there. And Tidelift makes it easy for them to address that need. And a bunch of projects have signed up with Tidelift to be on the receiving end of that. And that's pretty promising for one sector of the movement. 

 But they're not the only one. That's not the only model. There are so many other models. Open Collective is also a really cool model. And that's just a straight community crowdfunding campaign as far as I can tell. And some projects there are bumping along and some are wildly successful where they've got full-time devs who are getting monthly payments and doing real work that the community really needs.

And so from my point of view, there are a million different ways you could do sustainability for a project. And it should run the gamut from individuals on Liberapay, individuals on Kickstarter, groups on Open Collective, corporate funding through Tidelift, individuals on GitHub. GitHub has efforts to allow you to pay developers directly. The Linux Foundation has a program as well that I think is similar to Tidelift. 

 And all these programs together form many different options, many different models that a project could address. And some projects of course are doing multiple ways to address it. And because every project is different and a project's needs could change over time, we need many different models to address them, but we all also need many different forms of the infrastructure necessary to support all these different models.

And I'm excited to see more of that. I'm excited to see more experiments in more different directions.

Cindy: Let me shift gears a little bit and talk about how we get to our better future. What are some of the policy areas where you could see some support for open source coming through?

James:  Well, I mean, I could give you a really nerdy one. You want a really nerdy one?

Danny: Yes.

Cindy: We are deep nerds here. No problem at all.

James:  I mean, we could decide that contributions to open source projects are tax deductible. That the labor you put into your project gives you some tax deduction, which could instantly provide a financial benefit very broadly across all of that unpaid developer space. And that's not crazy. You write code, you make a thing. That thing has value. 

You donate it to a project and you have just transferred value to them instead of keeping it for yourself and selling it or whatever. And that is a policy shift that I have been trying to get people to pay attention to for, I don't know, 15 years or so. Well, there are definitely places where it could plug in. This is of a piece with the tax rules around making art and then donating that art.

It's the same idea. You make a thing and you donate it. But your tax deduction is the cost of the paints and the canvas, the basis, not the value of the thing you created. And it should be. It should be the value that you create. And so from an extremely nerdy tax perspective, I would love to see just basic government recognition that the work we do, even on an unpaid basis, actually has tremendous value. It should be recognized because it is valuable.

Danny: I'm always surprised by how little governments support open source and free software projects. And I think it's maybe because the open source community got there first. Traditionally, people look to governments for the provision of public goods. And here we have a system that's actually doing a pretty good job of providing public goods separately from that system. But do you think there's a role for governments not only in financially supporting projects, but maybe also in using free software and open source software themselves?

James: Yeah, absolutely. I mean, so there's a couple places where government could plug in. We see uptake of open source software in government at much lower levels than in the mainstream tech industry.

Danny: Interesting.

James: And I've done a lot of work with a lot of government agencies trying to help them figure that out and get them over the hump. And there are a lot of institutional barriers, there's a lot of cultural barriers, but the thing that could move it is leadership from the top, is rules about procurement that require a certain consideration for open source, approaches to open source that are not just about it has to have a license that is open, it has to actually have practices that are open, that actually make a thing susceptible to the dynamics of open source. 

And we don't have any of that in this country. We don't have much movement on that. California, I think, is doing better than other states, but there's not a lot of work in most states. And at the federal level, you have 18F pushing along this way, but you don't have any requirements. You don't have agencies saying, "Everything we do is going to be open." 

And to some degree, that doesn't really make a lot of sense. If software is going to be funded by public money, shouldn't it be a public good? Shouldn't it be a thing that everyone should have access to, that everyone can use, can share, can learn from, can contribute to the general welfare of anybody in the country? I always-

Cindy: Well, I just love this idea, and I love it for some other tactical reasons, which is of course we spend a lot of time trying to get access to the software the cops use to surveil people. We've just seen a tool called ShotSpotter revealed to be really poorly built for the thing it's trying to do, because when we get a look at the source code of some of these things and how they work, we realize that so many things that the government buys, especially in the context of surveillance, are really snake oil. 

 They're not very good at what they're trying to do. And it gets even harder when you're talking about machine learning systems. So to me, a government rule that requires transparency of the code that the government is relying on to do the things they do, now that could work for some proprietary systems, but it's going to be such a smooth ride for the open source systems because they start that way.

 That could be, I think, a really important step forward around transparency that would have this tremendous benefit to the open source community, but frankly would help us in all other situations in which we find the government is using code and then hiding behind trade secrets or proprietary agreements that they have with vendors to stop the public from having access, even in situations in which somebody's going to go to jail as a result. 

 So I think this is a tremendous idea. It's certainly something that we've pushed a little bit, but reframing this as a transparency goal to me is one of the things that could be really terrific about our fixed future.

James:  Yeah. You should not have to request that software, it should just be downloadable. It should have been reviewed by the public before it gets put into service. And there's no actual good reason why we can't do that.

Danny: So we always try and envisage what this better future should be like on the show. And I'm guessing, James, that you are a person who uses a lot of free software. Do you think that's-

James:  That's right.

Danny: Is that your vision of the future? Is the vision of future that all software is free software or are you more humble in your dreams?

James:  I mean, I feel like in an ideal world, everything would be susceptible to inspection, would be customizable in this way, would be stuff that I can take and do new things with, that I can drag in new directions maybe that only matter to me. All tech should be customizable by the people who use it.  

People should have control over the tech they use. And so yes, I would like as much of it as possible to be susceptible to those dynamics because as soon as things are proprietary, as soon as you lose the ability to do that, you lose control over the world around you. And so much of our world is mediated by technology.

And as soon as you start removing the ability to look under the hood and the ability to tinker with it, the ability to change it, you just rob everyone of their ability to control the world around them. So as much of it as possible, yes, I would never say that you should never have any proprietary software. I would never say we should have rules that outlaw it. But what I would say is that everywhere we can insert it, everywhere that we can move it, we do a great benefit to all the people who have to interact with that software and have to use it over time.

Cindy: So one of the things that I think is inherent in this embrace of the open source culture is the way that it will help us facilitate a redistributed internet, where we have communities that are writing the tools that they need for themselves. And I think open source is critical to this conversation. And I know you think so too. So I'm hoping you can tell us a little bit more about that. 

James: The ways in which people are trying to re-decentralize the web, to go back to a world in which we did not all live inside monolithic silos like the Facebook stack, the Google stack, the Yahoo stack for the people who are still living in that world: all of that activity is based on open source and open standards, because there is not actually any way to build a worldwide tech ecosystem except to use open source and open standards. 

So all of that future that people are trying to build where you have a little bit more control over the technology around you, where things are a little bit more modular, where you can choose the provider of your various social services and your communication services, all of that is going to depend very heavily on being open source, on being portable, on being interoperable, on adhering to open standards to enable that interoperability. 

 And so yes, I think without open source, we would not get there, but with open source, we actually have a pretty good chance at building vital ecosystems that can accept lots and lots of people from all walks of life from all around the world. So I'm pretty excited about that.

Cindy: James, thank you so much for joining us today. You've given us a lot to think about, about how we can work together for a better open source future and frankly, how a better open source future is the key to a better future.

So thank you so much for taking the time to talk to us today.

James:  Thanks for having me. This has been a lot of fun.

Danny: That was fascinating and actually changed my mind on a few things. We always talk a little bit in these parts of the show about Lawrence Lessig's four levers of change in the technological world, which are... let me see if I can get them right: the law, code, markets, and cultural norms. And I thought that this was going to be very much a discussion of code and law, because obviously open source is built on code and licenses, and the licenses are pivotal in free and open source software. But he really brought out that it's more about the culture and the cultural norms.

Cindy: I really love that insight about how communities become successful in the long term with open source.

Danny: Yeah. And talking about how things have changed, he really hit home with that point about how open source is moving to a global community. Something that was, I mean, not only very American- and Europe-centric, but actually rooted in a very specific subculture of MIT and its hacker culture, is now being used to empower folks in the global south, in different communities in Asia and Africa. And inevitably, because of that change, the actual values of the community as a whole are changing. And I'm going to be fascinated to see... I have no idea how that's going to play out, but I'm fascinated to see how it does.

Cindy: I really appreciated his concrete thinking about how we get to a fixed place, and specifically the proposals he had for the government. Everything from his tiny little nerdy suggestion that we let open source developers get a tax write-off for contributing code to a project, to something as broad as how a transparency requirement, including transparency into the code itself, would be a way the government could support open source being used more by communities. And also, of course, I was excited about how that could help things more broadly.

Danny: It's inevitable that a vibrant open source and free software community is going to help this movement that we are all part of to re-decentralize the internet. And I hadn't quite taken on board that, as James says, there's no other way of doing it. If you are moving away from these centrally controlled platforms, you'll be moving towards protocols, and protocols have to be open so that everyone can interoperate. But more importantly, the software that implements them has to be free and open source software too. 

Danny: And thank you out there for joining us on How to Fix the Internet. Please visit our website, where you can find more episodes, learn about these issues, donate to become a member, and lots more. 

Members are the only reason we can do this work, plus you can get cool stuff like an EFF hat or an EFF hoodie or even an EFF camera cover for your laptop. 

Music for the show is from Nat Keefe and BeatMower. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. 

I'm Danny O'Brien.

Cindy: And I'm Cindy Cohn.



EU Parliament Takes First Step Towards a Fair and Interoperable Market

EFF - Tue, 11/23/2021 - 02:00

The EU’s Proposal for a Digital Markets Act (DMA) is an attempt to create a fairer and more competitive market for online platforms in the EU. It sets out a standard for very large platforms, which act as gatekeepers between business users and end users. As gatekeepers “have substantial control over the access to, and are entrenched in digital markets,” the DMA sets out a list of dos and don’ts with which platforms will have to comply. There was a lot to like in the initial proposal: we agreed with the DMA’s premise that gatekeepers are international in nature, applauded the “self-executing” nature of many obligations, and supported the use of effective penalties to ensure compliance. Several anti-monopoly provisions showed ambition to end corporate concentration and revitalize competition, such as the ban on mixing data (Art 5(a)), the ban on forced single sign-ons (Art 5(e)), the ban on cross-tying (Art 5(f)), and the ban on lock-ins (Art 6(e)).

End-Users Perspective: EFF Pushes for Interoperability

However, we didn’t like that the DMA proposal missed the mark from the end-user perspective, in particular in its lack of interoperability obligations for platforms. The Commission met us halfway by introducing a real-time data portability mandate into the DMA, but it failed to go the full distance. Would it lead to a measurable behavioral change in Facebook if frustrated users could only benefit from data portability by staying signed up to Facebook’s terms of service? We doubt it.

The EU Parliament’s Lead Committee Calls for Interconnection and Functional Interaction

In today’s vote, the Internal Market Committee (IMCO) of the EU Parliament overwhelmingly agreed to preserve most of the proposed anti-monopoly rules and agreed on key changes to the Commission’s Proposal. We’ll analyze them in more detail in the coming weeks, but some elements are striking. One is that the Committee opts for an extremely high threshold before platforms are hit by the rules (a market capitalization of at least €80bn), which means that only a few, mainly U.S.-based, firms would legally be presumed to act as gatekeepers and hold an entrenched and durable position in the internal market. Members of Parliament also agreed on incremental improvements to the ban on mixing data, added clarification on the limits of targeted ads, including substantial protections for minors, and introduced an ambitious prohibition of dark patterns in the DMA’s anti-circumvention provision. They also added a prohibition on new acquisitions as a possible punishment for systematic non-compliance with the anti-monopoly rules.

On interoperability, Members of Parliament followed the strong recommendation by EFF and other civil society groups not to settle for the low-hanging fruit of data portability and interoperability in ancillary services. Focusing on the elephant in the room - namely, messaging services and social networks - the DMA’s lead committee proposes key provisions that would allow any provider of “equivalent core platform services” to interconnect with the gatekeeper’s number-independent interpersonal communication services (like messaging apps) or social network services upon request and free of charge. To avoid discrimination, interconnection must be provided under objectively the same conditions and quality that are available to or used by the gatekeeper, its subsidiaries, or its partners. The objective is functional interaction with these services while guaranteeing a high level of security and data protection.

Competitive Compatibility

Another positive feature is the DMA’s anti-circumvention provision, which follows EFF’s suggestions by stating that gatekeepers should abstain from any behavior that discourages interoperability by using “technical protection measures, discriminatory terms of service, subjecting application programming interfaces to copyright or providing misleading information” (Article 6(a)).

Interoperability Caveats

The interoperability obligations for gatekeepers come with caveats and question marks. The implementation of the interconnection rules for messaging services is subject to the requirements of the Electronic Communications Code, while those for social networks depend on yet-to-be-defined specifications and standards. The phrasing, too, leaves room for interpretation. For example, the relationship between the obligation to provide interconnection and how to provide it (“same conditions available or used”) is unclear and could lead to restrictions in practice. On the other hand, the Preamble of the DMA makes the legislative intent crystal clear. It explains that “the lack of interconnection features can affect users’ choice and ability to switch due to the incapacity for end user to reconstruct social connections and networks provided by the gatekeeper.” Providers of alternative core platform services should thus be allowed to interconnect. For number-independent interpersonal communication services, this means that third-party providers can request interconnection for features “such as text, video, voice and picture;” for social networking services, this means interconnection on basic features “such as posts, likes, and comments”.

Next Steps: Vote and Negotiations

It’s now the job of the EU lawmakers to put this objective into clear and enforceable language. The text approved in committee will be submitted for a vote by the full House in an upcoming plenary session, and the Council of the EU, whose position is much less ambitious, must also agree on the text for it to become law. EFF will continue pushing for rules that can end corporate concentration.

Manifest V3: Open Web Politics in Sheep's Clothing

EFF - Mon, 11/22/2021 - 18:29

When Google introduced Manifest V3 in 2019, web extension developers were alarmed at how much functionality would be taken away from the features they provide users, especially features like blocking trackers and providing secure connections. This new iteration of Google Chrome’s web extensions interface still has flaws that might be addressed through thoughtful consensus of the web extension developer community. However, two years and counting of discussion and conflict around Manifest V3 have ultimately exposed the problematic power Google holds over how millions of people experience the web. With the more recent announcement of the official transition to Manifest V3 and the deprecation of Manifest V2 in 2023, many privacy-focused web extensions will be limited in how they can protect users.

The security and privacy claims that Google has made about web extensions may or may not be addressed by Manifest V3. But the fact remains that the extensions users have relied on for privacy will be heavily stunted if the current proposal moves forward. A move that was presented as user-focused actually takes away users’ power to block unwanted tracking for their own security and privacy needs.

Large Influence, Little Challenge

First, a short history lesson. In 2015, Mozilla announced its move to adopt the webRequest API, already used by Chrome, in an effort to synchronize the landscape for web extension developers. Fast-forward to the Manifest V3 announcement in 2019: Google put Mozilla in the position of choosing whether its Firefox browser would split from or sync with Chrome. Splitting would mean taking a strong stand against Manifest V3, positioning Firefox as an alternative and supporting web extension developers’ innovation in user privacy controls. Syncing would mean going along with Google’s plan for the sake of not splitting up web extension development any further.

Mozilla has decided, for now, to support both Manifest V2’s blocking webRequest API and MV3’s declarativeNetRequest API. In a move very much shaped by Google’s push to make MV3 the standard, supporting both APIs is only half the battle. MV3 dictates an ecosystem change that limits MV2 extensions and would likely force MV2-based extensions to conform to MV3 in the near future. Mozilla’s acknowledgement that MV3 doesn’t meet web extension developers’ needs shows that MV3 is not yet ready for prime time. Yet there is pressure on stable, trusted extensions to allocate resources to port themselves to more limited versions running on a less stable API.

Manifest V3 Technical Issues

Even though strides have been made in browser security and privacy, web extensions like Privacy Badger, NoScript, and uBlock Origin have filled the gap by providing the granular control users want. One of the most significant changes outlined in Manifest V3 is the removal of the blocking webRequest API and the flexibility it gave developers to programmatically handle network requests on behalf of the user. Queued to replace it, the declarativeNetRequest API imposes low caps on how many sites these extensions can cover. Another mandate is moving from Background Pages, a context that allows web extension developers to properly assess and debug their code, to an alternative, less powerful context called Background Service Workers. This context wasn’t originally built with web extension development in mind, which has led to its own conversation in many forums.
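To make the contrast concrete, here is a minimal sketch of the two models, not a faithful reproduction of any real extension. The domain names are placeholders, the chrome.* calls only run inside an actual extension, and the blocking decision is kept in a plain function so the idea stands on its own:

```javascript
// Sketch only: "tracker.example" and "ads.example" are placeholder domains.

// MV2 model: the extension runs its own logic on every request.
const blocklist = ["tracker.example", "ads.example"];

function shouldBlock(url, list) {
  const host = new URL(url).hostname;
  return list.some((bad) => host === bad || host.endsWith("." + bad));
}

// Registration happens only inside a browser extension context.
if (typeof chrome !== "undefined" && chrome.webRequest) {
  chrome.webRequest.onBeforeRequest.addListener(
    (details) => ({ cancel: shouldBlock(details.url, blocklist) }),
    { urls: ["<all_urls>"] },
    ["blocking"]
  );
}

// MV3 model: the extension hands the browser a static rule up front;
// extension code never sees the individual requests.
const mv3Rule = {
  id: 1,
  action: { type: "block" },
  condition: { urlFilter: "||tracker.example^", resourceTypes: ["script"] },
};

if (typeof chrome !== "undefined" && chrome.declarativeNetRequest) {
  chrome.declarativeNetRequest.updateDynamicRules({ addRules: [mv3Rule] });
}
```

The practical difference is that the MV2 listener can consult any state the extension has (user settings, heuristics, a learned tracker list), while the MV3 rule is limited to what the declarative condition syntax and the rule caps allow.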

In short, Service Workers were meant for a sleep/wake cycle of web asset-to-user delivery—for example, caching consistent images and information so the user won’t need to use a lot of resources when reconnecting to that website again over a limited connection. Web extensions instead need persistent communication between the extension and the browser, often based on user interaction, like being able to detect and block ad trackers as they load onto the web page in real time. This mismatch has resulted in a significant list of issues that will have to be addressed to cover many valid use cases. These discussions, however, are happening while web extension developers are being asked to port to MV3 within the next year, without a stable workflow available and with pending issues such as no defined service worker context for web extensions, pending WebAssembly support, and a lack of consistent and direct support from the Chrome extensions team itself.
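The lifecycle mismatch can be sketched in a few lines. A persistent MV2 background page can keep state in an ordinary variable; an MV3 service worker may be torn down between events, so state has to round-trip through storage on every wakeup. The storage object and onBlockedEvent handler below are hypothetical stand-ins (the real API would be chrome.storage.local), used so the logic runs outside a browser:

```javascript
// In an MV2 persistent background page, in-memory state is safe:
let blockedCount = 0; // lives as long as the browser session

// In an MV3 service worker, the worker can be killed between events,
// so the same counter must round-trip through storage every time.
// A stand-in for chrome.storage.local, with a similar async shape:
const storage = (() => {
  const data = {};
  return {
    get: async (key) => ({ [key]: data[key] }),
    set: async (obj) => { Object.assign(data, obj); },
  };
})();

// Hypothetical event handler: each wakeup starts with blank in-memory
// state, so it must load, update, and save on every event.
async function onBlockedEvent() {
  const { count = 0 } = await storage.get("count");
  await storage.set({ count: count + 1 });
  return count + 1;
}
```

Beyond the awkwardness, this pattern adds an async read-modify-write to every event, which is part of why real-time request handling is a poor fit for the service worker model.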

Privacy SandStorm

Since the announcement of Manifest V3, Google has introduced several controversial “Privacy Sandbox” proposals for privacy mechanisms in Chrome. The highest-stakes discussions about these proposals are in the World Wide Web Consortium, or W3C. While technically “anyone” can listen in to the open meetings, only W3C members can propose formal documentation on specifications and hold leadership positions. Being a member has its own overhead of fees and time commitment. This is something a large multinational corporation can easily overcome, but it can be a barrier to user-focused groups. Unless these power dynamics are directly addressed, a participant’s voice gets louder with market share.

More recently, after the many Google-forum-based discussions around Manifest V3, a WebExtensions Community Group has been formed in the W3C. Community group participation does not require W3C membership, but community groups do not produce standards. Chaired by employees from Google and Apple, this group states that by “specifying the APIs, functionality, and permissions of WebExtensions, we can make it even easier for extension developers to enhance end user experience, while moving them towards APIs that improve performance and prevent abuse.”

But this move for greater democracy would have been more powerful and effective before Google’s unilateral push to impose Manifest V3. This story is disappointingly similar to what occurred with Google’s AMP technology: more democratic discussions and open governance were offered only after AMP had become ubiquitous.

With the planned deprecation of Manifest V2 extensions, the decision has already been made. The rest of the web extensions community is forced to comply, deviate, or retreat to a browser extension ecosystem that doesn’t include Chrome. And that’s harder than it may sound: Chromium, the open-source browser project on which Chrome is based, also underpins Microsoft Edge, Opera, Vivaldi, and Brave. Vivaldi, Brave, and Opera have issued statements on MV3 and their plans to preserve the ad-blocking and privacy-preserving features of MV2, yet the ripple effects are clear whenever Chrome makes a major change.

What Does A Better MV3 Look Like?

Some very valid concerns and requests have been raised in the W3C WebExtensions Community Group that would help return the web extensions ecosystem to a better place:

  1. Make the declarativeNetRequest API optional, as it currently is. The API provides a path for extensions with more static and simple features, without requiring them to implement the more powerful APIs. Extensions that use the blocking webRequest API, with its added power, can be given extra scrutiny during submission review. 
  2. To address the technical issues around background service workers, Mozilla has proposed in the W3C group an alternative for web extensions, dubbed “Limited Event Pages,” which restores much of the standard website API support lost with background service workers. Safari has expressed support, but Chrome has said it will not support the proposal, with its reasons not explicitly stated at the time of this post.
  3. Introduce no further regressions of important MV2 functionality, such as the ability to inject scripts before page load, which is currently broken in MV3 with amendments pending.
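The contrast in point 1 can be illustrated with a sketch of a declarativeNetRequest static rule list (a hypothetical rules file; the filter pattern is illustrative, not taken from any real extension). The extension ships declarative rules like these and the browser enforces them itself, which works for simple, static blocking but leaves no room for the real-time, code-driven analysis that a blocking webRequest listener allows:

```json
[
  {
    "id": 1,
    "priority": 1,
    "action": { "type": "block" },
    "condition": {
      "urlFilter": "||tracker.example^",
      "resourceTypes": ["script", "xmlhttprequest"]
    }
  }
]
```

Because the rule list is fixed at the time the extension is packaged or updated, any blocking logic that depends on inspecting requests as they happen simply cannot be expressed this way.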

Even though one may see the web extension API changes and the privacy mechanism proposals as two separate endeavors, together they illustrate the expansive power one company has over the ecosystem of the web, both when it does great things and when it makes bad decisions. Who has the burden of enforcing what is fair: the standards organizations that engage with large companies’ proposals, or the companies themselves? And who holds the most power when one constituency says “no” and another says “yes”?

Community partners, advocates, and smaller companies are permitted to say no and decline to work with companies that frequently enter the room with worrying proposals, but that company can then claim that silence means consensus when it decides to go forward with a plan. Similar dynamics occurred when the W3C grappled with Do Not Track (DNT), where proponents of weaker privacy mechanisms feigned concern over user privacy and choice. So large companies like Google can make decisions, harmful or widely useful, with little incentive to say no to themselves.

In the case of MV3, Google gave room and time to discuss issues with the web extensions community. That is the bare minimum standard for making such a big change, and to congratulate a powerful entity simply for making space for many other voices would only reinforce the sentiment that the bare minimum should be the norm in open web politics.

No matter how well-meaning a proposal may be, the reality is that millions of people’s experiences on the internet are often left up to the ethics of a few people at companies and standards organizations.

Police Aerial Surveillance Endangers Our Ability to Protest

EFF - Mon, 11/22/2021 - 17:58

The ACLU of Northern California has concluded a year-long Freedom of Information campaign by uncovering massive spying on Black Lives Matter protests from the air. The California Highway Patrol directed aerial surveillance, mostly done by helicopters, over protests in Berkeley, Oakland, Palo Alto, Placerville, Riverside, Sacramento, San Francisco, and San Luis Obispo. The footage, which you can watch online, includes police zooming in on individual protestors, die-ins, and vigils for victims of police violence.

You can sign the ACLU’s petition opposing this surveillance here.

Dragnet aerial surveillance is often unconstitutional. In summer 2021, the Fourth Circuit ruled that Baltimore’s aerial surveillance program, which surveilled large swaths of the city without a warrant, violated the Fourth Amendment right to privacy for city residents. Police planes or helicopters flying overhead can easily track and trace an individual as they go about their day—before, during, and after a protest. If a government helicopter follows a group of people leaving a protest and returning home or going to a house of worship, there are many facts about these people that can be inferred. 

Not to mention, high-tech political spying makes people vulnerable to retribution and reprisals by the government. Despite their constitutional rights, many people would be chilled and deterred from attending a demonstration protesting against police violence if they knew the police were going to film their face, and potentially identify them and keep a record of their First Amendment activity.

The U.S. government has been spying on protest movements for as long as there have been protest movements. The protests for Black Lives in the summer of 2020 were no exception. For over a year, civil rights groups and investigative journalists have been uncovering the diversity of invasive tactics and technologies police used to surveil protestors and activists exercising their First Amendment rights. Earlier this year, for example, EFF uncovered how the Los Angeles Police Department requested Amazon Ring surveillance doorbell footage of protests in an attempt to find “criminal behavior.” We also discovered that police accessed BID cameras in Union Square to spy on protestors.

Like the surveillance used against water protectors at the Dakota Access Pipeline protests, the Occupy movements across the country, or even the Civil Rights movement in the mid-twentieth century, it could take years or even decades to uncover all of the surveillance mobilized by the government during the summer of 2020. Fortunately, the ACLU of Northern California has already exposed CHP’s aerial surveillance of the protests for Black lives.

We must act now to protect future protestors from the civil liberties infringements the government conjures on a regular basis. Aerial surveillance of protests must stop.

Digital Rights Updates with EFFector 33.7

EFF - Fri, 11/19/2021 - 13:02

Want the latest news on your digital rights? Then you’ve come to the right place! Version 33, issue 7 of EFFector, our monthly-ish newsletter, is out now! Catch up on the latest EFF news, from how Apple is listening and retracting some of its phone-scanning features to how Congress can act on the Facebook leaks, by reading our newsletter or listening to the new audio version below.


EFFECTOR 33.07 - Victory: Apple will retract some harmful phone-scanning

Make sure you never miss an issue by signing up by email to receive EFFector as soon as it's posted! Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and now listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Apple’s Self Service Repair Program Must Live Up To Its Promises

EFF - Thu, 11/18/2021 - 19:12

The Right to Repair movement got a boost this week when Apple announced a new program, Self Service Repair, that will let people buy genuine Apple parts and tools to make some of their own repairs to a limited set of Apple products, such as newer iPhones and some Macs, starting early next year. Implemented well, Apple’s program could be huge for everyone who supports the right to repair.

This is a major shift for the company, which has fought for years against movements to expand people’s right to repair their Apple products. Right-to-repair advocates have not only pushed the company to move on this issue, but have also gotten regulators and lawmakers to acknowledge the need to protect the right to repair in law. Apple’s announcement is only one illustration of how far right-to-repair advocacy has come; in just the past two years, advocates have won at the ballot box in Massachusetts, received a supportive directive from the Biden Administration, changed policy at Microsoft, and made gains at the Library of Congress to expand repair permissions.

The Self Service Repair Program could be another feather in that cap. But now that Apple has announced the program, we urge them to roll it out in ways that truly expand their customers’ access and choice.

It’s important that Apple’s program, or any such program, not come with strings attached that make it unworkably difficult or too expensive for a normal person to use. In the past, Apple has done both, as YouTuber and professional repairer Louis Rossmann has pointed out.

Apple’s Independent Repair Provider Program, which was supposed to make manuals and parts more available to independent repairers, did not live up to its early promise. In practice, it saddled those who wanted to participate with restrictive non-disclosure agreements, made it difficult to obtain parts, and made it impossible for independent repair shops to keep parts in stock to respond quickly to repair requests.

The company also ultimately limited the Independent Repair Provider Program to a few parts for a few devices. Apple should not do this again with the Self Service Repair Program. At launch, the forthcoming program is very limited: first to parts for the iPhone 12 and 13, and soon to Mac computers with M1 chips. Apple has said its repair program will support the most frequently serviced parts, but those are not the only components that break; it would be great to see this list continue to expand. As it does, Apple should strive to make the program accessible and provide parts in ways that protect device owners from high replacement charges. For example, if someone drops their phone, they should be able to buy just a screen, not a whole display assembly.

We urge Apple not to repeat past mistakes, but instead move forward with a program that truly encourages broader access to parts, manuals, and tools.

Expanding access to repair also means providing support for the independent repair shops who help people who need their products fixed but lack the technical knowledge or confidence to do so. The company should go further to support independent shops—which, after all, are also working toward the goal of keeping Apple’s customers happy.

We’ve worked for years with our fellow advocates, such as the Repair Coalition, iFixit, U.S. PIRG, and countless others, to shift the conversation around the right to repair. We must ensure that the market continues to get better for people who want choice when it comes to fixing their devices—whether that’s protecting individual rights to fix devices, supporting independent repair shops, encouraging more companies to take steps that embrace this right, or winning cases and passing laws to make it crystal clear that people have the right to repair their own devices.

Apple’s announcement shows there has been considerable pressure on the company to change its designs and policy to answer consumer demand for the right to repair. Let’s keep it up and keep them on the right track.

EFF Tells Court to Protect Anonymous Speakers, Apply Proper Test Before Unmasking Them In Trademark Commentary Case

EFF - Thu, 11/18/2021 - 15:45

Judges cannot minimize the First Amendment rights of anonymous speakers who use an organization’s logo, especially when that use may be intended to send a message to the trademark owner, EFF told a federal appeals court this week.

EFF filed its brief in the U.S. Court of Appeals for the Second Circuit after several anonymous defendants in a case brought by Everytown for Gun Safety Action Fund appealed a district court’s order that mandated the disclosure of their identifying information. Everytown’s lawsuit alleges that the defendants used its trademarked logos in plans for 3-D printed gun parts, and it sought the order to learn the identities of several online speakers who printed them.

Unmasking can result in serious harm to anonymous speakers, exposing them to harassment and intimidation, which is why the First Amendment offers strong protections for such speech. So courts around the country have applied a now well-established three-step test when parties seek to unmask Doe speakers, to ensure that the litigation process is not being abused to pierce anonymity unnecessarily. But in granting the order in this case, the district court instead applied a looser test that is usually used only in P2P copyright cases. The court then ruled that the online speakers could not rely on the First Amendment here because “anonymity is not protected to the extent that it is used to mask the infringement of intellectual property rights, including trademark rights.”

That ruling cannot stand. As we explained in our friend-of-the-court brief, “Although the right to speak anonymously is not absolute, the constitutional protections it affords to speakers required the district court to pause and meaningfully consider the First Amendment implications of the discovery order sought by Plaintiffs, applying the correct test designed to balance the needs of plaintiffs and defendants in Doe cases such as this one.”  By choosing to apply the wrong test, and even then in the most cursory way, the district court fell far short of its obligations.

To be clear, at this point we aren’t commenting on the merits of Everytown’s trademark claim. Instead, we’re worried about something else: that the court’s ruling, if affirmed by the Second Circuit, will be used in other trademark cases to minimize the interests of speakers who use trademarks as part of their commentary. 

The traditional, robust legal test under the First Amendment requires those seeking to identify anonymous speakers to give them notice and meet a high evidentiary standard, which ensures the plaintiffs have meritorious legal claims and are not misusing courts to intimidate or harass anonymous speakers. If those steps are met, the First Amendment requires courts to weigh several factors, including the nature of the expression at issue and whether there are ways to provide plaintiffs with the information they need short of publicly identifying the anonymous speakers.

The district court instead relied on the lower standard used in cases involving peer-to-peer networks, which offers insufficient protections for anonymous speakers and should never have been used in this case.

If the court had looked instead to trademark precedent, it would have found that several sister courts have applied the more traditional test in trademark cases. And that is as it should be. As we explain:

as courts around the country have recognized, trademark uses may implicate First Amendment interests in myriad ways. Thus, trademark rights must be carefully balanced against constitutional rights, to ensure that trademark rights are not used to impose monopolies on language and intrude on First Amendment values

The Second Circuit granted defendants’ request for an administrative stay of the district court’s order and plans to more fully review the appeal next week. We hope that the court will reverse the district court’s order and require it to seriously consider the competing interests here before issuing any other unmasking orders, in this or any other case. 

Podcast Episode: What Police Get When They Get Your Phone

EFF - Tue, 11/16/2021 - 04:00
Episode 101 of EFF’s How to Fix the Internet

If you get pulled over and a police officer asks for your phone, beware. Local police now have sophisticated tools that can download your location and browsing history, texts, contacts, and photos to keep or share forever. Join EFF’s Cindy Cohn and Danny O’Brien as they talk to Upturn’s Harlan Yu about a better way for police to treat you and your data. 

Click below to listen to the episode now, or choose your podcast player: Privacy info. This embed will serve content from


Today, even small-town police departments have powerful tools that can easily access the most intimate information on your cell phone. 

When Upturn researchers surveyed police departments on the mobile device forensic tools they were using on mobile phones, they discovered that the tools are being used by police departments large and small across America. There are few rules on what law enforcement can do with the data they download, and not very many policies on how the information should be stored, shared, or destroyed.


Mobile device forensic tools can access nearly everything—all the data on the phone—even when they’re locked. 

You can also find the MP3 of this episode on the Internet Archive.

In this episode you’ll learn about:

  • Mobile device forensic tools (MDFTs) that are used by police to download data from your phone, even when it’s locked
  • How court cases such as Riley v. California powerfully protect our digital privacy, but those protections are evaded when police get verbal consent to search a phone
  • How widespread the use of MDFTs are by law enforcement departments across the country, including small-town police departments investigating minor infractions 
  • The roles that phone manufacturers and mobile device forensic tool vendors can play in protecting user data 
  • How re-envisioning our approaches to phone surveillance helps address issues of systemic targeting of marginalized communities by police agencies
  • The role of warrants in protecting our digital data. 

Harlan Yu is the Executive Director of Upturn, a Washington, D.C.-based organization that advances equity and justice in the design, governance, and use of technology. Harlan has focused on the impact of emerging technologies in policing and the criminal legal system, such as body-worn cameras and mobile device forensic tools, and in particular their disproportionate effects on communities of color. You can find him on Twitter @harlanyu.

If you have any feedback on this episode, please email

Below, you’ll find legal resources – including links to important cases, books, and briefs discussed in the podcast – as well as a full transcript of the audio.

EFF is deeply grateful for the support of the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, without whom this podcast would not be possible.  

Transcript of Episode 101: What Police Get When They Get Your Phone 

Harlan: The fact is that all of this information is collected and saved on your phone, right? Your web browsing history, your location history. This is all information that is now kept digitally in ways we never had records of before. And so over this past decade, smartphones have become this treasure trove for law enforcement. 

Cindy: That's Harlan Yu. And he's our guest today on How to Fix The Internet. Harlan is the executive director at Upturn where he's working to advance equity and justice in the way technology is used. 

Danny: Harlan's going to talk to us about some of the tools used in policing. This tech makes law enforcement much more powerful when it comes to street level surveillance, and we'll explore some of the dangers in that.

Cindy: Harlan has solutions that will make us all safer and protect our privacy. One of our central themes at EFF is that when you go online or use digital tools, your rights should go with you. Harlan is going to tell us how to get there.

Cindy: I'm Cindy Cohn, EFF's executive director.

Danny: And I'm Danny O'Brien, and this is How to Fix the Internet, a podcast of the Electronic Frontier Foundation.

Cindy: Harlan, thank you so much for joining us. At Upturn, you have been working in the space where technology and justice meet, and I'm really excited to dig into some of this with you.

Harlan: Thanks so much for having me, Cindy.

Cindy: So let's start by giving an explanation about what kinds of tools police are using when it comes to our digital phones.

Harlan: Over the past two years, my team at Upturn and I have published and been doing a lot of research on law enforcement's use of mobile device forensic tools. What a mobile device forensic tool does is this: law enforcement plugs your cell phone into the device, and it allows them to extract and copy all of the data, so all of the emails, texts, photos, locations, and contacts, even deleted data, off of your cell phone. And if necessary, it will also circumvent the security features on the phone. 

Harlan: So, for example, device-level encryption, in order to do that extraction. Once it has all of the data from your phone, these tools also help law enforcement analyze all of that data in much more efficient ways. So imagine gigabytes of data on your phone: the tool can help law enforcement do keyword searches, create social graphs, make maps of all the places you've been. You know, so an officer who's not super tech savvy will be able to easily pore over that information. It can help officers automatically detect and filter for photos that have, say, weapons or tattoos, or do text-level classification as well. 

Cindy: Yeah, there were some screenshots in that report that were really pretty stunning. You know, a cute little touchscreen that lets you push a button and find out whether people are talking about drugs. Another little touchscreen that lets you identify the people you talk to most often. You know, really user-friendly.

Harlan: These tools are made by a range of different vendors, the most popular being Cellebrite, Grayshift (which makes a tool called GrayKey), and Magnet Forensics. And, you know, there's a whole industry of vendors that make these tools. What our report did was submit about 110 public records requests to local and state law enforcement agencies around the country, asking what tools they've purchased, how they're using them, and whether there are any policies in place that constrain their use. And what we found was that almost every major law enforcement agency across the United States already has these tools, including all 50 of the largest police departments in the country and state law enforcement agencies in all 50 states and the District of Columbia. 

Cindy: Wow, all across the country. How much are police using it? 

Harlan: We found through our public records requests that law enforcement has done hundreds of thousands of cell phone searches and extractions since 2015. This is not just limited to the major law enforcement agencies that have the resources to purchase these tools. We also found that many smaller agencies can afford them: cities and towns with under tens of thousands of residents and maybe a dozen or two dozen officers, places like Shaker Heights in Ohio, or Lompoc in California, or Walla Walla, Washington. The breadth and availability of these tools was pretty shocking to us.

Cindy: You know, people might think that this is something the FBI can do in national security cases, or in other situations in which we've got very serious crimes by very dangerous people. But the thing that was stunning to me about the report you did was just how easy it is to do this, how often it happens, and how mundane the crimes are that are being identified through this. Can you give me a couple more examples, or talk about that a little more? 

Harlan: Yeah, that's exactly right. I think one of the main takeaways from our report is just how pervasive this tool is, even for the most common crimes. You know, I think there's this narrative, especially at the national level, around encryption back doors, and the way that story gets told is that law enforcement will use these tools in high-profile cases, cases like terrorism and child exploitation. They even use terms around exceptional or extraordinary access, which kind of indicate that access will be rare. I think what our report does is challenge this prevailing wisdom that law enforcement is going dark. 

What law enforcement is saying is far from the entire story. As our report points out, these kinds of tools, and law enforcement's interest in accessing data on people's cell phones, come into play not only in cases involving major harm. We documented in our report how, across the country, these tools are being used to investigate cases including graffiti, shoplifting, vandalism, traffic crashes, parole violations, petty theft, public intoxication, and the full gamut of drug-related offenses. You name it: these tools are being used every day on the streets in the United States right now.

Danny: “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation's Program in Public Understanding of Science, enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. 


Danny: So you say that, that these devices not only can scan for data, but also make copies. Is there any kind of understanding we have about how long those copies are kept?

Harlan: That is a really important issue. One thing we asked law enforcement agencies to provide through our public records requests was whether they have any policies in place. Just about half indicated that they had no policies at all, and among those that did, only about nine had policies we would consider detailed enough to provide any meaningful guidance to constrain what officers do.

So I think, in large part, law enforcement agencies don't have specific policies in place around the use of these tools, and that includes, you know, how long a law enforcement agency can retain and save that data. Now, maybe I'll just raise here a recent case in Wisconsin, State v. Burch, in which EFF, the ACLU, and EPIC recently filed an amicus brief. It was a case in Wisconsin where the suspect, Burch, was involved in a hit and run. The police verbally asked Burch whether they could see his text messages. The suspect said yes, and the police had Mr. Burch sign a kind of vague consent form to search the phone.

And rather than just searching and looking at the text messages covered by the vague consent form, law enforcement did a full forensic extraction of the phone and copied all of the data. Ultimately they found no evidence in that particular case, but they stored that data. Then, months later, the Brown County Sheriff's Office was investigating a homicide, and they suspected that Mr. Burch was somehow involved.

And so, based on the extraction that a different police department did and the retention of that data, the Brown County Sheriff's Office was able to get a copy of that extraction and search the phone again. They found that the suspect had viewed news about the murder, and there was location data on the phone indicating that he might have been around the location.

In any case, he was then arrested and charged with the homicide, an entirely different case from the first extraction. So I think that case illustrates the dangers of, well, not only consent searches, which we can talk about, but also of indefinite retention and the use of these tools overall.

Cindy: Oh, it's just chilling, right? I mean, essentially the police have a time machine. If they get your information at any point in time, then they can just go back to it later and look at it. And I think it's important to recognize that the cases we hear about, like this case in Wisconsin, are the cases in which they found something that could be incriminating. But that says nothing about all the stories beneath the surface, where they didn't find anything but still did a time machine search on people.

Cindy: I want to talk about consent in a second, but I think one of the things your report really points out is that, given the racial problems we have in our law enforcement right now, this has very serious implications for equity, for who's going to get caught up in these kinds of broad searches and time machine searches. And I wonder if you want to talk a little bit more about that.

Harlan: Overall, we see law enforcement adoption of these tools as a dangerous expansion of their investigatory power. Given how widespread and routine the use of these tools is at the local level, and given our history of racist and discriminatory policing practices across the country, which continues to this day, it's highly likely that these tools disparately affect and are used against communities of color, the communities that are already being over-policed.

Danny: What are the kinds of things they can get from these searches?

Harlan: Mobile device forensic tools can access nearly everything, all the data on the phone, sometimes even when it's locked. You know, in creating mobile phone operating systems, designers have to balance security with user convenience, right? So even when your phone's locked, it's really nice to get notifications, to know when there's an email or an event on your calendar. 

Moreover, many Americans, especially people of color and people of lower incomes rely solely on their cell phones to connect to the internet.

Harlan: And so, over this past decade, smartphones have become this treasure trove for law enforcement, where the information we store on our phones arguably contains much more sensitive information than even the physical artifacts in our homes, which have traditionally been perhaps the most sacred place in terms of constitutional protection from government intrusion.

Cindy: Now I want to talk a little bit about how the courts have been addressing this situation. In a great victory for privacy, we won a case called Riley v. California in the Supreme Court a few years ago, which basically said that you can't search somebody’s phone incident to arrest without a warrant. You need to go get a warrant.

Harlan: Law enforcement is required to get a warrant to perform these kinds of phone searches, but there are many exceptions to the warrant requirement, one of them being the consent exception. This is a really common practice that we're seeing on the ground. When there's a consent search, the search is not subject to the constraints and oversight that warrants typically provide.

Now, that's not to say that warrants actually provide that many constraints in reality, and we can talk about that; we see them more as a speed bump. But with consent searches, even those basic legal constraints are not in place. And so this is one of the reasons why one of the recommendations in our report is to ban consent searches of mobile phones, because the idea of a consent search in the policing context is essentially a legal fiction. Several states have banned the use of consent searches in traffic stop situations: New Jersey in 2002, Minnesota in 2003. 

Earlier this year the DC police reform commission made the recommendation to the DC Council that it prohibit all consent searches, not just for mobile phones, but a blanket prohibition across the board. And if the DC Council takes up this recommendation, as far as I know it would be the first full ban of consent searches anywhere across the country.

And so that's where Upturn believes the law should go. 

Cindy: Yeah. I just think that the idea of even calling them consent searches is a bit of a lie, right? You know, it's either let us search your phone, or let us search your house, or we're going to take you down and book you and hold you for as many hours as we possibly can. That isn't consent, right?

I think that one of the things that we're doing here is we're trying to be honest about a situation in which consent is actually the wrong word for what's going on here, you know. I consent to lots of things because I have free will. These are not situations like that. 

Danny: And I don't think that people would necessarily understand what they were consenting to. I mean, this has been eye-opening for me and I, I feel like I track this kind of thing, but if we're talking about banning consent searches using this technology, do you think the technology as a whole should be banned, do you think police should have access to these tools at all?

Harlan: I think the goal needs to be to reduce the use of these tools and the data available to law enforcement.

Danny: So, would that be a question of limiting the use of these tools to serious crimes, or putting some constraints on how the data is used or how long it is stored?

Harlan: I mean, I, I would worry about even legitimizing the use of these tools in certain cases, right? Again, when there's a charge, it's just the accusation that a person committed a particular crime. And I think no matter what the charge is, I think people should have the same rights. And so I don't necessarily think that we should relax the rules for certain kinds of charges or not.

Cindy: It's, it's a big step to deny law enforcement a tool, and so what's the other side of that?

Harlan: Well, I think we can look toward all of the costs that our system of policing has on our society, right? When people get roped into the criminal legal system in the United States, it's extremely hard to then, you know, with a criminal record, get a job or have other economic opportunities. To the extent that these tools are, you know, making law enforcement more powerful in their investigative powers, I'm just not sure that that's the direction our society needs to go. Right. The incarceration rate in the United States is already, you know, far outside the norm. 

Danny: I think the way I tend to think about it is that we have this protection, as you say, in a home and possessions, but when you talk about mobile phones, you're actually getting much closer to people's internal thought processes, and it feels more like an interrogation or, in some cases when you can go back and forth like this, a kind of mind-reading exercise. And so if anything, these very intimate devices should have even more protections than we give to our closest living environments.

Harlan: One commentator said the use of these tools in particular creates a window into the soul. Right? These searches are incredibly invasive. They're incredibly broad. And yeah, as you're saying, you know, traditionally the home has been the most sacred place. There's an argument today that our phones should be just as sacred because they have the potential to reveal much more about us than any physical search. 

Cindy: We talked about the Fourth Amendment briefly, but it plays a role here too, right? 


Harlan: The Fourth Amendment requires warrants to describe with particularity the places to be searched and the things to be seized. But in this context, oftentimes law enforcement agents also rely on the plain view exception, which effectively allows law enforcement to do anything during these searches, right?

Harlan: This is a problem that legal scholars have wrestled with, and EFF has wrestled with, for decades. For physical searches, the plain view exception allows law enforcement to seize evidence in plain view in any place that they're lawfully permitted to be, if the incriminating character of the evidence is immediately obvious.

But for digital searches, you know, this standard makes no sense, right? This idea that digital evidence can exist in quote-unquote plain view in the way that physical evidence can, considering how the software displays and sorts the seized data, I think is just incoherent. The language can vary from warrant to warrant, but they all authorize essentially an unlimited and unrestricted search of the cell phone. So there's a question here too, even in the search warrant context, which is whether these warrants are sufficiently particular. I think in many cases the answer has got to be clearly no.

Danny: So these tools to analyze these phones are made by companies all around the world. Do you think they're used all around the world?

Harlan: Yeah, I think human rights activists have been seeing this happen all around the world, especially for journalists who live in authoritarian countries. We're seeing, you know, lots of governments purchasing these tools and using them to limit freedom of speech and freedom of expression in many other places, in addition to here.

Cindy: So let's switch gears and talk a little bit about what the world looks like if we get this right. Unlike a lot of difficult problems, this is one where you've really clearly articulated a way that we can fix it. So let's say that we ban law enforcement use of these devices, or we ban evidence collected through the use of these devices from being admissible, some kind of extension of the exclusionary rule. How's this going to feel and work for those of us who have phones, which is, by the way, all of us?

Harlan: I think, you know, people will probably need to worry a little bit less, or less frequently, about the ways that powerful institutions like the police can have that window into your soul, an inside look at the things that you're thinking, the things that you're searching online, the things that you're curious about, the places that you're going, right, with location data being stored on the phone, whether you're going to a doctor's office or a church or another religious institution. All sorts of sensitive information will at least be accessed less frequently by law enforcement, in a way that hopefully will provide a greater sense of freedom and liberation, especially in the society that we live in here in the United States.

Cindy: The freedom and the space of privacy that we get is not just for the individual whose phone is seized. There's a broader effect here, not just for the people who, you know, find themselves pulled over by the cops. It's going to be for all the people who ever talk to, interact with, learn from, or read about the people who get pulled over by the cops.

Harlan: Yeah, that's absolutely right. Right. The photos on my phone have some pictures of me, but they're also of my family, also of my community. And my text messages obviously include sensitive data that other people are providing to me. The contacts in my phone, right? Just my social graph. 

Danny: So one of the things that I think can make people feel a little bit hopeful in what can feel like a very oppressive story is what they can do to change this. What is the role of individuals in transforming this story? 

Harlan: I'm not sure that individual decisions are really gonna get us to the future where we want to be. Right. We can't tell individuals to buy a higher-end cell phone if they don't have the resources to do so, right, or have every individual, you know, configure their phones in just the right way. I'm not sure that's a realistic way to get to where we want to be. I think, you know, the better approach is to look more systemically at the problems with our law and the problems in law enforcement, problems where, you know, we can fix it for everyone at the systemic level. And I think those are the areas of opportunity on which we should focus.

Danny: In this positive vision that we're presenting, is there a role for the phone companies themselves? Is there some part they should be playing, even in a sort of utopia where the laws and policies and the courts support protecting your privacy?

Harlan: Yeah. The phone manufacturers have essentially been playing a cat and mouse game with law enforcement over decades, right. Uh, these tools that are being created by Cellebrite and Grayshift, you know, they can break into the latest, you know, iPhones and the highest-end Samsung Android phones, with rare exception, right? And so there's this idea, too, that even in the case of a locked phone that law enforcement is having trouble getting access to, even if, you know, you just turn on the phone and there is device encryption, there's actually a significant amount of information on iPhones that remains unencrypted outside of the encrypted portion of the phone, in what technical folks call the state before first unlock. And after the first unlock, once the user unlocks the phone and then it gets locked again, even more unencrypted data becomes available. Right. 
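The "before first unlock" (BFU) versus "after first unlock" (AFU) distinction Harlan describes corresponds to file-based data protection classes on the phone. Here is a rough illustrative sketch in Python: the class names are Apple's real Data Protection constants, but the availability table is a deliberate simplification assumed for illustration, not a description of any particular device or forensic tool.

```python
# Simplified model of iOS Data Protection classes and when files in each
# class are readable. "bfu_locked" = rebooted and never unlocked;
# "afu_locked" = unlocked once, then re-locked; "unlocked" = in use.
# Illustrative sketch only, not an exact description of any device.

PROTECTION_CLASSES = {
    # class name -> set of device states in which its file keys are available
    "NSFileProtectionComplete": {"unlocked"},
    "NSFileProtectionCompleteUnlessOpen": {"unlocked"},  # plus files already open
    "NSFileProtectionCompleteUntilFirstUserAuthentication": {"unlocked", "afu_locked"},
    "NSFileProtectionNone": {"unlocked", "afu_locked", "bfu_locked"},
}

def readable_classes(device_state):
    """Return the protection classes whose files are readable in a given state."""
    return sorted(
        cls for cls, states in PROTECTION_CLASSES.items()
        if device_state in states
    )

# After first unlock, more classes become readable than before first unlock,
# which is the asymmetry Harlan is pointing at.
print(readable_classes("bfu_locked"))  # only the "None"-class files
print(readable_classes("afu_locked"))  # "None" plus "...UntilFirstUserAuthentication"
```

This is why a phone seized in the AFU state is so much more attractive to forensic tools than one that has been powered off: roughly, more categories of data become readable without the passcode.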

Danny: Why is that? 

Harlan: That's a design decision that most manufacturers make to provide users with, you know, convenient features. This is just what they believe is the right balance. And so, yeah, I think there's a role here for the phone manufacturers to continue to address vulnerabilities and to make it more difficult for law enforcement to get access. 

I think there's also a potential role here to play by the vendors of the mobile device forensic tools, right? One thing that we suggest in our report is that the vendors of these tools ought to maintain an audit log for every search, right, that details the precise steps that a law enforcement officer took when extracting and analyzing the phone. The goal here would be to better equip defense lawyers to push back and to challenge the scope of these searches, if we could, for instance, play back, using, say, automatic screen recording technology, exactly what an examiner looked at or the process that the examiner took in doing the search.

This would allow the judge and the defense lawyers a chance to ask questions, and for defense lawyers to have a better chance, potentially, of suppressing over-seized information.
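Upturn's audit-log recommendation is a policy proposal, not a specification, but a toy sketch can make it concrete. Everything below, the field names, the actions, and the `AuditLog` class, is hypothetical: one possible shape for the kind of per-action record that could later be "played back" for a judge or defense lawyer.

```python
# Hypothetical audit-log sketch for a mobile forensic examination.
# Every action the examiner takes appends a timestamped record, so the
# search can later be replayed and its scope challenged in court.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    timestamp: str        # when the action happened (UTC, ISO 8601)
    examiner: str         # who performed the action
    action: str           # e.g. "extract", "search", "view", "export"
    target: str           # what data category or item was touched
    detail: str = ""      # free-text detail, e.g. the search query used

class AuditLog:
    def __init__(self):
        self._entries = []

    def record(self, examiner, action, target, detail=""):
        entry = AuditEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            examiner=examiner, action=action, target=target, detail=detail,
        )
        self._entries.append(entry)
        return entry

    def replay(self):
        """Return entries in order: the 'play back' a court could review."""
        return list(self._entries)

log = AuditLog()
log.record("examiner-01", "extract", "full filesystem image")
log.record("examiner-01", "search", "messages", detail='keyword: "meeting"')
log.record("examiner-01", "view", "photos/IMG_0042.jpg")
print(len(log.replay()))  # 3 recorded actions
```

A real system would also need the log to be tamper-evident and produced automatically by the tool itself rather than by the examiner, which is exactly why the report directs the recommendation at the vendors.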

Cindy: What does public safety look like in this world?

Harlan: Public safety is not the same as policing, right? I think public safety means communities and individuals who have economic safety, who have economic opportunity, who have stable housing, job opportunities, a good education, right? I think we need to, you know, as many Black feminists have laid out in the vision around defunding the police, right, recognize that the idea here isn't just to tear down the police, but it's about what we have to build up. 

Cindy: I really agree with you Harlan, getting this right isn’t about whether we give or take away a particularly sophisticated law enforcement tool. It’s about shoring up the systems in communities that are too often unfairly targeted by surveillance. At EFF we say we can’t surveil ourselves to safety and I think your work really demonstrates that. 

Harlan: The idea here isn't just to tear down the police, but really the process of what we need to build up to support people and their families and their communities: not surveillance tools and law enforcement as we have them today, but the absence of that, and the creation and existence of other structures that are supportive of people's livelihoods and ability to thrive and to be free.

Cindy: Oh, Harlan, this has been so interesting, and we really enjoyed talking with you. The work that you all do at Upturn is just fabulous, really bringing a deep tech lens to the tools that law enforcement is using and recognizing how that's going to impact all of us in society, but especially the most vulnerable.

Cindy: Thank you so much for taking some time to talk with us. And let's hope we move towards this vision of a better world together.

Danny: Thanks Harlan, it’s been great. 

Harlan: Thanks so much for having me. 

Cindy: Well, that was just terrific. You know, one of the things that struck me is that we've spent a lot of time on this podcast, and of course EFF has, you know, fighting for the ability for people to have strong encryption, especially on their devices. One of the things that Upturn's research demonstrated is that's just a tiny little piece of things. In general, our phones, and everything that's on our phones, and even stuff in the cloud that's accessible through our phones, are widely available to law enforcement. So it really strikes me as funny that we're focusing on this tiny little piece, where law enforcement might have some small problems getting access to stuff, when in the gigantic rest of it they already have free access to everything that's on our phones. 

Danny: Well, I think that there's always this framing that the world is going dark for law enforcement because of encryption, and no one talks about the fact that it's lighting up like a huge scanning display when it comes to the devices themselves. Every technologist you talk to says, yeah, all bets are off once you hand a device to someone else, because they can undo whatever protections you might have on it. I think the thing that really struck me about this, though, that I hadn't realized, is just how cheap and available this is. I did have it in my head that this was an FBI thing, and now we're seeing it used by really quite small local town police departments, and for very low-level crime too.

Cindy: Yeah, it's eye-opening. I think the other thing that's eye-opening about this work is how law enforcement is using consent, or at least the fiction of consent, to get around a very powerful Supreme Court protection that we won in a case called Riley v. California in 2014, which bars warrantless searches of a phone incident to arrest, and the cops are simply walking right around that by getting, you know, phony consent from people. 

Danny: I've been in that situation going through immigration, where I'm asked to hand over my phone, and it's very hard to say no, because you just kind of assume they're going to flick through the last few entries, and that's not what happens in these situations. 

Now Harlan wants to ban these consent searches completely. Do you agree with that? 

Cindy: Yeah, I really do, and the reason I do is because it's so phony. I mean, the idea that these are consensual doesn't pass the giggle test, right? Given the way that power works in these situations and the pressure that cops put on you, to call this consent I think is really not true. And so I don't think we should embrace legal fictions, and the legal fiction that these searches are consensual is one that we just need to do away with, because they are not. 

Danny: So while we're talking about banning consent searches, one of the more positive things I got out of this discussion is that there's no implication that we should be banning phones or forcing people to be more cautious in how they use them. Harlan said these tools essentially create a window into the soul. But phones also enhance our lives. They're not just a window into the soul; they actually give us ways to remember things that we would forget, they give us instant access to the world's knowledge, and they make sure that I will never get lost again. All of these are things that we should be able to preserve in a free society. The fact that they are so intimate and so revealing, I think, just means that they have to have the same protections that we would give to the thoughts in our head. 

Cindy: I think this is one of the ways that we need to make sure that we fix things. We need to fix things so that people can still have their devices. They can still have their tools. They can still outsource their memory and part of their brain to a device that they carry around in their pockets all the time. And  that is protected. The answer here isn't to limit what we can do with our devices. The answer is to lift up the protections that we get from law enforcement in society over the fact that we want to use these tools. 

Danny: Thank you for joining us on How to Fix the Internet. Check out our show notes, which include a link to Upturn's report. You can also find legal resources there, transcripts of the episode, and much more. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology.

And the music is by Nat Keefe and Reed Mathis of BeatMower. Thanks for being with us today, and thanks again to our guest, Harlan Yu from Upturn. I'm Danny O'Brien. 

Cindy: And I'm Cindy Cohn.

Related Cases: Riley v. California and United States v. Wurie

EFF’s New Series of How to Fix the Internet Podcast Tackles Toughest Issues in Tech

EFF - Tue, 11/16/2021 - 03:00
Episodes Feature Innovators Seeking Creative Solutions to Build a Better Online World

San Francisco—Troubled when Twitter takes down posts of people or organizations you follow? Concerned about protecting yourself and your community from surveillance? Electronic Frontier Foundation (EFF) has got you, with the launch today of the first season of the How to Fix the Internet podcast, featuring conversations that can plot a pathway out of today’s tech dystopias.

Hosted by EFF Executive Director Cindy Cohn and Special Advisor Danny O’Brien, How to Fix the Internet wades into topics that are top of mind among internet users and builders—ways out of the big tech lock-in, protecting our connected devices, and keeping texts and emails safe from prying eyes, just to name a few.

This season’s episodes will feature guests like comedian Marc Maron, who’ll talk about how, with EFF’s help, he marshaled the podcast community to fend off a troll claiming to own the patent for podcasting. Cohn will also host cybersecurity expert Tarah Wheeler, who’ll discuss how companies can better protect our data from attacks by giving the researchers who report vulnerabilities in their security networks a hearty thank you instead of slapping them with a lawsuit for exposing holes in their information systems.

“We piloted How to Fix the Internet last year, and it took off, because our conversations go beyond just complaining about the problems in our digital lives to explore how people are envisioning and building a better online world,” said Cohn. “We can’t create a better world unless we can envision it, and these conversations are needed to help us see how the world will look when technology better supports, protects, and empowers users.”

In today’s episode, Harlan Yu, executive director of Upturn, a nonprofit advocating justice in technology, will talk about the increasingly sophisticated tools used by police departments across the country to access the sensitive data on phones, even when they are locked. Yu will explain how straightforward changes in the law and technology can create a world where we can walk around with greater security in the tremendous amount of sensitive data we keep on our phones.

The new season of How to Fix the Internet is made possible with the support of the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology.

“We are thrilled to partner with EFF to support the launch of this major new podcast about the challenges posed by Big Tech and what consumers can do to protect their online privacy and security,” said Doron Weber, Vice President and Program Director at the Alfred P. Sloan Foundation. “How to Fix the Internet joins the nationwide Sloan radio effort, which supports shows such as Science Friday, Planet Money, and Radiolab, as well as Sloan programs to protect consumer privacy and promote the dissemination of credible information online with Wikipedia, Consumer Reports, and the Digital Public Library of America.”

To listen to today’s podcast:


The Alfred P. Sloan Foundation
is a New York based, philanthropic, not-for-profit institution that makes grants in three areas: research in science, technology, and economics; quality and diversity of scientific institutions; and public engagement with science. Sloan’s program in Public Understanding of Science and Technology supports books, radio, film, television, theater and new media to reach a wide, non-specialized audience. Sloan's program in Universal Access to Knowledge aims to harness advances in digital information technology to facilitate the openness and accessibility of all knowledge in the digital age for the widest public benefit under fair and secure conditions. For more information, visit or follow the Foundation on Twitter and Facebook at @SloanPublic.

Contact: Jason Kelley, Associate Director of Digital

EFF’s How to Fix the Internet Podcast Offers Optimistic Solutions to Tech Dystopias

EFF - Tue, 11/16/2021 - 02:00

It seems like everywhere we turn we see dystopian stories about technology’s impact on our lives and our futures—from tracking-based surveillance capitalism to street level government surveillance to the dominance of a few large platforms choking innovation to the growing pressure by authoritarian governments to control what we see and say—the landscape can feel bleak. Exposing and articulating these problems is important, but so is envisioning and then building a better future. That’s where our new podcast comes in.

Click below to listen to episodes now, or choose your podcast player:


EFF's How to Fix the Internet podcast offers a better way forward. Through curious conversations with some of the leading minds in law and technology, we explore creative solutions to some of today’s biggest tech challenges.

After tens of thousands of listeners tuned in for our pilot mini-series last year, we are continuing the conversation by launching a full season. Listen today to become deeply informed on vital technology issues and join the movement working to build a better technological future. 

EFF is deeply grateful for the support of the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, without whom this podcast would not be possible.  

“We are proud to partner with EFF to support this new podcast,” said Doron Weber, Vice President and Program Director at the Alfred P. Sloan Foundation. “How to Fix the Internet will bring an unprecedented level of expert knowledge and practical advice to one of the most complex and urgent problems of our technological age.”

With hosts Cindy Cohn and Danny O’Brien, this season we will explore ways that people are building a better world by fighting back against software patent trolls, empowering their communities to stand up for their privacy and security, supporting real security in our networks, phones and devices, creating social media communities that thrive, and safeguarding financial privacy in a world of digitized payments. 

We piloted the concept of an EFF podcast last year in a 6-episode mini-series of the same name. Not only was it a success, garnering tens of thousands of listens, but it also started a conversation. At the end of each episode, we asked how you would fix the internet, and we heard directly from our listeners about what they would do to build a better future. From technical solutions to policy fixes, people across the globe sent in thoughtful responses to what we discussed as well as their own ideas for how they’d like to see tomorrow’s Internet be more vibrant, equitable, decentralized, and free. As we kick off this season, we want to keep the invitation open and the conversation going: send your ideas and suggestions for improving the digital world to

Our goal is to start to imagine how the world will look when technology better supports user power and choices. This means examining how the modern Internet is often rooted in power imbalances, insecurity, and surveillance advertising in ways that have huge consequences for our ability to access information, hold private conversations, and connect with one another. But rather than reiterating everything that’s wrong on the Internet today, we also turn our attention to the solutions — both practical and idealistic — that can help to offer a better path for technology users.

 We also recognize that there is no one perfect fix for technology’s problems—in part because there’s no agreement on what those problems are, and also because there is not just one problem. Through this podcast, we seek to explore a range of different solutions rather than offer any one policy solution. We believe there are a plethora of ways to get it right.

We’re excited to be able to offer this podcast conversation, to spark us all thinking together about how we build a better future.  Please join us—the podcast is available in your podcast player of choice today.  



The Alfred P. Sloan Foundation is a New York based, philanthropic, not-for-profit institution that makes grants in three areas: research in science, technology, and economics; quality and diversity of scientific institutions; and public engagement with science. Sloan’s program in Public Understanding of Science and Technology supports books, radio, film, television, theater and new media to reach a wide, non-specialized audience. For more information, visit or follow the Foundation on Twitter and Facebook at @SloanPublic.

Federal Agencies Need to Be Staffed to Advance Broadband and Tech Competition

EFF - Mon, 11/15/2021 - 14:13

In the U.S., we need better internet. We need oversight over Big Tech, ISPs, and other large companies. We need the federal agencies with the powers to advance competition, protect privacy, and empower consumers to be fully staffed and working. New infrastructure legislation aimed at ending the digital divide gives new responsibilities to the Federal Communications Commission (FCC) and the National Telecommunications and Information Administration (NTIA), and Congress relies on the Federal Trade Commission (FTC) to rein in Big Tech and others. That means we need those agencies staffed—now, more than ever.

The new infrastructure package gives the FCC and the NTIA a lot of new work to do, including deciding how to allocate a large amount of funds to update our lagging internet infrastructure. In the meantime, we are relying on the FTC to police bad acts on the part of technology companies of all levels. When the FCC under Ajit Pai repealed net neutrality protections, they and the ISPs claimed that the FTC could police any abuse—even though the FTC already has big jobs, like safeguarding user privacy and advancing tech sector competition.

However, none of these agencies can do their jobs unless they are fully staffed. And that means that the Senate must confirm President Biden’s nominees. The consequences of sitting back are significant. These agencies have been given once-in-a-generation responsibilities. Senate leadership should commit itself to fully staffing each of these agencies before they leave for the holidays this December, so that the work on behalf of the public can begin. 

Congress must act on four critical nominations at these agencies by the end of the year, or the agenda for a better internet will fall by the wayside. Jessica Rosenworcel should be confirmed to another term on the FCC, as its chair. Gigi Sohn should be confirmed to a term on the FCC as well. At the FTC, the Senate should confirm Prof. Alvaro Bedoya. And the NTIA's work should be supported by confirming Biden's nominee, Alan Davidson. 

The Incoming FCC Chairman Wants to Fix Internet Access for Children, But Can’t Without a Full Commission

In the middle of the pandemic, children—predominantly those from low-income neighborhoods—were forced onto overpriced and low-quality internet plans to do remote schooling from home. To address this, schools were forced to give wireless ISPs millions of public dollars to rent out mobile hotspots. This provided a “better than nothing” alternative to kids camping out in fast-food parking lots to do homework on wifi. In many places in the United States, this is the product of intentionally discriminatory deployment choices that happened because these companies were unregulated. While too many in Washington DC were busy praising the ISPs during the pandemic, FCC then-Commissioner Jessica Rosenworcel made clear we have to do better. The FCC has been given the power to address “digital discrimination” under the new infrastructure law. EFF and many others support an outright ban on digital redlining in order to prevent deployment practices that target 21st century access to high-income neighborhoods while forever excluding low-income areas.

However, both Chair Rosenworcel and President Biden’s FCC nominee Gigi Sohn need to be confirmed by the Senate by the end of the year to provide the Chair a working majority at the commission. (Disclosure: Sohn is a member of the EFF Board of Directors.) The FCC was inactive in 2021 due to its lack of a working majority, despite all the suffering happening among the public. It should be clear that not confirming both Rosenworcel and Sohn would be akin to doing nothing, despite the new infrastructure law Congress passed.

No One Is Addressing Big Tech if the Federal Trade Commission Remains Understaffed

The Chair of the FTC, Lina Khan, wants to improve the competitive landscape in the technology sector. She has written groundbreaking analysis on how antitrust and competition law should be updated. However, her goals remain at risk if the agency is deadlocked with four sitting commissioners, rather than five.

Professor Alvaro Bedoya, a major critic of Big Tech’s corporate surveillance practices, was nominated by the president to fully staff the FTC following the departure of Commissioner Rohit Chopra. Long known as a privacy hawk, Professor Bedoya is generally inclined to side with Khan on the importance of regulating Big Tech, particularly when it comes to matters involving the corporate surveillance business model. That's why EFF and many civil rights and privacy organizations support his confirmation to the FTC. In essence, Professor Bedoya’s confirmation as an FTC Commissioner will provide Chair Khan with the working majority needed to begin the needed reboot of our competition policies and address consumer privacy when dealing with Big Tech. Preventing or stalling his nomination serves only one purpose—to block those efforts. 

The NTIA has been Given a Massive Job to Close the Digital Divide but Has No Leader

Biden’s NTIA nominee Alan Davidson will be given one of the biggest jobs among the new nominees: spending $65 billion to build long-term infrastructure for all Americans. This will be a multi-year effort with a range of complicated issues, guided by an agency that has never had a mission of this magnitude. The NTIA was in charge back in 2009, when Congress passed the American Recovery and Reinvestment Act, tasking the agency with implementing a much smaller $4 billion grant program.

All of this complicated work remains rudderless until the Senate confirms Davidson as NTIA Administrator to do the job. The absence of an Administrator will greatly hamstring the Biden Administration and the states’ efforts to close the digital divide. 

There is a lot of work to be done, all of it important and necessary. And it can’t be done until the Senate confirms these four nominees.

After Facebook Leaks, Here Is What Should Come Next

EFF - Mon, 11/15/2021 - 11:28

Every year or so, a new Facebook scandal emerges. These blowups follow a fairly standard pattern, at least in the U.S. First, new information is revealed that the company misled users about an element of the platform—data sharing and data privacy, extremist content, ad revenue, responses to abuse—the list goes on. Next, following a painful news cycle for the company, Mark Zuckerberg puts on a sobering presentation for Congress about the value that Facebook provides to its users, and the work that they’ve already done to resolve the issue. Finally, there is finger-wagging, political jockeying, and within a month or two, a curious thing happens: Congress does nothing.

It’s not for lack of trying, of course—much like Facebook, Congress is a many-headed beast, and its members rarely agree on the specific problems besetting American life, let alone the solutions. But this year may be different.

Many of the problems highlighted by these documents are not particularly new. Regardless, we may finally be at a tipping point.

For the last month, Facebook has been at the center of a lengthy, damaging news cycle brought on by the release of thousands of pages of leaked documents, sent to both Congress and news outlets by former Facebook data scientist Frances Haugen. The documents show the company struggling internally with the negative impacts of both Facebook and its former-rival, now-partner platform, Instagram. (Facebook’s attempt to rebrand as Meta should not distract from the takeaways of these documents, so we will continue to call the company Facebook here.)

In addition to internal research and draft presentations released several weeks ago, thousands of new documents were released last week, including memos, chats, and emails. These documents paint a picture of a company that is seriously grappling with, and often failing in, its responsibility as the largest social media platform.

Many of the problems highlighted by these documents are not particularly new. People looking in at the black box of Facebook’s decision-making have come to similar conclusions in several areas; those conclusions have simply now been proven. Regardless, we may finally be at a tipping point.

When Mark Zuckerberg went in front of Congress to address his company’s role in the Cambridge Analytica scandal over three years ago, America’s lawmakers seemed to have trouble agreeing on basic things like how the company’s business model worked, not to mention the underlying causes of its issues or how to fix them. But since then, policymakers and politicians have had time to educate themselves. Several more hearings addressing the problems with Big Tech writ large, and with Facebook in particular, have helped government develop a better shared understanding of how the behemoth operates; as a result, several pieces of legislation have been proposed to rein it in.

Now, the Facebook Papers have once again thrust the company into the center of public discourse, and the scale of the company’s problems have captured the attention of both news outlets and Congress. That’s good—it’s high time to turn public outrage into meaningful action that will rein in the company.

But it’s equally important that the solutions be tailored, carefully, to solve the actual issues that need to be addressed. No one would be happy with legislation that ends up benefitting Facebook while making it more difficult for competing platforms to coexist. For example, Facebook has been heavily promoting changes to Section 230 that would, by and large, harm small platforms while helping the behemoth.

Here’s where EFF believes Congress and the U.S. government could make a serious impact:

Break Down the Walls

Much of the damage Facebook does is a function of its size. Other social media sites that aren’t attempting to scale across the entire planet run into fewer localization problems, are able to be more thoughtful about content moderation, and have, frankly, a smaller impact on the world. We need more options. Interoperability will help us get there.

Interoperability is the simple idea that new services should be able to plug into dominant ones. An interoperable Facebook would mean that you wouldn’t have to choose between leaving Facebook and continuing to socialize with the friends, communities and customers you have there. Today, if you want to leave Facebook, you need to leave your social connections behind as well: that means no more DMs from your friend, no more access to your sibling’s photos, and no more event invitations from your co-workers. In order for a new social network to get traction, whole social groups have to decide to switch at the same time - a virtually insurmountable barrier. But if Facebook were to support rich interoperability, users on alternative services could communicate with users on Facebook. Leaving Facebook wouldn’t mean leaving your personal network. You could choose a service - run by a rival, a startup, a co-op, a nonprofit, or just some friends - and it would let you continue to connect with content and people on Facebook, while enforcing its own moderation and privacy policies.
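To make the idea concrete, here is a toy sketch in Python. Everything in it is hypothetical (there is no public Facebook interoperability API today, and the service stubs and data shapes are invented for illustration); it only shows the shape of the idea: a client merges feeds from several services, and the alternative service enforces its own moderation policy locally, regardless of where a post originated.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    platform: str  # which service the post originated on

def fetch_federated_timeline(sources):
    """Merge posts from several interoperable services into one feed."""
    feed = []
    for fetch in sources:
        feed.extend(fetch())
    return feed

def apply_local_policy(feed, blocked_words):
    """The alternative service enforces ITS OWN moderation rules,
    no matter which platform a post came from."""
    return [p for p in feed
            if not any(w in p.text.lower() for w in blocked_words)]

# Stand-ins for a dominant platform and a small co-op server
# (entirely simulated; no real API is being called).
facebook_stub = lambda: [Post("friend_a", "Party invite!", "facebook"),
                         Post("spammer", "BUY PILLS now", "facebook")]
coop_stub = lambda: [Post("friend_b", "New photos up", "co-op")]

feed = apply_local_policy(
    fetch_federated_timeline([facebook_stub, coop_stub]),
    blocked_words=["buy pills"])
for post in feed:
    print(f"{post.author}@{post.platform}: {post.text}")
```

Real federation protocols such as the W3C’s ActivityPub work along these lines, letting independently run servers exchange content while each applies its own rules.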

We need more options. Interoperability will help us get there.

Critics often argue that in an interoperable world, Facebook would have less power to deny bad actors access to our data, and thus defend us from creeps like Cambridge Analytica. But Facebook has already failed to defend us from them. When Facebook does take action against third-party spying on its platform, it’s only because that happens to be in its interests: either as a way to quell massive public outcry, or as a convenient excuse to undermine legitimate competition. Meanwhile, Facebook continues to make billions from its own exploitation of our data. Instead of putting our trust in corporate privacy policies, we’d need a democratically accountable privacy law, with a private right of action. And any new policies which promote interoperability should come with built-in safeguards against the abuse of user data.

Interoperability isn’t an alternative to demanding better of Facebook - better moderation, more transparency, better privacy rules - rather, it’s an immediate, tangible way of helping Facebook’s users escape from its walled garden right now. Not only does that make those users’ lives better - it also makes it more likely that Facebook will obey whatever rules come next, not just because those are the rules, but because when they break the rules, their users can easily leave Facebook.

Facebook knows this. It’s been waging a “secret war on switching costs” for years now. Legislation like the ACCESS Act that would force platforms like Facebook to open up are a positive step toward a more interoperable future. If a user wants to view Facebook through a third-party app that allows for better searching or more privacy, they ought to be able to do so. If they want to take their data to platforms that have better privacy protections, without leaving their friends and social connections behind, they ought to be able to do that too.

Pass a Baseline, Strong Privacy Law

Users deserve meaningful controls over how the data they provide to companies is collected, used, and shared. Facebook and other tech companies too often choose their profits over your privacy, opting to collect as much as possible while denying users intuitive control over their data. In many ways this problem underlies the rest of Facebook’s harms. Facebook’s core business model depends on collecting as much information about users as possible, then using that data to target ads - and target competitors. Meanwhile, Facebook (and Google) have created an ecosystem where other companies - from competing advertisers to independent publishers - feel as if they have no choice but to spy on their own users, or help Facebook do so, in order to squeak out revenue in the shadow of the monopolists.

Stronger baseline federal privacy laws would help steer companies like Facebook away from collecting so much of our data.  

Stronger baseline federal privacy laws would help steer companies like Facebook away from collecting so much of our data. They would also level the playing field, so that Facebook and Google cannot use their unrivaled access to our information as a competitive advantage. A strong privacy law should require real opt-in consent to collect personal data and prevent companies from re-using that data for secondary purposes. To let users enforce their rights, it must include a private cause of action that allows users to take companies to court if they break the law. This would tip the balance of power away from the monopolists and back towards users. Ultimately, a well-structured baseline could put a big dent in the surveillance business model that not only powers Facebook, but enables so many of the worst harms of the tech ecosystem as well.

Break Up the Tech

Facebook’s broken system is fueled by a growth-at-any-cost model, as indicated by some of the testimony Haugen delivered to Congress. The number of Facebook users and the increasing depth of the data it gathers about them is Facebook’s biggest selling point. In other words, Facebook’s badness is inextricably tied to its bigness.

We’re pleased to see antitrust cases against Facebook. Requiring Facebook to divest Instagram, WhatsApp, and possibly other acquisitions, and limiting the company’s future mergers and acquisitions, would go a long way toward solving some of the problems with the company and injecting competition into a field where it’s been stifled for many years now. Legislation to facilitate a breakup was approved by the House Judiciary Committee and awaits House floor action.

Shine a Light On the Problems

Some of the most detailed documents that have been released so far show research done by various teams at Facebook. And, despite the research being done by Facebook itself, many of its conclusions are critical of Facebook’s own services.

For example: a large percentage of users report seeing content on Facebook that they consider disturbing or hateful—a situation that the researcher notes “needs to change.” Research also showed that some young female Instagram users report that the platform makes them feel bad about themselves.

But one of the problems with documents like these is that it’s impossible to know what we don’t know—we’re getting reports piecemeal, and have no idea what practical responses might have been offered or tested. Also, some of the research might not always mean what first glances would indicate, due to reasonable limitations or the ubiquity of the platform itself.

EFF has been critical of Facebook’s lack of transparency for a very long time. When it comes to content moderation, for example, the company’s transparency reports lack many of the basics: how many human moderators are there, and how many cover each language? How are moderators trained? The company’s community standards enforcement report includes rough estimates of how many pieces of content of which categories get removed, but does not tell us why or how these decisions are taken.

The company must make it easier for researchers both inside and outside to engage in independent analysis.

Transparency about decisions has increased in some ways, such as through the Facebook Oversight Board’s public decisions. But revelations from the whistleblower documents about the company’s “cross-check” program, which gives some “VIP” users a near-blanket ability to ignore the community standards, make it clear that the company has a long way to go. Facebook should start by embracing the Santa Clara Principles on Transparency and Accountability in Content Moderation, which are a starting point for companies to properly indicate the ways that they moderate user speech.

But content moderation is just the start. Facebook is constantly talking out of both sides of its depressingly large mouth—most recently by announcing it would delete the face recognition templates of Facebook users, then backing away from this commitment for its future ventures. Given how two-faced the company has, frankly, always been, transparency is an important step toward ensuring we have real insight into the platform. The company must make it easier for researchers both inside and outside to engage in independent analysis.

Look Outside the U.S. 

Facebook must do more to respect its global user base. Facebook—the platform—is available in over 100 languages, but the company has only translated its community standards into around 50 of those (as of this writing). How can a company expect to enforce its moderation rules properly when they are written in languages, or dialects, that its users can’t read?

The company also must ensure that its employees, and in particular its content moderators, have cultural competence and local expertise. Otherwise it is literally impossible for them to appropriately moderate content. But first, it has to actually employ people with that expertise. It’s no wonder that the company has tended to play catch-up when crises arrive outside of America (where it also isn’t exactly ahead of the game).

And by the way: it’s profoundly disappointing that the Facebook Papers were released only to Western media outlets. We know that many of the documents contain information about how Facebook conducts business globally—and particularly how the company fails to put appropriate resources behind its policymaking and content moderation practices in different parts of the world. Providing the documents to trusted international media publications that have the experience and expertise to offer nuanced, accurate analysis and perspective is a vital step in the process—after all, the majority of Facebook’s users worldwide live outside of the United States and Europe.

Don’t Give In To Easy Answers

Facebook is big, but it’s not the internet. More than a billion websites exist; tens of thousands of platforms allow users to connect with one another. Any solutions Congress proposes must remember this. Though Zuckerberg may “want every other company in our industry to make the investments and achieve the results that [Facebook has],” forcing everyone else to play by their rules won’t get us to a workable online future. We can’t fix the internet with legislation that pulls the ladder up behind Facebook, leaving everyone else below.

For example: legislation that forces sites to limit recommended content could have disastrous consequences, given how commonly sites make (often helpful) choices about the information we see when we browse, from restaurant recommendations to driving directions to search results. And forcing companies to rethink their algorithms, or offer “no algorithm” versions, may seem like fast fixes for a site like Facebook. But the devil is in the details, and in how those details get applied to the entire online ecosystem.

The Facebook leaks should be the starting point—not the end—of a sincere policy debate over concrete approaches that will make the internet—not just Facebook—better for everyone. 

Facebook, for its part, seems interested in easy fixes as well. Rebranding as “Meta” amounts to a drunk driver switching cars. Gimmicks designed to attract younger users to combat its aging user base are a poor substitute for thinking about why those users refuse to use the platform in the first place.

Zuckerberg has gotten very wealthy while wringing his hands every year or two and saying, “Sorry. I’m sorry. I’m trying to fix it.” Facebook’s terrible, no good, very bad news cycle is happening at the same time that the company reported a $9 billion profit for the quarter.

Zuckerberg insists this is not the Facebook he wanted to create. But he’s had nearly two decades of more-or-less absolute power to make the company into whatever he most desired, and this is where it’s ended up—despised, dangerous, and immensely profitable. Given that track record, it’s only reasonable that we discount his suggestions during any serious consideration of how to get out of this place.

Nor should we expect policymakers to do much better unless and until they start listening to a wider array of voices. While the leaks have been directing the narrative about where the company is failing its users, there are plenty of other issues that aren’t grabbing headlines—like the fact that Facebook continues collecting data on deactivated accounts. A focused and thoughtful effort by Congress must include policy experts who have been studying the problems for years.

The Facebook leaks should be the starting point—not the end—of a sincere policy debate over concrete approaches that will make the internet—not just Facebook—better for everyone. 

EFF to Supreme Court: Warrantless 24-Hour Video Surveillance Outside Homes Violates Fourth Amendment

EFF - Fri, 11/12/2021 - 15:23
Police in Illinois Filmed Defendant’s Home Nonstop for 18 Months

Washington, D.C.—The Electronic Frontier Foundation (EFF) today urged the Supreme Court to review and reverse a lower court decision in United States v. Tuggle finding that police didn’t need a warrant to secretly record all activity in front of someone’s home 24 hours a day, for a year and a half.

The Fourth Amendment protects people against lengthy, intrusive, always-on video recording—especially when that video records all activity outside their homes, EFF said today in a brief filed with the court. Our homes are our most private and protected spaces, and police should not film everything that happens at the home without prior court authorization—even if police cameras are positioned on public property. In this case, police used three cameras mounted on utility poles to secretly record Travis Tuggle’s life 24/7 for 18 months. Surveillance like this can reveal intimate details of our private lives, such as when we’re home, who visits and when, what packages we receive, who our children are, and more.

The Supreme Court recognized in the landmark 2018 case Carpenter v. United States that tracking people’s physical movements using cell phone records creates a chronicle of our lives, and collecting such data without a warrant violates the Fourth Amendment. Because of its capacity to create detailed records of what goes on at people’s homes, long-term, warrantless pole camera surveillance is likewise unconstitutional.

EFF, along with the Brennan Center for Justice, the Center for Democracy and Technology, the Electronic Privacy Information Center, and the National Association of Criminal Defense Lawyers, is urging the Supreme Court to take up Tuggle’s case. It would be the first time the court has considered the rules around warrantless pole camera surveillance.

“If left to stand, the lower court’s ruling would allow police to secretly video record anyone’s home, at any time,” said EFF Senior Staff Attorney Andrew Crocker.

EFF and its partners argue that today’s video cameras make it easy for the government to collect massive amounts of information about someone’s private life. They are small, inexpensive, easily hidden, and capable of recording in the dark and zooming in to record even small text from far away. Footage can be retained indefinitely and combined with other police tools like facial recognition and filtering to enhance police capabilities.

“We urge the Court to grant certiorari and rule that using pole cameras to collect information about the comings and goings around someone’s home implicates Fourth Amendment protections,” said EFF Surveillance Litigation Director Jennifer Lynch.


Apple Has Listened And Will Retract Some Harmful Phone-Scanning Features

EFF - Fri, 11/12/2021 - 13:49

Since August, EFF and others have been telling Apple to cancel its new child safety plans. Apple is now changing its tune about one component of its plans: the Messages app will no longer send notifications to parent accounts.

That’s good news. As we’ve previously explained, this feature would have broken end-to-end encryption in Messages, harming the privacy and safety of its users. So we’re glad to see that Apple has listened to privacy and child safety advocates about how to respect the rights of youth. In addition, sample images shared by Apple show the text in the feature has changed from “sexually explicit” to “naked,” a change that LGBTQ+ rights advocates have asked for, as the phrase “sexually explicit” is often used as cover to prevent access to LGBTQ+ material. 

Now, Apple needs to take the next step, and stop its plans to scan photos uploaded to a user’s iCloud Photos library for child sexual abuse images (CSAM). Apple must draw the line at invading people’s private content for the purposes of law enforcement. As Namrata Maheshwari of Access Now pointed out at EFF’s Encryption and Child Safety event, “There are legislations already in place that will be exploited to make demands to use this technology for purposes other than CSAM.” Vladimir Cortés of Article 19 agreed, explaining that governments will “end up using these backdoors to … silence dissent and critical expression.” Apple should sidestep this dangerous and inevitable pressure, stand with its users, and cancel its photo scanning plans.

Apple: Pay attention to the real world consequences, and make the right choice to protect our privacy.

