EFF's Deeplinks Blog: Noteworthy news from around the internet

John Gilmore Leaves the EFF Board, Becomes Board Member Emeritus

Fri, 10/22/2021 - 14:00

Since he helped found EFF 31 years ago, John Gilmore has provided leadership and guidance on many of the most important digital rights issues we advocate for today. But in recent years, we have not seen eye-to-eye on how to best communicate and work together, and we have been unable to agree on a way forward with Gilmore in a governance role. That is why the EFF Board of Directors has recently made the difficult decision to vote to remove Gilmore from the Board.

We are deeply grateful for the many years Gilmore gave to EFF as a leader and advocate, and the Board has elected him to the role of Board Member Emeritus moving forward. "I am so proud of the impact that EFF has had in retaining and expanding individual rights and freedoms as the world has adapted to major technological changes,” Gilmore said. “My departure will leave a strong board and an even stronger staff who care deeply about these issues."

John Gilmore co-founded EFF in 1990 alongside John Perry Barlow, Steve Wozniak, and Mitch Kapor, and provided significant financial support critical to the organization's survival and growth over many years. Since then, Gilmore has worked closely with EFF’s staff, board, and lawyers on privacy, free speech, security, encryption, and more.

In the 1990s, Gilmore found the government documents that confirmed the First Amendment problem with the government’s export controls over encryption, and helped initiate the filing of Bernstein v. DOJ, which resulted in a court ruling that software source code was speech protected by the First Amendment and that the government's regulations preventing its publication were unconstitutional. The decision made it legal in 1999 for web browsers, websites, and software like PGP and Signal to use the encryption of their choice.

Gilmore also led EFF’s effort to design and build the DES Cracker, which was regarded as a fundamental breakthrough in how we evaluate computer security and the public policies that control its use. At the time, the 1970s Data Encryption Standard (DES) was embedded in ATMs and banking networks, as well as in popular software around the world. U.S. government officials proclaimed that DES was secure, while secretly being able to wiretap it themselves. The EFF DES Cracker publicly showed that DES was in fact so weak that it could be broken in one week with an investment of less than $350,000. This catalyzed the international creation and adoption of the much stronger Advanced Encryption Standard (AES), now widely used to secure information worldwide.
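
To put that result in perspective, here is a quick back-of-the-envelope calculation (ours, not from the original project) of the key-testing rate implied by a one-week exhaustive search of DES's 56-bit keyspace:

    # DES uses a 56-bit key, so a brute-force search tries at most 2**56 keys.
    keyspace = 2 ** 56
    seconds_per_week = 7 * 24 * 60 * 60

    # Worst-case rate needed to sweep the entire keyspace in one week; on
    # average, a search finds the key after trying about half the space.
    rate = keyspace / seconds_per_week
    print(f"{rate:.2e} keys per second")  # roughly 1.2e11 keys per second

The point of the demonstration was that purpose-built hardware testing keys at roughly this rate was within reach of a budget under $350,000.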

Among Gilmore’s most important contributions to EFF and to the movement for digital rights has been recruiting key people to the organization, such as former Executive Director Shari Steele, current Executive Director Cindy Cohn, and Senior Staff Attorney and Adams Chair for Internet Rights Lee Tien.

EFF has always valued and appreciated Gilmore’s opinions, even when we disagree. It is no overstatement to say that EFF would not exist without him. We look forward to continuing to benefit from his institutional knowledge and guidance in his new role of Board Member Emeritus.

New Global Alliance Calls on European Parliament to Make the Digital Services Act a Model Set of Internet Regulations Protecting Human Rights and Freedom of Expression

Thu, 10/21/2021 - 23:26

The European Parliament’s regulations and policy-making decisions on technology and the internet have unique influence across the globe. With great influence comes great responsibility. We believe the European Parliament (EP) has a duty to set an example with the Digital Services Act (DSA), the first major overhaul of European internet regulations in 20 years. The EP should show that the DSA can address tough challenges—hate speech, misinformation, and users’ lack of control on big platforms—without compromising human rights protections, free speech and expression rights, and users’ privacy and security.

Balancing these principles is complex, but imperative. A step in the wrong direction could reverberate around the world, affecting fundamental rights beyond European Union borders. To this end, 12 civil society organizations from around the globe, standing for transparency, accountability, and human rights-centered lawmaking, have formed the Digital Services Act Human Rights Alliance to establish and promote a world standard for internet platform governance. The Alliance comprises digital and human rights advocacy organizations representing diverse communities across the globe, including in the Arab world, Europe, United Nations member states, Mexico, Syria, and the U.S.

In its first action towards this goal, the Alliance today is calling on the EP to embrace a human rights framework for the DSA and take steps to ensure that it protects access to information for everyone, especially marginalized communities; rejects inflexible and unrealistic takedown mandates that lead to over-removals and impinge on free expression; and strengthens mandatory human rights impact assessments so that issues like faulty algorithmic decision-making are identified before people get hurt.

This call to action follows a troubling round of amendments approved by an influential EP committee that crossed red lines protecting fundamental rights and freedom of expression. EFF and other civil society organizations told the EP prior to the amendments that the DSA offers an unparalleled opportunity to address some of the internet ecosystem’s most pressing challenges and help better protect fundamental rights online—if done right.

So, it was disappointing to see the EP committee take a wrong turn, voting in September to limit liability exemptions for internet companies that perform basic functions of content moderation and content curation, force companies to analyze and indiscriminately monitor users’ communication or use upload filters, and bestow special advantages, not available to ordinary users, on politicians and popular public figures treated as trusted flaggers.

In a joint letter, the Alliance today called on EU lawmakers to take steps to put the DSA back on track:

  • Avoid disproportionate demands on smaller providers that would put users’ access to information in serious jeopardy.
  • Reject legally mandated strict and short time frames for content removals that will lead to removals of legitimate speech and opinion, impinging rights to freedom of expression.
  • Reject mandatory reporting obligations to Law Enforcement Agencies (LEAs), especially without appropriate safeguards and transparency requirements.
  • Prevent public authorities, including LEAs, from becoming trusted flaggers, and subject the conditions for becoming a trusted flagger to regular review and proper public oversight.
  • Consider mandatory human rights impact assessments as the primary mechanism for examining and mitigating systemic risks stemming from platforms' operations.

For the DSA Human Rights Alliance Joint Statement:
https://www.eff.org/document/dsa-human-rights-alliance-joint-statement

For more on the DSA:
https://www.eff.org/issues/eu-policy-principles

Police Can’t Demand You Reveal Your Phone Passcode and Then Tell a Jury You Refused

Thu, 10/21/2021 - 17:29

The Utah Supreme Court is the latest stop in EFF’s roving campaign to establish your Fifth Amendment right to refuse to provide your password to law enforcement. Yesterday, along with the ACLU, we filed an amicus brief in State v. Valdez, arguing that the constitutional privilege against self-incrimination prevents the police from forcing suspects to reveal the contents of their minds. That includes revealing a memorized passcode or directly entering the passcode to unlock a device.

In Valdez, the defendant was charged with kidnapping his ex-girlfriend after arranging a meeting under false pretenses. During his arrest, police found a cell phone in Valdez’s pocket that they wanted to search for evidence that he set up the meeting, but Valdez refused to tell them the passcode. Unlike many other cases raising these issues, however, the police didn’t bother seeking a court order to compel Valdez to reveal his passcode. Instead, during trial, the prosecution offered testimony and argument about his refusal. The defense argued that this violated the defendant’s Fifth Amendment right to remain silent, which also prevents the state from commenting on his silence. The court of appeals agreed, and now the state has appealed to the Utah Supreme Court.

As we write in the brief: 

The State cannot compel a suspect to recall and share information that exists only in his mind. The realities of the digital age only magnify the concerns that animate the Fifth Amendment’s protections. In accordance with these principles, the Court of Appeals held that communicating a memorized passcode is testimonial, and thus the State’s use at trial of Mr. Valdez’s refusal to do so violated his privilege against self-incrimination. Despite the modern technological context, this case turns on one of the most fundamental protections in our constitutional system: an accused person’s ability to exercise his Fifth Amendment rights without having his silence used against him. The Court of Appeals’ decision below rightly rejected the State’s circumvention of this protection. This Court should uphold that decision and extend that protection to all Utahns.

Protecting these fundamental rights is only more important as we also fight to keep automated surveillance that would compromise our security and privacy off our devices. We’ll await a decision on this important issue from the Utah Supreme Court.

Related Cases: Andrews v. New Jersey

Victory! Oakland’s City Council Unanimously Approves Communications Choice Ordinance

Thu, 10/21/2021 - 13:23

Oakland residents shared the stories of their personal experience; a broad coalition of advocates, civil society organizations, and local internet service providers (ISPs) lifted their voices; and now the Oakland City Council has unanimously passed Oakland’s Communications Service Provider Choice Ordinance. The newly minted law frees Oakland renters from being constrained to their landlord's preferred ISP by prohibiting owners of multiple occupancy buildings from interfering with an occupant's ability to receive service from the communications provider of their choice.

Across the country—through elaborate kickback schemes—large, corporate ISPs looking to lock out competition have manipulated landlords into denying their tenants the right to choose the internet provider that best meets their family’s needs and values. In August of 2018, an Oakland-based EFF supporter emailed us asking what would need to be done to empower residents with the choice they were being denied. Finally, after three years of community engagement and coalition building, that question has been answered.  

Modeled on a San Francisco law adopted in 2016, Oakland’s new Communications Choice ordinance requires property owners of multiple occupancy buildings to provide reasonable access to any qualified communication provider that has received a service request from a building occupant. San Francisco’s law has already proven effective. There, one competitive local ISP, which had previously been locked out of properties of forty or more units with active revenue sharing agreements, gained access to more than 1800 new units by 2020. Even for those who choose to stay with their existing provider, a competitive communications market benefits all residents by incentivizing providers to offer the best services at the lowest prices. As Tracy Rosenberg, the Executive Director of coalition member Media Alliance—and a leader in the advocacy effort—notes, "residents can use the most affordable and reliable services available, alternative ISP's can get footholds in new areas and maximize competitive benefits, and consumers can vote with their pockets for platform neutrality, privacy protections, and political contributions that align with their values.”

Unfortunately, not every city is as prepared to take advantage of such measures as San Francisco and Oakland. The Bay Area has one of the most competitive ISP markets in the United States, including smaller ISPs committed to defending net neutrality and their users’ privacy. In many U.S. cities, that’s not the case.

We hope to see cities and towns across the country step up to protect competition and foster new competitive options by investing in citywide fiber-optic networks and opening that infrastructure to private ISPs.

Why Is It So Hard to Figure Out What to Do When You Lose Your Account?

Thu, 10/21/2021 - 13:03

We get a lot of requests for help here at EFF, with our tireless intake coordinator being the first point of contact for many. All too often, however, the help needed isn’t legal or technical. Instead, users just need an answer to a simple question: what does this company want me to do to get my account back?

People lose a lot when they lose their account. For example, being kicked off Amazon could mean losing access to your books, music, pictures, or anything else you have only licensed, not bought, from that company. But the loss can have serious financial consequences for people who rely on the major social media platforms for their livelihoods, the way video makers rely on YouTube or many artists rely on Facebook or Twitter for promotion.

And it’s even worse when you can’t figure out why your account was closed, much less how to get it restored.  The deep flaws in the DMCA takedown process are well-documented, but at least the rules of a DMCA takedown are established and laid out in the law. Takedowns based on ill-defined company policies, not so much.

Over the summer, writer and meme king Chuck Tingle found his Twitter account suspended for running afoul of Twitter’s ill-defined repeat infringer policy. That Twitter has such a policy is not a problem in and of itself: to take advantage of the DMCA safe harbor, Twitter is required to have one. It’s not even a problem that the law doesn’t specify what the policy needs to look like—flexibility is vital for different services to do what makes the most sense for them. However, a company has to make a policy with an actual, tangible set of rules if it expects people to be able to follow it.

This is what Twitter says:

What happens if my account receives multiple copyright complaints?

If multiple copyright complaints are received Twitter may lock accounts or take other actions to warn repeat violators. These warnings may vary across Twitter’s services.  Under appropriate circumstances we may suspend user accounts under our repeat infringer policy. However, we may take retractions and counter-notices into account when applying our repeat infringer policy. 

That is frustratingly vague. “Under appropriate circumstances” doesn’t tell users what to avoid or what to do if they run afoul of the policy. Furthermore, if an account is suspended, this does not tell users what to do to get it back. We’ve confirmed that “We may take retractions and counter-notices into account when applying our repeat infringer policy” means that Twitter may restore the account after a suspension or ban, in response to counter-notices and retractions of copyright claims. But an equally reasonable reading of it is that they will take those things into account only before suspending or banning a user, so counter-noticing won’t help you get your account back if you lost it after a sudden surge in takedowns.

And that assumes you can even send a counter-notice. When Tingle lost his account under its repeat infringer policy, he found that because his account was suspended, he couldn’t use Twitter’s forms to contest the takedowns. That sounds like a minor thing, but it makes it very difficult for users to take the steps needed to get their accounts back.

Often, being famous or getting press attention to your plight is the way to fast-track getting restored. When Facebook flagged a video of a musician playing a public domain Bach piece, and Sony refused to release the claim, the musician got it resolved by making noise on Twitter and emailing the heads of various Sony departments. Most of us don’t have that kind of reach.

Even when there are clear policies, those rules mean nothing if the companies don’t hold up their end of the bargain. YouTube’s Content ID rules claim a video will be restored if, after an appeal, a month goes by with no word from the complaining party. But there are numerous stories from creators in which a month passes, nothing happens, and nothing is communicated to them by YouTube. While YouTube’s rules need fixing in many ways, many people would be grateful if YouTube would just follow those rules.

These are not new concerns. Clear policies, notice to users, and a mechanism for appeal are at the core of the Santa Clara Principles for content moderation. They are basic best practices for services that allow users to post content, and companies that have been hosting content for more than a decade have no excuse not to follow them.

EFF is not a substitute for a company helpline. Press attention is not a substitute for an appeals process. And having policies isn’t a substitute for actually following them.

Crowd-Sourced Suspicion Apps Are Out of Control

Thu, 10/21/2021 - 12:51

Technology rarely invents new societal problems. Instead, it digitizes them, supersizes them, and allows them to balloon and duplicate at the speed of light. That’s exactly the problem we’ve seen with location-based, crowd-sourced “public safety” apps like Citizen.

These apps come in a wide spectrum—some let users connect with those around them by posting pictures, items for sale, or local tips. Others, however, focus exclusively on things and people that users see as “suspicious” or potentially hazardous. These alerts run the gamut from active crimes, or the aftermath of crimes, to generally anything a person interprets as helping to keep their community safe and informed about the dangers around them.

"Users of apps like Citizen, Nextdoor, and Neighbors should be vigilant about unverified claims"

These apps are often designed with a goal of crowd-sourced surveillance, like a digital neighborhood watch: a way of turning the aggregate eyes (and phones) of the neighborhood into an early warning system. But instead, they often exacerbate the same dangers, biases, and problems that exist within policing. After all, the likely outcome of posting a suspicious sight to the app isn’t just to warn your neighbors—it’s to summon authorities to address the issue.

And even worse than incentivizing people to share their most paranoid thoughts and racial biases on a popular platform are the experimental new features constantly being rolled out by apps like Citizen. First, it was a private security force, available to be summoned at the touch of a button. Then, it was a service to help make it (theoretically) even easier to summon the police by giving users access to a 24/7 concierge service who will call the police for you. There are scenarios in which a tool like this might be useful, but charging people for it, and more importantly, making people think they will eventually need such a service, adds to the idea that companies benefit from your fear.

These apps might seem like a helpful way to inform your neighbors if the mountain lion roaming your city was spotted in your neighborhood. But in practice they have been a cesspool of racial profiling, cop-calling, gatekeeping, and fear-spreading. Apps where a so-called “suspicious” person’s picture can be blasted out to a paranoid community, because someone with a smartphone thinks they don’t belong, are not helping people to “Connect and stay safe.” Instead, they promote public safety for some, at the expense of surveillance and harassment for others.

Digitizing an Age-Old Problem

Paranoia about crime and racial gatekeeping in certain neighborhoods is not a new problem. Citizen takes that old problem and digitizes it, making those knee-jerk sightings of so-called suspicious behavior capable of being broadcast to hundreds, if not thousands of people in the area.

But focusing those forums on crime, suspicion, danger, and bad-faith accusations can create havoc. No one is planning their block party on Citizen, an app filled with notifications like “unconfirmed report of a man armed with pipe” and “unknown police activity,” the way they might on other platforms. Neighbors aren’t likely to coordinate trick-or-treating on a forum they exclusively use to see if any cars in their neighborhood were broken into. And when you download an app that makes you feel like a neighborhood you were formerly comfortable in is now under siege, you’re going to use it not just to doom-scroll your way through strange sightings, but also to report your own suspicions.

There is a massive difference between listening to police scanners, a medium that reflects the ever-changing and updating nature of fluid situations on the street, and taking one second of that live broadcast and turning it into a fixed, unverified news report. Police scanners can be useful to many people for many reasons and ought to stay accessible, but listening to a livestream presents an entirely different context than seeing a fixed geo-tagged alert on a map.

As the New York Times writes, Citizen is “converting raw scanner traffic—which is by nature unvetted and mostly operational—into filtered, curated digital content, legible to regular people, rendered on a map in a far more digestible form.” In other words, they’re turning static into content with the same formula the long-running show Cops used to normalize both paranoia and police violence.

Police scanners reflect the raw data of dispatch calls and police response to them, not a confirmation of crime and wrongdoing. This is not to say that the scanner traffic isn’t valuable or important—the public often uses it to learn what police are doing in their neighborhood. And last year, protesters relied on scanner traffic to protect themselves as they exercised their First Amendment rights.

But publication of raw data is likely to give the impression that a neighborhood has far more crime than it does. As any journalist will tell you, scanner traffic should be viewed like a tip and be the starting point of a potential story, rather than being republished without any verification or context. Worse, once Citizen receives a report, many stay up for days, giving the overall impression to a user that a neighborhood is currently besieged by incidents—when many are unconfirmed, and some happened four or five days ago.

From Neighborhood Forum to Vigilante-Enabler

It’s well known that Citizen began its life as “Vigilante,” and much of its DNA and operating procedure continue to match its former moniker. Citizen, more so than any other app, is unsure if it wants to be a community forum or a Star Wars cantina where bounty hunters and vigilantes wait for the app to post a reward for information leading to a person’s arrest.

When a brush fire broke out in Los Angeles in May 2021, almost a million people saw a notification pushed by Citizen offering a $30,000 reward for information leading to the arrest of a man they thought was responsible. It is the definition of dangerous that the app offered money to thousands of users, inviting them to turn over information on an unhoused man who was totally innocent.

Make no mistake, this kind of crass stunt can get people hurt. It demonstrates a very narrow view of who the “public” is and what “safety” entails.

Ending Suspicion as a Service

Users of apps like Citizen, Nextdoor, and Neighbors should be vigilant about unverified claims that could get people hurt, and be careful not to feed the fertile ground for destructive hoaxes.

These apps are part of the larger landscape that law professor Elizabeth Joh calls “networked surveillance ecosystems.” The lawlessness that governs private surveillance networks like Amazon Ring and other home surveillance systems—in conjunction with social networking and vigilante apps—is only exacerbating age-old problems. This is one ecosystem that should be much better contained.

On Global Encryption Day, Let's Stand Up for Privacy and Security

Thu, 10/21/2021 - 07:52

At EFF, we talk a lot about strong encryption. It’s critical for our privacy and security online. That’s why we litigate in courts to protect the right to encrypt, build technologies to encrypt the web, and lead the fight against anti-encryption legislation like last year’s EARN IT Act.

We’ve seen big victories in our fight to defend encryption. But we haven’t done it alone. That’s why we’re proud this year to join dozens of other organizations in the Global Encryption Coalition as we celebrate the first Global Encryption Day, which is today, October 21, 2021.

For this inaugural year, we’re joining our partner organizations to ask people, companies, governments, and NGOs to “Make the Switch” to strong encryption. We’re hoping this day can encourage people to make the switch to end-to-end encrypted platforms, creating a more secure and private online world. It’s a great time to turn on encryption on all the devices or services you use, or switch to an end-to-end encrypted app for messaging—and talk to others about why you made that choice. Strong passwords and two-factor authentication are also security measures that can help keep you safe.

If you already have a handle on encryption and its benefits, today would be a great day to talk to a friend about it. On social media, we’re using the hashtag #MakeTheSwitch.

The Global Encryption Day website has some ideas about what you could do to make your online life more private and secure. Another great resource is EFF’s Surveillance Self-Defense Guide, where you can get tips on everything from private web browsing, to using encrypted apps, to keeping your privacy in particular security scenarios—like attending a protest, or crossing the U.S. border.

We need to keep talking about the importance of encryption, partly because it’s under threat. In the U.S. and around the world, law enforcement agencies have been seeking an encryption “backdoor” to access peoples’ messages. At EFF, we’ve resisted these efforts for decades. We’ve also pushed back against efforts like client-side scanning, which would break the promises of user privacy and security while technically maintaining encryption.

The Global Encryption Coalition is listing events around the world today. EFF Senior Staff Technologist Erica Portnoy will be participating in an “Ask Me Anything” about encryption on Reddit, at 17:00 UTC, which is 10:00 A.M. Pacific Time. Jon Callas, EFF Director of Technology Projects, will join an online panel about how to improve user agency in end-to-end encrypted services, on Oct. 28.

EFF to Federal Court: Block Unconstitutional Texas Social Media Law

Thu, 10/21/2021 - 01:51

Users are understandably frustrated and perplexed by many big tech companies’ content moderation practices. Facebook, Twitter, and other social media platforms make many questionable, confounding, and often downright incorrect decisions affecting speakers of all political stripes. 

A new Texas law, which Texas Governor Greg Abbott said would stop social media companies that “silence conservative viewpoints and ideas,” restricts large platforms from removing or moderating content based on the viewpoint of the user. The measure, HB 20, is unconstitutional and should not be enforced, we told a federal court in Texas in an amicus brief filed Oct. 15. 

In NetChoice v. Paxton, two technology trade associations sued Texas to prevent the law from going into effect. Our brief, siding with the plaintiffs, explains that the law forces popular online platforms to publish speech they don’t agree with or don’t want to share with their users. Its broad restrictions would destroy many online communities that rely on moderation and curation. Platforms and users may not want to see certain kinds of content and speech that is legal but still offensive or irrelevant to them. They have the right under the First Amendment to curate, edit, and block everything from harassment to opposing political viewpoints.

Contrary to HB 20’s focus, questionable content moderation decisions are in no way limited to conservative American speakers. In 2017, for example, Twitter disabled the verified account of Egyptian human rights activist Wael Abbas. That same year, users discovered that Twitter had marked tweets containing the word “queer” as offensive. Recent reporting has highlighted how Facebook failed to enforce its policies against hate speech and promotion of violence, or even publish those policies, in places like Ethiopia.

However, EFF’s brief explains that users also rely on the First Amendment to create communities online, whether they are niche or completely unmoderated. Undermining speech protections would ultimately hurt users by limiting their options online. 

HB 20 also requires large online platforms to follow transparency and complaint procedures, such as publishing an acceptable use policy and biannual statistics on content moderation. While EFF urges social media companies to be transparent with users about their moderation practices, when governments mandate transparency, they must accommodate constitutional and practical concerns. Voluntary measures such as implementing the Santa Clara Principles, guidelines for a human rights framework for content moderation, best serve a dynamic internet ecosystem.

HB 20’s requirements, however, are broad and discriminatory. Moreover, HB 20 would likely further entrench the market dominance of the very social media companies the law targets because compliance will require a significant amount of time and money.

EFF has filed several amicus briefs opposing government control over content moderation, including in a recent successful challenge to a similar Florida law. We urge the federal court in Texas to rule that HB 20 restricts and burdens speech in violation of the Constitution.

From Bangkok to Burlington — The Public Interest Social Internet

Wed, 10/20/2021 - 12:03

This blog post is part of a series, looking at the public interest internet—the parts of the internet that don’t garner the headlines of Facebook or Google, but quietly provide public goods and useful services without requiring the scale or the business practices of the tech giants. Read our earlier installments.

In the last installment, we discussed platforms that tie messaging apps together. These let users chat with more people more easily, no matter where they are or what app they’re using, making it possible for someone using the latest chat tool, like Slack, to talk to someone on a decades-old platform like IRC. But localized services matter to the public interest internet as well. While forums like Nextdoor have drawn attention (and users) for offering neighborhood communication regardless of your zip code, other services that predate those—and get around many of their controversies—do exist.

This post will be about two very different social networks:

The first is Front Porch Forum, a Vermont-local platform that is “a micro hyperlocal social network,” tied to local services and with a huge percentage of uptake of local users. A caveat that many find more freeing than restricting: comments, replies, and posts don’t reach their neighbors until the following day, in a newsletter-style digest.

The other is Pantip, one of the top ten websites in Thailand. It’s a giant compared to Front Porch Forum, but its ability to persist—and stay independent—makes it a worthwhile subject.

Growing Slowly 

Cofounders Michael and Valerie Wood-Lewis, of Burlington, Vermont, began Front Porch Forum in the early 2000s by passing out flyers in their neighborhood. The goal wasn’t to build a company, or create a startup—it was to meet their neighbors. Users of other neighborhood sites will be familiar with the benefits of a local online community—early posts ranged from people looking to borrow tools to helping one another find lost pets.

As the site grew, others outside of Burlington asked to join. But Wood-Lewis turned them down, opting to focus the community on his area only. At first, he created a how-to guide for those who wanted to build their own local network, but eventually, the site allowed anyone in Vermont to join (it has since expanded to parts of New York and Massachusetts).

But even as it's grown, the focus has been on public good—not profit. Instead of increasing the number of posts users can make to drum up more content (and space for ads), the site has continued functioning effectively as an upgrade to its earlier listserv format. And rather than collect data on users beyond their location (which is necessary to sign up for the site and communicate with neighbors) or plaster the site with advertising, Wood-Lewis uses Front Porch Forum’s hyperlocal geography to its advantage:

“We have been pretty much diametrically opposed to the surveillance business model from the beginning. So our basic business model is we sell ads, advertising space to local businesses and nonprofits. The ads are distributed by geography, and by date, and that's it. There's no, "Yeah, let's check people's browser history, or let's pry into people's lives." We do not do that.”

These simple ads make it easy for local businesses and others to offer services to their community without hiring a graphic designer or having to learn anything complicated about online advertising, like how to make contextual ads that rely on perceived user interests (and data). 

In contrast to the well-known issues of racism and gatekeeping on Nextdoor or Ring’s Neighbors app, Wood-Lewis attributes the general positivity of the site to a variety of factors that are all baked into the public interest mindset: slow-growth, a focus on community, and moderation. But not necessarily that kind of moderation—while posts are all reviewed by moderators, and there are some filtering tools, posts typically come out as a newsletter, once a day, by default. If you want to yell at your neighbor, you’ve got the option to mail them directly through the site, but you’re probably better off knocking on their door. Users say that while most of the internet “is like a fire hose of information and communication, Front Porch Forum is like slow drip irrigation.” 
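
As a small illustration of that slow-drip design, here is a minimal sketch of the once-a-day digest pattern described above; the names and structure are invented for illustration and are not Front Porch Forum's actual code:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Digest:
        """Posts are queued during the day instead of being delivered at once."""
        day: date
        posts: list[str] = field(default_factory=list)

    def queue_post(digest: Digest, post: str) -> None:
        # A new post is held for the next newsletter rather than pushed out.
        digest.posts.append(post)

    def render_digest(digest: Digest) -> str:
        # Once a day, everything queued goes out as a single newsletter.
        return f"Digest for {digest.day}:\n\n" + "\n\n".join(digest.posts)

The design choice is the point: batching everything into one daily delivery removes the incentive to fire off an instant angry reply.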

While Front Porch Forum has grown, it’s done so through its own earnings and at its own pace. Many of the most popular social networks need to scale to perform for investors, which means moving fast and breaking things; Front Porch Forum could be described as a site for moving slowly and fixing things.


Staying Afloat Despite Free Speech Challenges

On the other side of the world, the forum Pantip, a sort of Thai reddit, has grown to be one of the most popular sites in the country since its creation in 1997. Pantip's growth (and survival) is all the more significant because Thailand has some of the harshest legal frameworks in the world for online speech. Intermediaries are strictly liable for the speech of their users, which is particularly troubling, since the crime of criticizing members of the royal family (“lese majeste”) can lead to imprisonment for both poster and site administrator. 

As a result, the site’s strict rules may seem overbearing to Western users—participating in the upvoting and points system requires validating your identity with a government ID, for example—yet the site remains popular after over twenty years of being run without outside investment. Pantip has navigated treacherous waters for a very long time, and has even had parts of the site shut down by the government, but it chugs along, offering a place for Thai users to chat online, while many other sites have been scuppered. For example, many newspapers have shut down comment sections for fear of liability. Though this legal regime puts Pantip’s owner in danger, particularly during regime changes, he still won't sell out to bigger companies: “Maybe I’m too conservative. I don’t believe that internet [business] needs a lot of money to run. Because we can do internet business with a very small [investment].”

Models for the Future?

Neither Front Porch Forum nor Pantip get the headlines of a Facebook or a Twitter—but not because they're unsuccessful. Rather, their relatively specific rules and localized audiences make them poor models for scaling to world domination. To a certain extent, they benefit from not garnering huge amounts of publicity. In Front Porch Forum's case, mass appeal is irrelevant—the site’s membership spreads by word of mouth in a local context, and advertising revenues grow with it. For Pantip, it's better to keep a low profile, even if hundreds of thousands of users are in on the secret. And both sites are proof that social media doesn't have to be run by venture capital-funded, globally-scaled services, and that we don’t need a Facebook to give people in local areas or in developing countries forums to connect and organize.

But is part of their success due to Front Porch Forum and Pantip’s lack of interest in disrupting the tech giants of the world? If the Public Interest Internet is an alternative to Facebook and Google, is it really a viable future for just a few lucky groups? Is the best of the Internet doomed to exist in just some narrow strongholds? Or can we scale up the human-scale: make sure that everyone has their own Front Porch, or Pantip close to hand, instead of having to stake everything in the chilly, globalised space of the tech giants?

One area of the Net that has been around since the beginning, and has managed both to be a place for friends to collaborate and to make a wider impact on the world, is the subject of the next post in this series. We’re going to discuss the world of fan content, including the Hugo Award-winning fanfiction archive at Archive of Our Own, and how the Public Interest Internet makes it possible for people to comment on and add to the stories they love, in places that serve them better than our current giants.

This is the sixth post in our blog series on the public interest internet.

EFF Files New Lawsuit Against California Sheriff for Sharing ALPR Data with ICE and CBP

Tue, 10/19/2021 - 12:11

The Marin County Sheriff illegally shares the sensitive location information of millions of drivers with out-of-state and federal agencies, including Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP). The Sheriff uses automated license plate readers (ALPRs)—high-speed cameras mounted on street poles or squad cars—to scan license plates and record the date, time, and location of each scan. This data can paint a detailed picture of the private lives of Marin County residents: where they live and work, when they visit friends or drop their children off at school, and whether they attend religious services or protests.

Last week, EFF filed a new lawsuit on behalf of three immigrant rights activists against Sheriff Bob Doyle and Marin County for violating two California laws that protect immigrants and motorists’ privacy. Our co-counsel are the ACLU Foundations of Northern California, Southern California, and San Diego & Imperial Counties, and attorney Michael Risher. We seek a court order prohibiting the Sheriff from sharing ALPR data with out-of-state and federal agencies.

The Marin Sheriff’s ALPRs scan thousands of license plates each month. That sensitive data, including photos of the vehicle and sometimes its drivers and passengers, is stored in a database. The Sheriff permits over 400 out-of-state and 18 federal agencies, including CBP and ICE, to run queries of full or partial license plates against information the Sheriff has collected.
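
To make concrete what running a query of a full or partial plate involves, here is a minimal, hypothetical sketch; the record layout, field names, and matching logic are invented for illustration and are not drawn from the Sheriff's actual system:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class PlateScan:
        plate: str            # plate text as read by the camera
        scanned_at: datetime  # when the plate was scanned
        latitude: float       # where the camera saw it
        longitude: float

    def query_partial(scans: list[PlateScan], fragment: str) -> list[PlateScan]:
        # A partial-plate query matches every scan containing the fragment,
        # so a single lookup can return records on many unrelated drivers.
        return [s for s in scans if fragment in s.plate]

Because every scan carries a timestamp and location, even a partial-plate lookup hands the querying agency time-stamped movement data, which is why out-of-state and federal access to this database is so sensitive.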

This data sharing particularly impacts the safety and privacy of immigrants, communities of color, and religious minorities. Like many other surveillance technologies, ALPRs have a history of disproportionately impacting marginalized communities. ICE has used ALPR data to detain and deport immigrant community members. NYPD used ALPRs to scan license plates near mosques.

The Sheriff’s sharing of ALPR data with entities outside of California violates state law. S.B. 34, enacted in 2015, prohibits California law enforcement agencies from sharing ALPR data with entities outside of California. Moreover, the California Values Act (S.B. 54), enacted in 2017, limits the use of local resources to assist federal immigration enforcement, including the sharing of personal information.

To learn more, read the complaint, the press release, our case page, the ACLU of Northern California’s case page, and our clients’ statements.

Related Cases: Lagleva v. Marin County Sheriff

After Years of Delays and Alarmingly Flimsy Evidence, Security Expert Ola Bini’s Trial Set for This Week

Tue, 10/19/2021 - 11:57

For over two years EFF has been following the case of Swedish computer security expert Ola Bini, who was arrested in Ecuador in April 2019, following Julian Assange's ejection from that country’s London Embassy. Bini’s pre-trial hearing, which was suspended and rescheduled at least five times during 2020, was concluded on June 29, 2021. Despite the cloud that has hung over the case—political ramifications have seemed to drive the allegations, and Bini has been subjected to numerous due process and human rights violations—we are hopeful that the security expert will be afforded a transparent and fair trial and that due process will prevail.

Ola Bini is known globally as a computer security expert; he is someone who builds secure tools and contributes to free software projects. Ola’s team at ThoughtWorks contributed to Certbot, the EFF-managed tool that has provided strong encryption for millions of websites around the world, and in 2018, Ola co-founded a non-profit organization devoted to creating user-friendly security tools.

From the very outset of Bini’s arrest at the Quito airport there have been significant concerns about the legitimacy of the allegations against him. During our visit to Ecuador in July 2019, shortly after his arrest, it became clear that the political consequences of Bini’s arrest overshadowed the prosecution’s actual evidence. In brief, based on the interviews that we conducted, our conclusion was that Bini's prosecution is a political case, not a criminal one. His arrest occurred shortly after Maria Paula Romo, then Ecuador’s Interior Minister, held a press conference to claim (without evidence) that a group of Russians and Wikileaks-connected hackers were in the country, planning a cyber-attack in retaliation for the government's eviction of Assange; a recent investigation by La Posta revealed that the former Minister knew that Ola Bini was not the "Russian hacker" the government was looking for when Bini was detained in Quito's airport. (Romo was dismissed as minister in 2020 for ordering the use of tear gas against anti-government protestors.)

One piece of supposed evidence against Bini was leaked to the press and taken to court: a photo of a screenshot, supposedly taken by Bini himself and sent to a colleague, showing the telnet login screen of a router. The image is consistent with someone who connects to an open telnet service, receives a warning not to log on without authorization, and does not proceed, respecting the warning. As for the portion of a message exchange attributed to Bini and a colleague, leaked with the photo, it shows their concern that the router was insecurely open to telnet access on the wider Internet, with no firewall.
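
For readers unfamiliar with telnet, the behavior the screenshot is consistent with (connecting, reading the service's banner, and disconnecting without ever authenticating) looks roughly like the following sketch; the host address is a documentation placeholder, not an address from the case:

    import socket

    # Placeholder address from the TEST-NET-1 range; not the router at issue.
    HOST, PORT = "192.0.2.1", 23

    # Connect, read whatever banner the telnet service announces, and close
    # without sending credentials. No login is attempted at any point.
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        banner = sock.recv(1024)
        print(banner.decode(errors="replace"))

Reading a banner this way reveals only what the service announces to anyone who connects; no credentials are sent and no access is gained.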

Bini’s arrest and detention were fraught with due process violations. Bini faced 70 days of imprisonment until a Habeas Corpus decision declared his detention illegal (a decision that confirmed the weakness of the initial detention). He was released from jail, but the investigation continued, seeking evidence to back the accusations against him. After his release the problems continued, and as the delays dragged on, the Office of the Inter-American Commission on Human Rights (IACHR) Special Rapporteur for Freedom of Expression included its concern with the delay in Bini’s trial in its 2020 annual report. At the time of our visit, Bini's lawyers told us that they counted 65 violations of due process, and journalists told us that no one was able to provide them with concrete descriptions of what he had done.

In April 2021, Ola Bini’s Habeas Data recourse, filed in October 2020 against the National Police, the Ministry of Government, and the Strategic Intelligence Center (CIES), was partially granted by the judge. According to Bini's defense, he had been facing continuous monitoring by members of the National Police and unidentified persons. The decision required CIES to disclose whether the agency had conducted surveillance activities against the security expert. The ruling concluded that CIES unduly denied such information to Ola Bini, failing to offer a timely response to his previous information request.

Though the judge decided in June’s pre-trial hearing to proceed with the criminal prosecution against Bini, observers noted the lack of solid reasoning in the judge's decision. The judge was later "separated" from the case in a ruling that admitted the wrongdoing of the successive pretrial suspensions and the violation of due process.

It is alarming, but perhaps not surprising, that the case will proceed after all these well-documented irregularities. While Ola Bini’s behavior and contacts in the security world may look strange to authorities, his computer security expertise is not a crime. Since EFF's founding in 1990, we have become all too familiar with overly politicized "hacker panic" cases, which encourage unjust prosecutions when the political and social atmosphere demands it. EFF was founded in part due to a notorious, and similar, case pursued in the United States by the Secret Service. Our Coder’s Rights Project has worked for decades to protect the security and encryption researchers who help build a safer future for all of us using digital technologies, and who far too often face serious legal challenges that prevent or inhibit their work. This case is, unfortunately, part of a longstanding pattern of unfair criminal persecution of security experts, who have been subjected to the same types of harassment as those they work to protect, such as human rights defenders and activists.

In June of this year, EFF called upon Ecuador’s Human Rights Secretariat to give special attention to Ola Bini’s upcoming hearing and prosecution. As we stressed in our letter:

Mr. Bini's case has profound implications for, and sits at the center of, the application of human rights and due process, a landmark case in the context of arbitrarily applying overbroad criminal laws to security experts. Mr. Bini's case represents a unique opportunity for the Human Rights Secretariat Cabinet to consider and guard the rights of security experts in the digital age.  Security experts protect the computers upon which we all depend and protect the people who have integrated electronic devices into their daily lives, such as human rights defenders, journalists, activists, dissidents, among many others. To conduct security research, we need to protect the security experts, and ensure they have the tools to do their work.

The circumstances around Ola Bini's detention have sparked international attention and indicate the growing seriousness of security experts' harassment in Latin America. The flimsy allegations against Ola Bini, the series of irregularities and human rights violations in his case, as well as its international resonance, situate it squarely among other cases we have seen of politicized and misguided allegations against technologists and security researchers. 

We hope that justice will prevail during Ola Bini’s trial this week, and that he will finally be given the fair treatment and due process that the proper respect of his fundamental rights requires.

EFF Joins Press Freedom Groups In Asking U.S. To Drop Assange Extradition Efforts

Mon, 10/18/2021 - 12:59

EFF has joined a coalition of press freedom, civil liberties, and human rights groups that sent a letter to Attorney General Merrick Garland urging the Department of Justice to drop its efforts to extradite and prosecute Julian Assange.

The renewed request comes after a Yahoo News report that the CIA discussed kidnapping or killing Assange in 2017, before charges against Assange were filed. The agency also reportedly planned extensive spying on WikiLeaks associates.

Assange has been charged under the Espionage Act. The charges have been widely condemned by journalists and press freedom organizations, including by outlets that have been critical of Assange. Leaks of information that the government would prefer to keep secret, and the publication of those leaks by journalists, are vital to our democracy. Regardless of what one thinks about Assange’s personal behavior, his indictment on charges that mostly reflect basic journalistic practices will have a chilling effect on critical national security journalism. 

In January, a British judge denied the Trump Administration’s extradition request, on the basis that the conditions of confinement in the U.S. would be overly harsh. The U.S. chose to appeal that decision. A hearing on the appeal is scheduled for next week. Human rights and press freedom groups, including EFF, first asked the Biden Administration in February to drop the extradition effort.

The letter to DOJ has been signed by the ACLU, Amnesty International, Center for Constitutional Rights, Fight for the Future, Freedom of the Press Foundation, Human Rights Watch, PEN America, Reporters Without Borders, and many other groups.