EFF's Deeplinks Blog: Noteworthy news from around the internet

Seattle and Portland: Say No to Public-Private Surveillance Networks

Fri, 03/12/2021 - 12:56

An organization calling itself Safe Cities Northwest is aiming to create public-private surveillance networks in Portland, Oregon and Seattle, Washington. The organization claims to be building on a “successful model for public safety” that it established in San Francisco. But it is hard to call that model successful when it has been at the center of a civil rights lawsuit against the city of San Francisco, has been used to spy on a number of public events, including Black-led protests against police violence and a Pride parade, and is now facing resistance from a neighborhood hoping to prevent the surveillance program's spread.

In San Francisco, the organization SF Safe connects semi-private Business Improvement Districts (BIDs) and Community Benefit Districts (CBDs) with the police by funding large-scale camera networks that blanket entire neighborhoods. BIDs and CBDs, also known as special assessment districts, are quasi-government agencies that act with state authority to levy taxes in exchange for supplemental city services. While they are run by non-city organizations, they are funded with public money and carry out public services.

These camera networks are managed by staff within the neighborhood and streamed to a local control room, but footage can be shared with other entities, including individuals and law enforcement, with little oversight. At least six special assessment districts in San Francisco have installed these camera networks, the largest of which belongs to the Union Square BID. The camera networks now blanket a handful of neighborhoods and cover 135 blocks, according to a recent New York Times report.

In October 2020, EFF and ACLU of Northern California sued San Francisco after emails between the San Francisco Police Department and the Union Square BID revealed that police were granted live access to over 400 cameras and a dump of hours of footage in order to monitor Black Lives Matter protests in June 2020. By gaining access, the SFPD violated San Francisco’s Surveillance Technology Ordinance, which prohibits city agencies like the SFPD from acquiring, borrowing, or using surveillance technology without prior approval from the city’s Board of Supervisors. 

Subsequent reporting by the SF Examiner revealed that June 2020 was not the first time the SFPD had obtained live access to the camera networks without the Board of Supervisors' permission; prior instances included surveillance of a Super Bowl parade and a Pride parade.

Having seen police request live access to these camera networks in order to surveil public events, residents of San Francisco's Castro neighborhood, the city's historically LGBTQ+ area, have contested plans to install a camera network of their own.

Seattle and Portland have both been home to large-scale protest movements, both historically and within the last year. City residents who have already grappled with government spy planes and the Department of Homeland Security throwing people into unmarked vans could soon also be confronted by a widespread semi-private camera network, unregulated and built without input from the community. The introduction of these new public-private camera networks further threatens the political activity of everyone from grassroots activists and organizers to casual canvassers and demonstrators by opening them up to more surveillance and potential retribution.

Make no mistake: businesses, many of which already have security cameras, will join these new camera networks on the premise that they will help fight crime. But once consolidated into a single network, a system of hundreds of cameras will prove too tempting for police to ignore, as occurred in San Francisco. Portland residents in particular would have fewer tools to combat this new threat to First Amendment-protected activities, because the city, unlike Seattle or San Francisco, has no ordinance restricting law enforcement's use of surveillance technologies. Seattle residents should pay close attention to ensure their police department seeks city council approval and holds public meetings before gaining access to any BID/CBD camera networks.

EFF is standing by to help residents and organizations on the ground combat the spread of surveillance networks that act like private entities when they want to avoid regulation, but like public camera networks when they want to help police spy on protests.

Related Cases: Williams v. San Francisco

Congress Proposes Bold Plan to End the Digital Divide

Thu, 03/11/2021 - 14:05

New year, new Congress, but the problems of Internet access remain. If anything, the longer the COVID-19 crisis continues, the stronger the case for fast, affordable Internet for all becomes. And so an updated version of the Accessible, Affordable Internet for All Act has been introduced. It remains a bold federal program that would tackle broadband access at the same scale and scope the United States once brought to water and electricity.

EFF supported the first introduction of this legislation, and we enthusiastically support it today after its updates. Most changes simply reflect COVID-19 provisions that have already been enacted into law, such as the Emergency Benefit Program, which ensures people are not disconnected due to a loss of income caused by the pandemic. But the most noteworthy updates are the preferences for open access and a minimum speed metric of low-latency 100/100 Mbps, which inherently means fiber infrastructure will play a key role. By adopting these standards—along with a massive investment of federal dollars—Congress can reshape the market to be competitive, universally available, and affordable.

It Is Time to End the Digital Divide by Extending Fiber to Everyone

The digital divide is not about whether you have access to a certain speed such as 25/3 Mbps (the current federal standard, which is effectively useless today as a metric for connectivity); it is about what infrastructure has been invested in your community. Is that infrastructure robust, future-proofed, and competitively priced? If the answer to any of these questions is no, then people in your community cannot fully utilize the Internet, and they sit on the wrong side of the divide.

As EFF noted in 2019, it was a danger sign that major industry players were slow-rolling or shutting down their fiber-to-the-home deployments, even in major metropolitan areas where no excuse exists not to wire everyone. It meant that future-ready access was no longer on track to be universally deployed, except through local governments and small private providers that lack the finances to do it nationally. At the beginning of the pandemic in 2020, as the stay-at-home orders were coming in, we pointed out that digital divide failures would be most prominent in areas lacking ubiquitous fiber infrastructure.

The pandemic demonstrated what that means in real dollars to government support systems. In areas without fiber, millions of dollars had to be burned on temporary mobile hotspots with spotty coverage, whereas communities with fiber got things like free, fast Internet from both public and small private fiber providers. In fact, while the federal government is subsidizing broadband access at as much as $50 to $75 a month, Chattanooga's EPB is able to deliver 100/100 Mbps over fiber at a subsidy cost of just $3 a month.
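
As a rough, back-of-the-envelope sketch using only the subsidy figures quoted above (and assuming they hold flat month to month), the per-subscriber gap looks like this:

```python
# Back-of-the-envelope subsidy comparison using the figures quoted above.
HOTSPOT_SUBSIDY_RANGE = (50, 75)  # federal broadband subsidy, $/month
FIBER_SUBSIDY = 3                 # Chattanooga EPB fiber, $/month in subsidy

for hotspot in HOTSPOT_SUBSIDY_RANGE:
    yearly_gap = (hotspot - FIBER_SUBSIDY) * 12
    print(f"${hotspot}/mo vs ${FIBER_SUBSIDY}/mo: "
          f"{hotspot / FIBER_SUBSIDY:.0f}x the cost, ${yearly_gap:,} more per year")
# $50/mo vs $3/mo: 17x the cost, $564 more per year
# $75/mo vs $3/mo: 25x the cost, $864 more per year
```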

Why Fiber? Because It Is Unequivocally Future-Proofed

We focus on fiber optic infrastructure because it is the universal medium unifying all 21st century communications networks. Low Earth orbit satellites, 5G, next-generation WiFi, and direct wireline connections that seek to deliver ever-increasing speeds all depend on fiber. Demand for data has never waned; it has grown consistently for decades at an average rate of 21 percent per year. Fiber's capacity is decades ahead of that demand curve, so a community that is not deploying fiber will run into capacity problems.
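
To make that growth rate concrete, here is a minimal sketch of the compounding arithmetic; the 21 percent figure is the one cited above, and everything else is illustrative:

```python
# Compounding of data demand at 21% per year (the rate cited above).
GROWTH_RATE = 0.21

def demand_multiple(years: int, rate: float = GROWTH_RATE) -> float:
    """How many times demand has multiplied after the given number of years."""
    return (1 + rate) ** years

for years in (5, 10, 20, 30):
    print(f"After {years:2d} years, demand is {demand_multiple(years):6.1f}x today's")
# After  5 years, demand is    2.6x today's
# After 10 years, demand is    6.7x today's
# After 20 years, demand is   45.3x today's
# After 30 years, demand is  304.5x today's
```

A network provisioned only for today's peak usage is overwhelmed within a decade at that rate, which is why infrastructure decades ahead of the demand curve matters.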

We already see these capacity problems in the legacy infrastructure, namely copper and cable, which is getting more expensive to operate yet can only deliver obsolete connection speeds with many restrictions. We detailed why this is happening in our technical piece explaining why different broadband networks yield different results in connectivity. But the evidence is clear when increased usage of an essential service is met with upload throttling and data caps instead of simply delivering the service to meet demand. That is why subsidizing or propping up legacy networks will actually prove more expensive in the long run than investing in fiber infrastructure.

This is the reality many Americans are all too familiar with, and it is why we must pass this bill. If we do not, it is a certainty that we will continue to talk about the digital divide in perpetuity. That is the choice now facing Congress, and you need to make sure your legislator is on board. If we figured out how to get an electrical line to every house, there is no reason we can't do the same with a fiber line.

App Stores Have Kicked Out Some Location Data Brokers. Good, Now Kick Them All Out.

Wed, 03/10/2021 - 13:26

Last fall, reports revealed the location data broker X-Mode’s ties to several U.S. defense contractors. Shortly after, both Apple and Google banned the X-Mode SDK from their app stores, essentially shutting off X-Mode’s pipeline of location data. In February, Google kicked another location data broker, Predicio, from its stores.

We’ve written about the problems with app-store monopolies: companies shouldn’t have control over what software users can choose to run on their devices. But that doesn’t mean app stores shouldn’t moderate. On the contrary, Apple and Google have a responsibility to make sure the apps they sell, and profit from, do not put their users at risk of harms like unwarranted surveillance. Kicking out two data brokers helps to protect users, but it’s just a first step. 

X-Mode and Predicio have each been the subject of reports over the past year revealing how U.S. government agencies—including the Department of Defense and ICE—try to work around the Fourth Amendment by buying location data on the private market. In 2018, the Supreme Court handed down U.S. v. Carpenter, a landmark decision ruling that location data collected from cell phone towers is protected by the Fourth Amendment. This means law enforcement can't get your location from your cell carrier without a warrant.

But dozens of companies are still collecting the same location data from a different source—mobile apps—and making it available to law enforcement, defense, intelligence, immigration, and other government agencies. Data brokers entice app developers to install pieces of third-party code, called SDKs, which collect raw GPS data and feed it directly to the brokers. The brokers then resell the location feeds to advertisers, hedge funds, other data brokers, and governments around the world.
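
To illustrate the kind of record at stake, here is a hypothetical sketch of a single location ping an embedded SDK might send to a broker. The field names, values, and app identifier are invented for illustration; no actual SDK's payload format is depicted:

```python
# Hypothetical broker-bound location ping -- all fields invented for
# illustration; no real SDK's payload format is depicted.
import json
import time
import uuid

ping = {
    "ad_id": str(uuid.uuid4()),     # persistent advertising identifier
    "timestamp": int(time.time()),  # when the GPS fix was taken
    "lat": 38.8977,                 # raw GPS latitude
    "lon": -77.0365,                # raw GPS longitude
    "accuracy_m": 5,                # reported GPS accuracy, meters
    "app": "example.weather.app",   # the host app embedding the SDK
}
print(json.dumps(ping, indent=2))

# A stream of such pings keyed to one stable ad_id reconstructs a
# person's movements: home, workplace, places of worship, protests.
```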

The apps that source the data run the gamut from prayer apps to weather services. X-Mode collected data from thousands of apps including Muslim Pro, one of the most popular Muslim prayer apps in the U.S. X-Mode allegedly sold that data to several Pentagon contractors. Another broker, Predicio, collected data from hundreds of apps including Fu*** Weather and Salaat First. It then sold data to Gravy Analytics, whose subsidiary Venntel has provided location data to the IRS, CBP, and ICE.

It took many months of investigative journalism by Vice, the Wall Street Journal, Protocol, NRK Beta, and others to piece together the flow of location data from particular apps to the U.S. government. These reporters deserve our gratitude. But it’s not good enough for app stores to wait for specific data brokers to come into the public spotlight before banning them. 

We know brokers continue to mine location data from our apps and sell it to military and law enforcement—we just don’t know which apps. For example, we know that Babel Street sells its secretive Locate X product, which comprises real-time location data about untold numbers of users, to the Department of Homeland Security, the Department of Defense, and the Secret Service. This data reportedly comes from thousands of different mobile apps. 

But figuring out which apps are responsible is difficult. Laws in the U.S. generally do not require companies to disclose exactly where they sell personal data, so it’s easy for data brokers to mask their behavior. Journalists often must rely on technical analysis (which requires expertise and lots of time) and government records requests (which may take years and be heavily redacted) to piece together data flows. When investigators do discover proof of unwanted data sharing, the apps and brokers involved can just change their tactics. Even the app developers involved often don’t know where the data they share will end up. Users can’t make educated choices without knowing where or how their data will be shared. 

Google Play and the Apple App Store shouldn’t wait on journalists to establish end-to-end data flows before taking steps to protect users.

The ecosystem of phone app location data should be better regulated. Local CCOPS (community control of police surveillance) laws can ban police and other local government agencies from acquiring surveillance tech, including data broker deals, without legislative permission and community input. We support these laws, but most cities do not have them. Also, they do not address the problem of federal agencies buying our location data on the open market. We will continue pushing for legislation and judicial decisions that, as required by the Fourth Amendment, prevent the government at all levels from buying this kind of data without first getting a warrant. But in the meantime, many government agencies will continue buying location data as long as they believe they can.

App stores are in a unique position to protect tech users from app-powered surveillance. We applaud Apple and Google for taking action against X-Mode and Predicio. But this is only the tip of the iceberg. Now the app stores should take the next step: ban SDKs from any data brokers that collect and sell our location information.

There is no good reason for apps to collect and sell location data, especially when users have no way of knowing how that data will be used. We implore Apple and Google to end this seedy industry, and make it clear that location data brokers are not welcome on their app stores.

EFF to Supreme Court: Users Must Be Able to Hold Tech Companies Accountable in Lawsuits When Their Data is Mishandled

Wed, 03/10/2021 - 12:15
Facebook, Google, and Others Want To Make It Harder For Users To Sue

Washington, D.C.—The Electronic Frontier Foundation (EFF) today urged the Supreme Court to rule that consumers can take big tech companies like Facebook and Google to court, including in class action lawsuits, to hold them accountable for privacy and other user data-related violations, regardless of whether they can show they suffered identical harms.

Standing up to defend the ability of tech users to hold powerful companies responsible for protecting the massive amounts of personal data they capture and store every day, EFF and co-counsel Hausfeld LLP told the high court that—contrary to the companies’ claims—Congress rightfully ensured that users could sue when those companies mishandle sensitive, private information about them.

EFF filed a brief today with the Supreme Court in a case called TransUnion v. Ramirez. In the case, TransUnion misidentified plaintiff Sergio Ramirez and over 8,000 other people as included on the U.S. terrorist list. As a result, Ramirez was flagged as a terrorist when he tried to purchase a car. TransUnion is fighting to keep the other consumers it tagged as terrorists from suing it as a group with Ramirez. The company argues that they don’t have standing to sue under the law and shouldn’t be part of the “class” of plaintiffs in the lawsuit because they weren’t harmed in the same way as Ramirez.

Facebook, Google, and tech industry trade groups are siding with TransUnion. They filed a legal brief pushing for even more limitations on users and others affected by a wide range of privacy and data integrity violations. The companies argue that users whose biometric information is misused, or who are improperly tracked or wiretapped, should also be denied the opportunity to sue if they did not lose money or property. And even those who can sue, the companies argue, must all have been harmed in exactly the same way to file a class action case.

“Facebook and the other tech giants gather and use immense quantities of our personal data each day, but don’t want to be held accountable by their users in court when they go back on their privacy promises, or unlawfully mishandle user data,” said EFF Executive Director Cindy Cohn. “This logic—that the courthouse door should remain closed unless their users suffer financial or personal injuries even when the companies flagrantly violate the law—is cynical and wrong. Intangible harms have long been recognized as harms under the law, and Congress and the states must be allowed to pass laws that protect us. In today’s digital economy, all of us depend on these companies to ensure that the data they have about us is accurate and safeguarded. When it’s mishandled, we should be able to take those companies to court.”

Class action rules require people suing as a group to have the same claims based upon the same basic facts, not the exact same injuries, EFF told the Supreme Court. Facebook and other tech companies are asking the court to change that, which will make it harder for users to hold them accountable and utilize class action lawsuits to do so.

“When users lose control of their data, or the correctness of their data is compromised, those are serious harms in and of themselves, and they put users at tremendous risk,” said EFF Senior Staff Attorney Adam Schwartz. “Companies that gather and use vast amounts of users’ personal, private information are trying to raise the bar on their own accountability when they fail to protect people’s data. We are telling the Supreme Court: don’t let them.”

For the brief:
https://www.eff.org/document/transunion-amicus-brief

For more on users’ right to sue:
https://www.eff.org/deeplinks/2019/01/you-should-have-right-sue-companies-violate-your-privacy

Contact: Cindy Cohn, Executive Director, cindy@eff.org; Adam Schwartz, Senior Staff Attorney, adam@eff.org

Internet Advocates Call on ISPs to Commit to Basic User Privacy Protections

Wed, 03/10/2021 - 08:04

This blog post was co-written by EFF, the Internet Society, and Mozilla.

As people have learned more about how companies like Google and Facebook track them online, they have increasingly taken steps to protect themselves. But there is one relatively unknown way that companies and bad actors can collect troves of data about them: through their Internet service provider.

Internet Service Providers (ISPs) like Comcast, Verizon, and AT&T are your gateway to the Internet. These companies have complete, unfettered, and unregulated access to a constant stream of your browsing history that can build a profile that they can sell or otherwise use without your consent.

Last year, Comcast committed to a broad range of DNS privacy standards. Companies like Verizon, AT&T, and T-Mobile, which together hold a major share of the U.S. mobile broadband market, haven't committed to even these basic protections, such as not tracking website traffic, deleting DNS logs, or refusing to sell users' information. What's more, these companies have a history of abusing customer data: AT&T (along with Sprint and T-Mobile) sold customer location data to bounty hunters, and Verizon injected trackers that bypassed user control.
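
To see what is at stake in those DNS logs, consider a hypothetical sketch of what an ISP's resolver could record; the log format and domains are invented for illustration:

```python
# Hypothetical ISP resolver log -- format and domains invented for
# illustration. Domain lookups alone sketch a revealing profile.
dns_log = [
    ("2021-03-10 08:01:12", "203.0.113.7", "mail.example.com"),
    ("2021-03-10 08:03:40", "203.0.113.7", "jobsearch.example.org"),
    ("2021-03-10 08:05:02", "203.0.113.7", "clinic.example-health.net"),
]

# Grouping lookups by subscriber address turns raw logs into a profile.
by_subscriber = {}
for timestamp, subscriber_ip, domain in dns_log:
    by_subscriber.setdefault(subscriber_ip, []).append(domain)

for subscriber_ip, domains in by_subscriber.items():
    print(subscriber_ip, "->", domains)
```

Deleting such logs within 24 hours and refusing secondary uses, as the open letter below asks, forecloses exactly this kind of profiling.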

Every single ISP should have a responsibility to protect the privacy of its users, and as mobile Internet access continues to grow, that responsibility rests even more squarely on the shoulders of mobile ISPs. As our partner Consumer Reports notes, even opting in to secondary uses of data can be convoluted for consumers. Companies shouldn't be able to just bury consent within their terms of service or use a dark pattern to get people to click "OK" and still claim they are acting with users' explicit consent.

Nearly every single website you visit transmits your data to dozens or even hundreds of companies. This pervasive and intrusive personal surveillance has become the norm, and it won’t cease without action from us.

In that vein, Mozilla, the Internet Society, and the Electronic Frontier Foundation are individually and collectively taking steps to protect consumers' right to data privacy. A key element of that is an effective baseline federal privacy law that curbs data abuses by ISPs and other third parties and gives consumers meaningful control over how their personal data is used.

But effective regulatory action could be years away, and that’s why we need to proactively hold the ISPs accountable today. Laws and technical solutions can go a long way, but we also need better behavior from those who collect our sensitive DNS data. 

Today we are publishing an open letter calling on AT&T, T-Mobile, and Verizon to publish a privacy notice for their DNS service that commits to deleting the data within 24 hours and to only using the data for providing the service. It is our hope that they heed the call, and that other ISPs take note as well. Click here to see the full letter.

EFF to Supreme Court: States Face High Burden to Justify Forcing Groups to Turn Over Donor Names

Tue, 03/09/2021 - 14:02

Throughout our nation’s history—most potently since the era of civil rights activism—those participating in social movements challenging the status quo have enjoyed First Amendment protections to freely associate with others in advocating for causes they believe in. This right is directly tied to our ability to maintain privacy over what organizations we choose to join or support financially. Forcing organizations to hand membership or donor lists to the state threatens First Amendment activities and suppresses dissent, as those named, facing harassment or worse, have to decide between staying safe or speaking out.

In a California case over donor disclosures, we've urged the Supreme Court to apply this important principle and ensure that the bar for public officials seeking information about people's political and civic activities is sufficiently high. In an amicus brief filed last week, EFF, along with four other free speech advocacy groups, asked the court to compel the California Attorney General to better justify the state's requirement that nonprofits turn over the names of their major donors.

The U.S. Court of Appeals for the Ninth Circuit in 2018 upheld California’s charitable donation reporting requirement, under which nonprofits must give state officials the names and addresses of their largest donors. The court, ruling in Americans For Prosperity Foundation v. Becerra, rejected arguments that the requirement infringes on donors’ First Amendment right to freely associate with others, and said the plaintiffs hadn’t shown specific evidence to back up claims that donors would be threatened or harassed if their names were disclosed.

The decision goes against years of Supreme Court precedent requiring the government, whether or not there’s direct evidence of harassment, to show it has a compelling interest justifying donor disclosure requirements that can divulge people’s political activities. Joined by the Freedom to Read Foundation, the National Coalition Against Censorship, the People United for Privacy Foundation, and Woodhull Freedom Foundation, we urged the Supreme Court to overturn the Ninth Circuit decision and rule that “exacting scrutiny” applies to any donor disclosure mandate by the government. By that we mean the government must show its interest is sufficiently important and the requirement carefully crafted to infringe as little as possible on donors’ First Amendment rights.

Even where there’s no specific evidence that donors are being harassed or groups can’t attract funders, the court has found, states wishing to intrude on Americans’ right to keep their political associations private must always demonstrate a compelling state interest in obtaining the information.

This principle was at the center of the Supreme Court's unanimous landmark 1958 decision blocking Alabama from forcing the NAACP to turn over the names and addresses of its members. The court never questioned the NAACP's concerns about harassment and retaliation, let alone suggested that the organization had the burden of making some threshold showing confirming the nature or specificity of its concerns. The Ninth Circuit, by contrast, said California's disclosure requirement posed minimal First Amendment harms because the Attorney General must keep the donor names confidential. It faulted the plaintiffs for not producing evidence that donors would be harassed if their names were revealed, and for not identifying donors whose willingness to contribute hinged on whether the Attorney General would disclose their identities.

The court is wrong on both counts.

First, pledging to keep the names confidential doesn't eliminate the requirement's speech-chilling effects, we said in our brief. Groups that challenge or oppose state policies have legitimate fears that members and donors, or their businesses, could become targets of harassment or retaliation by the government itself. It's easy to imagine that a Black Lives Matter organization, or an organization assisting undocumented immigrants at the border, would have justifiable concerns about turning over their donor or membership information to the government, regardless of whether the government shares that information with anyone else. If allowed to stand, the Ninth Circuit's decision gives the government unchecked power to collect information on people's political associations.

Second, the burden is on the government to show it has a compelling interest connected to the required information before forcing disclosures that could put people in harm’s way. As we stated in our brief: “Speaking out on contentious issues creates a very real risk of harassment and intimidation by private citizens and critically by the government itself. Furthermore, numerous contemporary issues—ranging from the Black Lives Matter movement, to gender identity, to immigration—arouse significant passion by people with many divergent beliefs. Thus, now, as much as any time in our nation’s history, it is necessary for individuals to be able to express and promote their viewpoints through associational affiliations without personally exposing themselves to a political firestorm or even governmental retaliation.”

The precedent established by this case will affect the associational rights of civil rights and civil liberties groups across the country. We urge the Supreme Court to affirm meaningful protections that nonprofits and their members and contributors need from government efforts to make them hand over donor or member lists.

Scholars Under Surveillance: How Campus Police Use High Tech to Spy on Students

Tue, 03/09/2021 - 10:58

Hailey Rodis, a student at the University of Nevada, Reno Reynolds School of Journalism, was the primary researcher on this report. We extend our gratitude to the dozens of other UNR students and volunteers who contributed data on campus police to the Atlas of Surveillance project. The report will be updated periodically with responses from university officials. These updates will be noted in the text. 

It may be many months before college campuses across the U.S. fully reopen, but when they do, many students will be returning to a learning environment that is under near constant scrutiny by law enforcement. 

A fear of school shootings and other campus crimes has led administrators and campus police to install sophisticated surveillance systems that go far beyond run-of-the-mill security camera networks to include drones, gunshot detection sensors, and much more. Campuses have also adopted automated license plate readers, ostensibly to enforce parking rules, but often that data feeds into the criminal justice system. Some campuses use advanced biometric software to verify whether students are eligible to eat in the cafeteria. Police have even adopted new technologies to investigate activism on campus. Often, there is little or no justification for why a school needs such technology, other than novelty or asserted convenience.

In July 2020, the Electronic Frontier Foundation and the Reynolds School of Journalism at University of Nevada, Reno launched the Atlas of Surveillance, a database of now more than 7,000 surveillance technologies deployed by law enforcement agencies across the United States. In the process of compiling this data we noticed a peculiar trend: college campuses are acquiring a surprising number of surveillance technologies more common to metropolitan areas that experience high levels of violent crime. 

So, we began collecting data from universities and community colleges using a variety of methods, including running specific search terms across .edu domains and assigning small research tasks to a large number of students using EFF's Report Back tool. We documented more than 250 technology purchases, ranging from body-worn cameras to face recognition, adopted by more than 200 universities in 37 states. As big as these numbers are, they are only a sliver of what is happening on college campuses around the world.

An interactive map of these deployments is available online (Google's privacy policy applies).

Technologies

Download the U.S. Campus Police Surveillance dataset as a CSV.

Body-Worn Cameras

Maybe your school has a film department, but the most prolific cinematographers on your college campus are probably the police. 

Since the early 2010s, body-worn cameras (BWCs) have become more and more common in the United States. This holds true for law enforcement agencies on university and college campuses. These cameras are attached to officers’ uniforms (often the chest or shoulder, but sometimes head-mounted) and capture interactions between police and members of the public. While BWC programs are often pitched as an accountability measure to reduce police brutality, in practice these cameras are more often used to capture evidence later used in prosecutions. 

Policies on these cameras vary from campus to campus—such as whether a camera should always be recording, or only under certain circumstances. But students and faculty should be aware that any interaction, or even near-interaction, with a police officer could be on camera. That footage could be used in a criminal case, but in many states, journalists and members of the public are also able to obtain BWC footage through an open records request.

Aside from your run-of-the-mill, closed-circuit surveillance camera networks, BWCs were the most prevalent technology we identified in use by campus police departments. This isn't surprising, since researchers have observed similar trends in municipal law enforcement. We documented 152 campus police departments using BWCs, but as noted, this is only a fraction of what is being used throughout the country. One of the largest rollouts began last summer when Pennsylvania State University announced that police on all 22 campuses would start wearing the devices. 

One of the main ways that universities have purchased BWCs is through funding from the U.S. Department of Justice's Bureau of Justice Assistance. Since 2015, more than 20 universities and community colleges have received funds through the bureau's Body-Worn Camera Grant Program established during the Obama administration. In Oregon, these funds helped the Portland State University Police Department adopt the technology well ahead of their municipal counterparts. PSU police received $20,000 in 2015 for BWCs, while the Portland Police Department does not use BWCs at all (Portland PD's latest attempt to acquire them in 2021 was scuttled due to budget concerns). 

Drones


Drones, also known as unmanned aerial vehicles (UAVs), are remote-controlled flying devices that can be used to surveil crowds from above or locations that would otherwise be difficult or dangerous to observe by a human on the ground. On many campuses, drones are purchased for research purposes, and it's not unusual to see a quadrotor (a drone with four propellers) buzzing around the quad. However, campus police have also purchased drones for surveillance and criminal investigations. 

Our data, which was based on a study conducted by the Center for the Study of the Drone at Bard College, identified 10 campus police departments that have drones:

  • California State University, Monterey Bay Police Department
  • Colorado State University Police Department
  • Cuyahoga Community College Police Department
  • Lehigh University Police Department
  • New Mexico State University Police Department
  • Northwest Florida State College Campus Police Department
  • Pennsylvania State University Police Department
  • University of Alabama, Huntsville Police Department
  • University of Arkansas, Fort Smith Police Department
  • University of North Dakota Police Department

One of the earliest campus drone programs originated at the University of North Dakota, where the campus police began deploying a drone in 2012 as part of a regional UAV unit that also included members of local police and sheriffs' offices. According to UnmannedAerial.com, the unit moved from a "reactive" to a "proactive" approach in 2018, allowing officers to carry drones with them on patrol, rather than retrieving them in response to specific incidents. 

The Northwest Florida State College Campus Police Department was notable for acquiring the most drones. While most universities had one, Northwest Florida State College police began using four drones in 2019, primarily to aid in searching for missing people, assessing traffic accidents, photographing crime scenes, and mapping evacuation routes.

The New Mexico State University Police Department launched its drone program in 2017 and, with the help of a local Eagle Scout in Las Cruces, built a drone training facility for local law enforcement in the region. In response to a local resident who questioned on Facebook whether the program was unnerving, a NMSU spokesperson wrote in 2019: 

[The program] thus far has been used to investigate serious traffic crashes (you can really see the skid marks from above), search for people in remote areas, and monitor traffic conditions at large events. They aren't very useful for monitoring campus residents (even if we wanted to, which we don't), since so many stay inside.

Not all agencies have taken such a limited approach. The Lehigh University Police Department acquired a drone in 2015 and equipped it with a thermal imaging camera. Police Chief Edward Shupp told a student journalist at The Brown and White that the only limits on the drone are Federal Aviation Administration regulations, that there are no privacy regulations for officers to follow, and that the department can use the drone "for any purpose" on and off campus.

Even when a university police department does not have its own drones, it may seek help from other local law enforcement agencies. Such was the case in 2017, when the University of California Berkeley Police Department requested drone assistance from the Alameda County Sheriff's Office to surveil protests on campus. 

Automated License Plate Readers

Students and faculty may complain about the price tag of parking passes, but there is also an unseen cost of driving on campus: privacy.

Automated license plate readers (ALPRs) are cameras attached to fixed locations or to security or parking patrol cars that capture every license plate that passes. The data is then uploaded to searchable databases with the time, date, and GPS coordinates. Through our research, we identified ALPRs at 49 universities and colleges throughout the country.
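
As a sketch of why those records are sensitive, here is a hypothetical plate-read record and lookup; the layout is invented for illustration and is not any vendor's actual schema:

```python
# Hypothetical ALPR read records -- layout invented for illustration,
# not any vendor's actual schema.
reads = [
    {"plate": "ABC1234", "time": "2018-04-02T08:55", "lat": 39.3940,
     "lon": -76.6090, "camera": "garage-entrance-2"},
    {"plate": "ABC1234", "time": "2018-04-02T17:42", "lat": 39.3901,
     "lon": -76.6105, "camera": "patrol-unit-1"},
]

def sightings(records, plate):
    """Every time and place a given plate was captured."""
    return [(r["time"], r["lat"], r["lon"]) for r in records if r["plate"] == plate]

# Years of such reads, queried by plate, amount to a driving history.
print(sightings(reads, "ABC1234"))
```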

ALPRs are used in two main capacities on college campuses. First, transportation and parking divisions have begun using ALPRs for parking enforcement, either attaching the cameras to parking enforcement vehicles or installing cameras at the entrances and exits to parking lots and garages. For example, the University of Connecticut Parking Services uses NuPark, a system that uses ALPRs to manage virtual permits and citations.

Second, campus police are using ALPRs for public safety purposes. The Towson University Police Department in Maryland, for example, scanned over 3 million license plates with its ALPRs in 2018 and sent that data to the Maryland Coordination and Analysis Center, a fusion center operated by the Maryland State Police. The university has a total of six fixed ALPR sites with 10 cameras, plus one mobile unit.

These two uses are not always separate: in some cases, parking officials share data with their police counterparts. At Florida Atlantic University, ALPRs are used for parking enforcement, but the police department also has access to this technology through their Communications Center, which monitors all emergency calls to the department, as well as fire alarms, intrusion alarms, and panic alarm systems. In California, the San Jose/Evergreen Community College District Police Department shared* ALPR data with its regional fusion center, the Northern California Regional Intelligence Center. 

March 10, 2021 Update: A spokesperson from San Jose/Evegreen Community College emailed this information: "While it is true that SJECCD did previously purchase two LPR devices, we never licensed the software that would allow data to be collected and shared, so no data from SJECCD’s LPR devices was ever shared with the Northern California Regional Intelligence Center. Further, the MOU that was signed with NCRIC expired in 2018 and was not renewed, so there is no existing MOU between SJECCD and the agency." We have updated the piece to indicate that the ALPR data sharing occurred in the past. 

Social Media Monitoring

Colleges and universities are also watching their students on social media, and not just to retweet or like a cute Instagram post about your summer internship. Campus public safety divisions employ social media monitoring software, such as Social Sentinel, to look for possible threats to the university, such as posts where students express suicidal ideation or threaten gun violence. We identified 21 colleges that use social media monitoring to watch their students and surrounding communities for threats. This count does not include higher education programs that monitor social media for marketing purposes.

This technology is used for public safety by both private and public universities. The Massachusetts Institute of Technology has used Social Sentinel since 2015, while the Des Moines Area Community College Campus Security spent $15,000 on Social Sentinel software in 2020. 

Social media monitoring technology may also be used to monitor students' political activities. Social Sentinel software was used to watch activists at the University of North Carolina who were protesting Silent Sam, a Confederate memorial on campus. As NBC reported, UNC Police and the North Carolina State Bureau of Investigation used a technique called "geofencing" to monitor the social media of people in the vicinity of the protests.

"This information was monitored in an attempt to prevent any potential acts of violence (such as those that have occurred at other public protests around the country, including Charlottesville) and to ensure the safety of all participants," a law enforcement spokesperson told NBC, adding that investigators only looked at public-facing posts and no records of the posts were kept after the event. However, the spokesperson declined to elaborate on how the technology may have been used at other public events. 

Biometric Identification


When we say that a student body is under surveillance, we also mean that literally. The term “biometrics” refers to physical and behavioral characteristics (your body and what you do with it) that can be used to identify you. Fingerprints are among the types of biometrics most familiar to people, but police agencies around the country are adopting computer systems capable of identifying people using face recognition and other sophisticated biometrics. 

At least four police departments at universities in Florida–University of South Florida, University of North Florida, University of Central Florida, and Florida Atlantic University–have access to a statewide face recognition network called Face Analysis Comparison and Examination System (FACES), which is operated by the Pinellas County Sheriff's Office. Through FACES, investigators can upload an image and search a database of Florida driver’s license photos and mugshots.  

University of Southern California in Los Angeles confirmed to The Fix that its public safety department uses face recognition. Until recently, however, the practice was more prevalent in the San Diego, California area.

In San Diego, at least five universities and college campuses participated in a face recognition program involving mobile devices. San Diego State University stood out for having conducted more than 180 face recognition searches in 2018. However, in 2019, this practice was suspended in California under a three-year statewide moratorium. 

Faces aren't the only biometric being scanned. In 2017, the University of Georgia introduced iris-scanning stations in dining halls, encouraging students to check in with their eyes to use their meal plans. This replaced an earlier program requiring hand scans, another form of biometric identification.

Gunshot Detection

Gunshot detection is a technology that involves installing acoustic sensors (essentially microphones) around a neighborhood or building. When a loud noise goes off, such as a gunshot or a firework, the sensors attempt to determine the location and then police receive an alert. 
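
The localization step typically rests on comparing arrival times across sensors (time difference of arrival). The following toy sketch shows the intuition only; the sensor positions and timings are made up, and real products use more sensors and proprietary methods:

```python
# Toy TDOA intuition -- sensor positions and timings are made up;
# real gunshot-detection products use more sensors and proprietary methods.
SPEED_OF_SOUND = 343.0  # meters per second

# (x, y) sensor positions in meters, mapped to the time (seconds after an
# arbitrary clock zero) at which each sensor heard the bang.
arrivals = {
    (0, 0): 0.3260,
    (400, 0): 1.0613,
    (0, 300): 0.6011,
}

# The earliest arrival identifies the nearest sensor...
nearest = min(arrivals, key=arrivals.get)
print("Nearest sensor:", nearest)

# ...and each pairwise difference in arrival time constrains the source
# to a hyperbola; intersecting hyperbolas from 3+ sensors yields a fix.
t1, t2 = arrivals[(0, 0)], arrivals[(400, 0)]
print(f"Source is {(t2 - t1) * SPEED_OF_SOUND:.0f} m closer to (0, 0) than to (400, 0)")
```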

Universities and colleges have begun using this technology in part as a response to fears of campus shootings. However, these technologies are often not as accurate as their sellers claim and could result in dangerous confrontations based on errors. These devices can also capture human voices engaged in private conversations, and prosecutors have attempted to use such recordings in court.

Our dataset has identified eight universities and colleges that have purchased gunshot-detection technology:

  • East Carolina University Police Department
  • Hampton University Police Department
  • Truett McConnell University Campus Safety Department
  • University of California San Diego Police Department
  • University of Connecticut Police Department
  • University of Maryland Police Department
  • University of West Georgia Police Department
  • Georgia Tech Police Department

Some universities and colleges purchase their own gunshot detection technology, while others have access to the software through partnerships with other law enforcement agencies. For example, the Georgia Tech Police Department has access to gunshot detection through the Fūsus Real-Time Crime Center. The University of California San Diego Police Department, on the other hand, installed its own ShotSpotter gunshot detection technology on campus in 2017. 

When a university funds surveillance technology, it can impact the communities nearby. For example, University of Nevada, Reno journalism student Henry Stone obtained documents through Nevada's public records law showing that UNR Cooperative Extension spent $500,000 in 2017 to install and operate ShotSpotter sensors in a 3-mile impoverished neighborhood of Las Vegas. The system is controlled by the Las Vegas Metropolitan Police Department.

Video Analytics

While most college campuses employ some sort of camera network, we identified two universities that are applying for extra credit in surveilling students: the University of Miami Police Department in Florida and the Grand Valley State University Department of Public Safety in Michigan. These universities apply advanced software—sometimes called video analytics or computer vision—to their camera footage, using algorithms to achieve round-the-clock monitoring that no team of officers watching cameras could match. Often employing artificial intelligence, video analytics systems can track objects and people from camera to camera, identify patterns and anomalies, and potentially conduct face recognition.

Grand Valley State University began using Avigilon video analytics technology in 2018. The University of Miami Police Department uses video analytics software combined with more than 1,300 cameras.

Three university police departments in Maryland also maintain lists of cameras owned by local residents and businesses. With these camera registries, private parties are asked to voluntarily provide information about the location of their security cameras, so that police can access or request footage during investigations. The University of Maryland, Baltimore Police Department, the University of Maryland, College Park Police Department and the Johns Hopkins University Campus Police are all listed on Motorola Solutions' CityProtect site as maintaining such camera registries. 

Two San Francisco schools—UC Hastings School of Law and UC San Francisco—explored leasing Knightscope surveillance robots in 2019 and 2020 to patrol their campuses, though the plans seem to have been scuttled by COVID-19. The robots are equipped with cameras, artificial intelligence, and, depending on the model, the ability to capture license plate data, conduct facial recognition, or recognize nearby phones. 

Conclusion 

Universities in the United States pride themselves on the free exchange of ideas and the ability for students to explore different concepts and social movements over the course of their academic careers. Unfortunately, for decades upon decades, police and intelligence agencies have also spied on students and professors engaged in social movements. High-tech surveillance only exacerbates the threat to academic freedom.

Around the country, cities are pushing back against surveillance by passing local ordinances requiring a public process and governing body approval before a police agency can acquire a new surveillance technology. Many community colleges do have elected bodies, and we urge these policymakers to enact similar policies to ensure adequate oversight of police surveillance. 

However, these kinds of policy-making opportunities often aren't available to students (or faculty) at state and private universities, whose leadership is appointed, not elected. We urge student and faculty associations to press their police departments to limit the types of data collected on students and to ensure a rigorous oversight process that allows students, faculty, and other staff to weigh in before decisions are made to adopt technologies that can harm their rights.

EFF, ACLU and EPIC File Amicus Brief Challenging Warrantless Cell Phone Search, Retention, and Subsequent Search

Mon, 03/08/2021 - 16:27

Last week, EFF—along with the ACLU and EPIC—filed an amicus brief in the Wisconsin Supreme Court challenging a series of warrantless digital searches and seizures by state law enforcement officers: the search of a person’s entire cell phone, the retention of a copy of the data on the phone, and the subsequent search of the copy by a different law enforcement agency. Given the vast quantity of private information on an ordinary cell phone, the police’s actions in this case, State v. Burch, pose a serious threat to digital privacy, violating the Fourth Amendment’s core protection against “giving police officers unbridled discretion to rummage at will among a person’s private effects.”

The Facts

In June 2016, the Green Bay Police Department was investigating a hit-and-run accident and vehicle fire. Since Burch had previously driven the vehicle at issue, the police questioned him. Burch provided an alibi involving text messages with a friend who lived near the location of the incident. To corroborate his account, Burch agreed to an officer’s request to look at those text messages on his cell phone. But, despite initially only asking for the text messages, the police used a sophisticated mobile device forensic tool to copy the contents of the entire phone. Then about a week later, after reviewing the cell phone data, a Green Bay Police officer wrote a report that ruled Burch out as a suspect, finding that there was “no information to prove [Burch] was the one driving the [vehicle] during the [hit-and-run] accident.”

But that’s not where things end. Also in the summer of 2016, a separate Wisconsin police agency, the Brown County Sheriff’s Office, was investigating a homicide. And in August, Burch became a suspect in that case. In the course of that investigation, the Brown County Sheriff's Office learned that the Green Bay Police Department had kept the download of Burch’s cell phone and obtained a copy of it. The Brown County Sheriff’s Office then used information on the phone to charge Burch with the murder.

Burch was ultimately convicted but argued that the evidence from his cell phone should have been suppressed on Fourth Amendment grounds. Last fall, a Wisconsin intermediate appellate court certified Burch’s Fourth Amendment challenge to the Wisconsin Supreme Court, writing that the “issues raise novel questions regarding the application of Fourth Amendment jurisprudence to the vast array of digital information contained in modern cell phones.” In December, the Wisconsin Supreme Court decided to review the case and asked the parties to address six specific questions related to the search and retention of the cell phone data.  

The Law

In a landmark ruling, Riley v. California, the U.S. Supreme Court established the general rule that police must get a warrant to search a cell phone. However, there are certain narrow exceptions to the warrant requirement, including when a person consents to the search of a device. While Burch did consent to a limited search of his phone, that consent did not give law enforcement limitless authority to search and retain a copy of his entire phone.

Specifically, in our brief, we argue that the state committed multiple independent violations of Burch’s Fourth Amendment rights. First, since Burch only consented to the search of his text messages, it was unlawful for the Green Bay police to copy his entire phone. And even if his consent extended beyond his text messages, he did not give the police the authority to search information on his phone having nothing to do with the initial investigation. Next, regardless of the extent of Burch’s consent, once the police determined Burch was no longer a suspect, the state lost virtually all justification for retaining Burch’s private information and should have returned it to him or purged it. Lastly, since the state had no compelling legal justification to hold Burch’s data after closing its initial investigation of him, the Brown County Sheriff’s warrantless search of the data retained by the Green Bay police was blatantly unlawful.

The Privacy Threat at Stake

The police’s actions here are not an outlier. In a recent investigative report, Upturn found that law enforcement agencies in all fifty states have access to the type of mobile forensic tools the police employed in this case. And although consent is a recognized exception to the rule that warrants are required for cell phone searches, Upturn’s study reveals that police rely on warrant exceptions like consent to use those tools at an alarming rate. For example, of the 1,583 cell phone extractions performed by the Harris County, Texas Sheriff’s Office from August 2015 to July 2019, 53% were conducted without a warrant, including searches based on consent and searches of phones the police classified as “abandoned/deceased.” Additionally, of the 497 cell phone extractions performed in Anoka County, Minnesota between 2017 and May 2019, 38% were consent searches.

In light of both how common consent-based searches are and their problematic nature (as a recent EFF post explains), the implications of the state’s core argument are all the more troubling. In the state’s view, no one—including suspects, witnesses, and victims—who consents to a search of their digital device in the context of one investigation could prevent law enforcement from storing a copy of their entire device in a database that could be mined years into the future, for any reason the government sees fit.

The state’s arguments would erase the hard-fought protections for digital data recognized in cases like Riley. The Wisconsin Supreme Court should recognize that consent does not authorize the full extraction, indefinite retention, and subsequent search of a person’s cell phone.

Washington: Everyone Deserves Reliable Internet

Mon, 03/08/2021 - 12:38

The coronavirus pandemic, its related stay-at-home orders, and its economic and social impacts have illustrated how important robust broadband service is to everything from home-based work to education. Yet, even now, many communities across America have been unable to meet their residents’ telecommunication needs. This is because of two problems: disparities in access to services that exacerbate race and class inequality—the digital divide—and the overwhelming lack of competition in service providers. At the heart of both problems is the current inability of public entities to provide their own broadband services.

This is why EFF joined a coalition of private-sector companies and organizations to support H.B. 1336, authored by Washington State Representative Drew Hansen. The bill would remove restrictions in current Washington law that prevent public entities from building and providing broadband services. In removing these restrictions, Hansen’s bill would allow public entities to create and implement broadband policy based on the needs of the people they serve, and to provide services unconstrained by, and not beholden to, big, unreliable ISPs.

Take Action

Washington: Demand Reliable Internet for Everyone

There are already two examples of community-provided telecommunications services showing what removing these constraints could do. Chattanooga, Tennessee has been operating a profitable municipal broadband network for 10 years and, in response to the pandemic, had the capacity to provide 18,000 schoolchildren with free 100/100 Mbps service so they could continue to learn. In Utah, 11 cities joined together to build an open-access fiber network that not only brought competitively priced high-speed fiber to residents, but also gave them more than a dozen choices of service provider, offered by small businesses. This multi-city partnership has been so successful that it added two new cities to the network in 2020.

The pandemic made it abundantly clear that communication services and capabilities are the platform, driver, and enabler of all that matters in communities. It is also abundantly clear that monopolistic ISPs have failed to meet the needs of communities. H.B. 1336 would correct that failure by allowing public entities to address the concerns and needs of the people they serve. If you are a Washington resident, please urge your lawmakers to support this bill. Broadband access is vitally important now and beyond the pandemic. This bill would not only loosen the hold of monopolistic ISPs, but also give everyone a chance at faster service to participate meaningfully in an increasingly digital world.
