EFF's Deeplinks Blog: Noteworthy news from around the internet

Google's Sensorvault Can Tell Police Where You've Been

Thu, 04/18/2019 - 17:53

Do you know where you were five years ago? Did you have an Android phone at the time? It turns out Google might know—and it might be telling law enforcement.

In a new article, the New York Times details a little-known technique increasingly used by law enforcement to figure out everyone who might have been within certain geographic areas during specific time periods in the past. The technique relies on detailed location data collected by Google from most Android devices as well as iPhones and iPads that have Google Maps and other apps installed. This data resides in a Google-maintained database called “Sensorvault,” and because Google stores this data indefinitely, Sensorvault “includes detailed location records involving at least hundreds of millions of devices worldwide and dating back nearly a decade.”

The data Google is turning over to law enforcement is so precise that one deputy police chief said it “shows the whole pattern of life.” It’s collected even when people aren’t making calls or using apps, which means it can be even more detailed than data generated by cell towers.

The location data comes from GPS signals, cellphone towers, nearby Wi-Fi devices and Bluetooth beacons. According to Google, users opt in to collection of the location data stored in Sensorvault. However, Google makes it very hard to resist opting in, and many users may not understand that they have done so. Also, Android devices collect lots of other location data by default, and it’s extremely difficult to opt out of that collection.

Using a single warrant—often called a “geo-fence” or “reverse location” warrant—police are able to access location data from dozens to hundreds of devices—devices that are linked to real people, many of whom (and perhaps in some cases all of whom) have no tie to criminal activity and have provided no reason for suspicion. The warrants cover geographic areas ranging from single buildings to multiple blocks, and time periods ranging from a few hours to a week.

So far, according to the Times and other outlets, this technique is being used by the FBI and police departments in Arizona, North Carolina, California, Florida, Minnesota, Maine, and Washington, although there may be other agencies using it across the country. But police aren’t limiting the use of the technique to egregious or violent crimes—Minnesota Public Radio reported the technique has been used to try to identify suspects who stole a pickup truck and, separately, $650 worth of tires. Google is getting up to 180 requests a week for data and is, apparently, struggling to keep up with the demand.

Law enforcement appears to be seeking warrants to access this extremely detailed location data. However, it’s questionable whether the affidavits supporting those warrants truly establish probable cause and also questionable whether judges fully understand what they’re authorizing when issuing these warrants.

According to the Times, the warrants frequently rely on an officer’s assertion that the fact that “Americans owned cellphones and that Google held location data on many of these phones” somehow supports probable cause for the warrant. The warrants also list GPS coordinates that supposedly “geo-fence” the geographic area for which they are requesting data, but many don’t include a map showing the area itself. Without a visual representation, there’s almost no way to tell how large or small the geographic area covered by the warrant is.
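To see why a bare list of GPS coordinates is so hard to evaluate, consider how the covered area scales with small differences in latitude and longitude. The sketch below, using invented corner coordinates of the kind such a warrant might list, approximates the area of a rectangular geofence; even a fence a few hundredths of a degree on each side spans many city blocks.

```python
import math

# Hypothetical corner coordinates, invented for illustration --
# the kind of bare numbers a "geo-fence" warrant might list with no map.
lat1, lon1 = 40.7480, -73.9900  # southwest corner
lat2, lon2 = 40.7561, -73.9810  # northeast corner

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters


def bbox_area_m2(lat1, lon1, lat2, lon2):
    """Approximate area of a small lat/lon bounding box in square meters."""
    # A degree of latitude is ~111 km everywhere; a degree of longitude
    # shrinks with the cosine of the latitude.
    mean_lat = math.radians((lat1 + lat2) / 2)
    height = math.radians(abs(lat2 - lat1)) * EARTH_RADIUS_M
    width = math.radians(abs(lon2 - lon1)) * EARTH_RADIUS_M * math.cos(mean_lat)
    return height * width


area = bbox_area_m2(lat1, lon1, lat2, lon2)
print(f"{area / 1e6:.2f} square km")  # well over half a square kilometer
```

A reader (or a judge) handed only the raw coordinates would have to do this arithmetic to learn that the fence sweeps in everyone across dozens of buildings.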

Law enforcement seems to be using a three-step process to learn the names of device holders (in some cases, a single warrant authorizes all three steps). In the first step, the officer specifies the area and time period of interest, and in response, Google gives the police information on all the devices that were there, identified by anonymous numbers—this step may reveal hundreds of devices.

After that, officers can narrow the scope of their request to fewer devices, and Google will release even more detailed data, including data on where devices traveled outside the original requested area and time period. This data, which still involves multiple devices, reveals detailed travel patterns. In the final step, detectives review that travel data to see if any devices appear relevant to the crime, and they ask for the users’ names and other information for specific individual devices.
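The three-step process described above can be sketched as a pair of data queries. This is an illustrative model with invented record and field names, not Google's actual schema or API: step one filters all stored pings down to anonymized device IDs inside the fence and time window, and step two pulls the full travel history for a narrowed set of those IDs.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Ping:
    device_id: str   # anonymized identifier at step one
    lat: float
    lon: float
    time: datetime

def step_one(pings, bbox, start, end):
    """Step 1: anonymized IDs of every device seen in the area and window."""
    (lat_min, lon_min), (lat_max, lon_max) = bbox
    return {
        p.device_id
        for p in pings
        if lat_min <= p.lat <= lat_max
        and lon_min <= p.lon <= lon_max
        and start <= p.time <= end
    }

def step_two(pings, ids_of_interest):
    """Step 2: full travel history, beyond the original fence and window,
    for the narrowed set of devices."""
    return [p for p in pings if p.device_id in ids_of_interest]

# Step 3 -- asking for the account holders' names for specific devices --
# happens through a further demand to the provider, not another query.
```

The sketch makes the privacy problem concrete: step one sweeps in every device in the area, and step two discloses those devices' movements outside the area the warrant described.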

This technique is problematic for several reasons. First, unlike other methods of investigation, the police don’t start with an actual suspect or even a target device—they work backward from a location and time to identify a suspect. This makes it a fishing expedition—the very kind of search that the Fourth Amendment was intended to prevent. Searches like these—where the only information the police have is that a crime has occurred—are much more likely to implicate innocent people who just happen to be in the wrong place at the wrong time. Every device owner in the area during the time at issue becomes a suspect—for no other reason than that they own a device that shares location information with Google.

Second, as the Supreme Court recognized in Carpenter v. United States last summer, detailed travel data like this can provide “an intimate window into a person's life, revealing not only his particular movements, but through them his ‘familial, political, professional, religious, and sexual associations.’” This is exactly what the deputy police chief recognized when he said Google location data “shows the whole pattern of life.”

Third, there’s a high probability the true perpetrator isn’t even included in the data disclosed by Google. For these kinds of warrants, officers are just operating off a hunch that the unknown suspect had a cellphone that generated location data collected by Google. This shouldn’t be enough to support probable cause, because it’s just as likely that the suspect wasn’t carrying an Android phone or using Google apps at the time.

Techniques like this also reveal big problems with our current warrant system. Even though the standard for getting a warrant is higher than for other legal procedures—and EFF pushes for a warrant requirement for digital data and devices—warrants, alone, are no longer enough to protect our privacy. Through a single warrant the police can access far more, and far more detailed, information about us than they ever could in the past. Here, the police are using a single warrant to get access to location information for hundreds of devices. In other contexts, through a single warrant, officers can access all the data on a cell phone or a hard drive; all email stored in a Google account (possibly going back years); and all information linked to a social media account (including photos, posts, private communications, and contacts).

We shouldn’t allow the government to have such broad access to our digital lives. One way we could limit access is by passing legislation that mandates heightened standards, minimization procedures, and particularity requirements for digital searches. We already have this in laws that regulate wiretaps, where police, in addition to demonstrating probable cause, must state that they have first tried other investigative procedures (or state why other procedures wouldn’t work) and also describe how the wiretap will be limited in scope and time.

The Fourth Amendment itself also supports limits on the scope of individual warrants. It states that warrants must “particularly describ[e] the place to be searched, and the persons or things to be seized.” However, many courts merely rubber stamp warrant requests without questioning the broad scope of the request.

As the Times article notes, this technique implicates innocent people and has a real impact on people’s lives. Even if you are later able to clear your name, if you spend any time at all in police custody, this could cost you your job, your car, and your ability to get back on your feet after the arrest. One man profiled in the Times article spent nearly a week in police custody and was having trouble recovering, even months after the arrest. He was arrested at work and subsequently lost his job. Due to the arrest, his car was impounded for investigation and later repossessed. These are the kinds of far-reaching consequences that can result from overly broad searches, so courts should subject geo-location warrants to far more scrutiny.

Related Cases: Carpenter v. United States

California Attorney General Must Investigate Improper Database Searches on Community Observers at Controversial Police Event

Thu, 04/18/2019 - 15:33

This is a guest post by Tracy Rosenberg, executive director of Media Alliance. It was originally published on the Media Alliance website.

For the last two years (2017 and 2018) of the Urban Shield weapons expo and SWAT drill in Alameda County, I was a community observer. I went as a citizen to see how my tax dollars were being spent, and as an activist/journalist so I could describe the event to others and to the media. What I didn’t know is that in exchange the Alameda County Sheriff would access my driving record, parking tickets and legal history through CLETS, the California Law Enforcement Telecommunications System.

Urban Shield, as a Homeland Security-funded regional training exercise for SWAT, Fire and Emergency Services, was not open to the public, although some volunteers were solicited to role-play victims and perpetrators in the counterterrorism scenarios. So the great battle that sprang up around the event starting in 2013 (protests in Oakland dislodging the weapons expo from the Downtown Marriott, reporters getting thrown out of the event, civil disobedience outside the gates, and finally bloodied heads at a Berkeley City Council meeting debating the city’s possible withdrawal from the event) was largely waged by people who had never seen the event, but who knew that militaristic training of local law enforcement wasn’t helping the growing problems with excessive use of force and the deaths of unarmed people.

When Alameda County finally got serious about debating whether the Urban Shield exercise should continue, a county task force was set up, and that task force set about gathering data, including organizing delegations of outside observers. I was a member of both of those delegations, a large one in 2017 and a smaller one in 2018. As a community observer, I was asked to register and fill out a form to produce a little badge on a rope with my name. The form included, in small letters, a disclaimer that a background check would be performed.

I am a privacy advocate, so a) I noticed and b) I felt uncomfortable. In practical terms, during both of my observation periods, I was surrounded by battalions of armed officers at all times, rarely less than 2 feet from me at any given moment. During my guided tour of the SWAT practices, I was escorted by armed sheriff personnel and driven about in a sheriff SUV, much as the KGB-guided tours of the Kremlin during the days of the Soviet Union were described to me as a child. While neither I, nor my fellow observers who included attorneys, medical doctors, and religious leaders, were criminals, the slightest untoward action would have resulted in being immediately blown to smithereens.

In a memo to CLETS subscribing entities sent in April 2018, the Department of Justice reminded law enforcement agencies that CLETS was not to be used to query individuals in the media and the Automated Criminal History System (ACHS) was not to be used for licensing, employment, or certification purposes.

On April 12, Media Alliance and the Electronic Frontier Foundation filed a request for investigation into possible misuse of the CLETS database and a request that the agency cease all similar background checks on journalists and advocates engaged in oversight roles.

Read the letter from EFF and Media Alliance to the California Department of Justice. 

In our inquiry, we added: “Community trust in law enforcement relies on transparency and respect for the watchdog roles of civil society and the news media. Accessing the sensitive data of these observers via CLETS discourages members of the community from participating in oversight activities.”

Californians Want and Deserve Stronger Privacy Laws

Thu, 04/18/2019 - 11:51

California made strides to protect privacy last year with the California Consumer Privacy Act (CCPA). This year, we want to make sure the state has the tools necessary to enforce that law, and that everyone can stand up for their own privacy without fear of discrimination.

That is why we are supporting both A.B. 1760 and S.B. 561: two essential bills to provide Californians with the privacy protection they want and deserve. We stand fully behind these bills and their authors, Assemblymember Buffy Wicks and Senator Hannah-Beth Jackson.

Wicks’ bill, A.B. 1760, would give California consumers the knowledge and protection to defend their privacy rights. It makes sure that they can learn which companies have received their personal information through a sale or other form of sharing. The bill also requires that all companies that share data, as well as those that sell it, get the consumer’s opt-in consent to do so.

This bill helps people become aware of the myriad ways personal information is shared in the modern digital world. It also ensures that companies cannot punish people for exercising their right to privacy by imposing a higher price or inferior service. No one should ever be punished for protecting their privacy, and privacy should not be a premium feature for those who can afford it.

Privacy legislation like A.B. 1760 has overwhelming public support. Recent polling from the American Civil Liberties Union found that 94 percent of Californians, across all demographics, want legislation with the protections A.B. 1760 provides.

We thank Assemblymember Wicks for her leadership in continuing to defend her bill, and standing up for Californians and their privacy, even in the face of heavy pushback from the technology industry. Ahead of A.B. 1760’s April 23 hearing before the Privacy Committee, however, the bill has had to undergo amendments. This included removing the private right of action—a right that 94 percent of Californians agree they should have.

That’s where S.B. 561 steps in. Sen. Jackson’s bill provides tools for the Attorney General’s Office to enforce the CCPA and hold companies accountable for their actions, and grants every person the right to take companies to court for violating their privacy rights. As EFF has said many times, the best way to hold companies accountable is to empower ordinary consumers to bring their own lawsuits against the companies that violate their privacy rights.

Sen. Jackson showed remarkable leadership by standing firm against critics to pass her bill out of a key committee, underscoring her commitment to giving the Attorney General and the people of California these crucial tools.

We support these complementary bills to give Californians the rights and power needed to stand up for their own privacy. Tell your lawmakers that it’s time for them to stand up for your privacy, too.

Take Action

Tell Lawmakers to Protect Your Privacy

The Ecuadorean Authorities Have No Reason to Detain Free Software Developer Ola Bini

Tue, 04/16/2019 - 22:39

Hours after the ejection of Julian Assange from the London Ecuadorean embassy last week, police officers in Ecuador detained the Swedish citizen and open source developer Ola Bini. They seized him as he prepared to travel from his home in Quito to Japan, claiming that he was attempting to flee the country in the wake of Assange’s arrest. Bini had, in fact, booked the vacation long ago, and had publicly mentioned it on his Twitter account before Assange was arrested.

Ola’s detention was full of irregularities, as documented by his lawyers. His warrant was for a “Russian hacker” (Bini is neither); he was not read his rights, allowed to contact his lawyer, or offered a translator.

The charges against him, when they were finally made public, are tenuous. Ecuador’s general prosecutor has stated that Bini was accused of “alleged participation in the crime of assault on the integrity of computer systems” and attempts to destabilize the country. The “evidence” seized from Ola’s home that Ecuadorean police showed journalists to demonstrate his guilt was nothing more than a pile of USB drives, hard drives, two-factor authentication keys, and technical manuals: all familiar property for anyone working in his field.

Ola is a free software developer, who worked to improve the security and privacy of the Internet for all its users. He has worked on several key open source projects, including JRuby, several Ruby libraries, as well as multiple implementations of the secure and open communication protocol OTR. Ola’s team at ThoughtWorks contributed to Certbot, the EFF-managed tool that has provided strong encryption for millions of websites around the world.

Like many people working on the many distributed projects defending the Internet, Ola has no need to work from a particular location. He traveled the world, but chose to settle in Ecuador because of his love of that country and of South America in general. At the time of his arrest, he was putting down roots in his new home, including co-founding Centro de Autonomia Digital, a non-profit devoted to creating user-friendly security tools, based out of Ecuador’s capital, Quito.

One might expect the Ecuadorean administration to hold up Bini as an example of the high-tech promise of the country, and use his expertise to help the new administration secure its infrastructure — just as the European Union, of which his native Sweden is a member, made use of Ola’s expertise when developing its government-funded DECODE privacy project.

Instead, Ecuador’s leadership has targeted him for arrest as part of a wider political process to distance itself from WikiLeaks. They have incorporated Ola into a media story claiming he was part of a gang of Russian hackers who planned to destabilize the country in retaliation for Julian Assange’s ejection.

At EFF, we are familiar with overzealous prosecutors attempting to implicate innocent coders by portraying them as dangerous cyber-masterminds, and with demonizing the tools and lifestyles of coders who work to defend the security of critical infrastructure, not undermine it. These cases are indicative of an inappropriate tech panic, and their claims are rarely borne out by the facts.

As expressed by the many technologists supporting Ola Bini in our statement of solidarity, Ecuador should drop all charges against him and allow Ola to return home to his family and friends. With this fanciful and unfounded prosecution, Ecuador’s leaders undermine their country’s reputation abroad and the independence of its judicial system.

How Landmark Technology’s Terrible Patent Has Survived

Tue, 04/16/2019 - 18:06
Stupid Patent of the Month

There’s an increasing insistence from the highest echelons of the patent world that patent abuse just isn’t a thing anymore. The Director of the U.S. Patent Office, Andrei Iancu, has called patent trolls—a term for companies that do nothing but collect patents and sue others—mere “monster stories,” and suggested in a recent oversight hearing that the term was simply name-calling.

But whatever you call them—trolls, non-practicing entities, or patent assertion entities—their business model, which involves stockpiling patents to sue productive companies rather than making goods or services, continues to thrive. It’s not hard to find examples of abusive patent litigation that make clear the threat posed by wrongly-issued patents is very real.

Take, for instance, the patents that Lawrence Lockwood owns. These patents have been used to sue companies, large and small, for nearly 20 years now. Through his company Landmark Technology and his earlier company PanIP, Lockwood has filed more than 100 lawsuits against businesses—candy companies, an educational toy maker, and an organic farm, to name a few. Because these companies engage in “sales and distribution via electronic transactions,” or use an automated system “for processing business and financial transactions,” Landmark says they infringe one of its patents.

Those lawsuits don’t account for the other companies that have received licensing demands but have not been sued in court. The numerous threats made with Lockwood’s patents are made clear both by news accounts of Lockwood’s activity and by the several small business owners who have reached out to EFF after being targeted.

Patent Office records show Lockwood first applied for a patent in 1984, but his litigation ramped up after he acquired U.S. Patent No. 6,289,319 back in September 2001. The document describes an “automatic business and financial transaction processing system,” which Lockwood has interpreted to give him rights to demand licensing fees from just about any web-based business. Upon receiving that patent, Lockwood promptly sent 100 letters to various e-commerce businesses, demanding $10,000 apiece. When that didn’t work, he started filing lawsuits.

For more than 15 years now, some companies have been paying thousands of dollars to license Lockwood’s patents rather than pay the legal fees required to defend themselves. Hiring attorneys to fight the patents would have cost far more, and Lockwood was keenly aware of this leverage.

“Do they really want to spend $1 million and two years of their life to invalidate a patent they can license for a couple thousand dollars?” Lockwood said in 2003, speaking to a Los Angeles Times reporter about his lawsuits. “People get divorced over this stuff. They have strokes over this.”

Sixteen years and more than 100 lawsuits later, stress and the expenses continue to mount for Lockwood’s targets. Through Landmark, Lockwood continues to demand money from businesses that provide basic e-commerce, although his price has gone up. Companies targeted by Landmark Technology patents in recent years have shown demand letters [PDF, PDF] indicating the company now demands around $65,000 to avoid a lawsuit. 

Not a single court has ever weighed in on the merits of Lockwood’s patent claim, according to court papers [PDF] filed in 2017 by one of his targets. 

Despite some court rulings that have helped cut back patent trolling over the years, nothing has slowed down Lockwood’s broad assault on Internet commerce. This year, through a newly created company called “Landmark Technology A,” Lockwood’s Patent No. 7,010,508—related to the ‘319 patent that came before it—has been used to sue two more companies: a specialty bottle-maker in south Seattle, and an Ohio company that sells safety equipment.

Based on Landmark’s history, it’s unlikely these two lawsuits will be the last. 

Continuations and Consequences

How did this happen, and how does it continue? Lockwood applied for his first solely-owned patent in 1984, getting it two years later. It describes a network of “information and sales terminals” that could “dispens[e] voice and video information, printed documents, and goods,” accepting credit card payments. There’s no evidence Lockwood developed any such network or even had the ability to do so. In fact, Lockwood, a former travel agent, reportedly admitted during a deposition that he had never used a personal computer “for any length of time,” according to the 2003 Los Angeles Times profile.  

In the mid-90s, Lockwood sued American Airlines for patent infringement, seeking to collect royalties on its SABRE flight reservation system, which he claimed infringed three of his patents. He lost that case when, in 1997, an appeals court agreed with the district court that his patent claims were not infringed and were invalid.

That wasn’t the end of Lockwood’s efforts to make money through patent litigation, though. He continued to get more patents, acquiring Patent No. 6,289,319 in 2001, and 7,010,508 in 2006. Both patents have been used in more than 85 lawsuits, according to the LexMachina legal database. He was able to get those patents despite the fact that they were based on a patent that had been found invalid. Even better for Lockwood, he was allowed to use the “priority date” of the earlier patent. That means the only prior art that could be used to invalidate the patent would have to be from earlier than that priority date—May 24, 1984. 

Led by a family-owned chocolate shop, a group of small businesses banded together to share legal costs and fight Lockwood’s PanIP. When they put up a website about PanIP’s abuse of the system, Lockwood sued the owner of the chocolate shop for defamation and trademark infringement.

The ‘319 patent, which is richly deserving of our “Stupid Patent of the Month” award, was issued because of a problem we’ve spoken about before—abuse of the continuation process.

The Patent Office allows applicants to file “continuation” applications with new claims, as long as they’re based on what was disclosed in previously-filed applications. This creates opportunities for applicants to game the system and get patents on advances they could not have developed. For example, even though Lockwood applied for the ‘319 patent in 1994, it’s a continuation of the original 1984 application—which means that only prior art from 1984 or earlier can be used to invalidate it. 

Landmark’s complaints demand money from operating businesses, claiming that because their systems process “business and financial transactions between entities from remote sites,” they infringe the ‘319 patent. Their recent complaint [PDF] against Illinois-based Learning Resources, Inc. includes a detailed 42-page claim chart [PDF] explaining the alleged infringement: it describes using a computer to order a toy on the defendant’s website.

That chart makes clear that Landmark’s patent doesn’t claim any particular technological advance—just the basic idea of transmitting data between networked computer terminals.  

This patent should be invalid under Section 101 of the patent laws for failing to claim an actual invention. At best, it describes basic computer technology—like an “on-line means for transmitting said information, inquiries, and orders”— to exchange information, and respond to orders. That is a ubiquitous and essential part of e-commerce, not a patent-eligible invention.

Right now, lobbyists are pushing for a wholesale re-write of Section 101, which is the best chance of stopping patents like this one early enough in a case to avoid spending hundreds of thousands of dollars on lawyers and expert witnesses. Drastic alterations to Section 101 could leave targets of Landmark in an even worse position—in order to get out of a multi-million dollar lawsuit, they’ll have to find published, pre-1984 prior art describing the precise, nearly indefinable contours of Lockwood’s “invention,” and invest huge sums on prior art investigations as well as expert witness reports. 

Before lawmakers distort Section 101 so that it’s nearly useless, they should consider campaigns like Landmark’s. It involves an “inventor” who’s long been focused on litigating patents, not creating new innovations—and who admits to leveraging the high cost of litigation defense against small businesses. Lowering the bar for patent-eligibility even further will do far more to threaten innovation than encourage it.

Related Cases: Abstract Patent Litigation

Julian Assange's Prosecution is about Much More Than Attempting to Hack a Password

Tue, 04/16/2019 - 15:52

The recent arrest of Wikileaks editor Julian Assange surprised many by hinging on one charge: a Computer Fraud and Abuse Act (CFAA) charge for a single, unsuccessful attempt to reverse engineer a password. This might not be the only charge Assange ultimately faces. The government can add more before the extradition decision, and possibly even after that if it gets a waiver from the UK or otherwise. Yet some have claimed that, as the indictment stands now, the single CFAA charge is a sign that the government is not aiming at journalists. We disagree. This case seems to be a clear attempt to punish Assange for publishing information that the government did not want published, not merely to prosecute a single failed attempt at cracking a password. And having watched CFAA criminal prosecutions for many years, we think that neither journalists nor the rest of us should be breathing a sigh of relief.

The CFAA grants broad discretion to prosecutors and has been used to threaten, prosecute, and civilly sue security researchers, competitors, and disloyal employees, among others. It has notoriously severe penalties, often applied out of all proportion to the offense. Here the government says the single charge of attempted, apparently unsuccessful assistance in password cracking can carry five years in prison, although under the sentencing guidelines the actual sentence would likely be lower. Remember, there is no parole in the federal judicial system. 

While we can all agree that we need some method for prosecuting malicious computer crimes, the lack of clear limits and exceptions, combined with draconian penalties, make the CFAA a powerful hammer that prosecutors can use against those who act against the wishes of a computer owner. That’s an especially broad reach in this age of networked computers. As the tragic prosecution of our friend Aaron Swartz for downloading scientific articles demonstrated, this also isn’t the first time that the CFAA has been used to bludgeon people for trying to inform the public.

Since journalists often work to provide us with information that the powerful do not want us to see, we do not believe this will be the last time we see the CFAA used to prosecute efforts central to journalism. 

Of course, breaking into computers and cracking passwords in many contexts is rightly illegal. When analyzing the worst abuses of the CFAA, EFF has argued that the statute should only be applied to serious attempts to circumvent technological access barriers, including passwords. But even if the government has made a sufficient claim of a 'legitimate' CFAA violation here, it still must prove every element beyond a reasonable doubt, and it should do so without relying on irrelevant arguments about whether Wikileaks was truly engaged in journalism.

Whistleblower Chelsea Manning was charged in 2010 for her role in the release of approximately 700,000 military war and diplomatic records to WikiLeaks, which created front page news stories around the world and spurred significant reforms. The disclosure of classified Iraq war documents exposed human rights abuses and corruption the government had kept hidden from the public. While the disclosures riveted the globe, they also angered, embarrassed, and inconvenienced many, including the U.S. Departments of Defense and State, although no injuries or deaths were ever demonstrated as a result.

The Assange indictment, in contrast, arises from conversations between Assange and Manning about an apparently unsuccessful attempt to access other classified documents. Here’s why it seems clear to us that the government’s charge of an attempted conspiracy to violate the CFAA is being used as thin cover for attacking the journalism itself.

First, the government spends much of the indictment referencing regular journalistic techniques that are irrelevant to the CFAA claim. The indictment includes the actual elements of the CFAA claim in paragraph 15. Here’s an attempt to translate it in plain English: pursuant to an agreement aimed at giving Assange access to secret government information, Manning gave Assange a scrambled portion of a password that would allow Manning to log into a computer in a way that would hide her identity from the government. Assange’s only alleged illegal act was trying to unscramble a portion of that password.

If the government wasn’t aiming further, it could have stopped there. But it didn’t. Instead it included descriptions of normal journalistic practices in the modern age: using a secure chat service, using cloud services to transfer files, removing usernames, and deleting logs to protect the source’s identity. The government includes in the indictment a cryptic comment by Assange: “curious eyes never run dry in my experience,” which it characterizes as “encouraging” violations of the law. The government’s inclusion of these facts, as well as its reference to the Espionage Act, is a strong signal that it believes these other actions should also be viewed as part of a crime.   

On top of that, as they have since the 1990s when they want to feed the “hacker madness” narrative, the prosecutors added unnecessary computer allegations to the indictment. The indictment mentions Manning’s use of the Linux operating system, darkly described as “special software . . . to access the computer file” that contained the password. It describes the use of a secure online chat service called Jabber. It even includes the fact that Manning used a “special folder” in Wikileaks’ cloud-based file transfer system. These facts are completely irrelevant to the single CFAA claim, but they, along with the Justice Department’s press release headline trumpeting Assange’s “hacking,” appear aimed at linking and even equating journalism and use of normal technical tools with the underlying crime. 

Second, President Trump himself has blurred the distinction between what Wikileaks is accused of here and mainstream journalism. In an interview just after the arrest, Trump received a lot of scorn for saying that he did not know much about Wikileaks, an obvious lie. But what he said next should also be raising concerns about Trump’s view of the legality of normal journalistic practices: “I guess the concept is perhaps [Assange] is a reporter type and, you know, The New York Times is doing the same thing maybe and The Washington Post maybe the same thing.” Trump has made no secret of his hatred for these outlets and desire to create more liability for journalists revealing facts and news he doesn’t like to the public. His words here should give journalists pause.

Third, legally speaking, the claim in the indictment itself seems very small. The underlying act Assange is accused of—a single failed attempt to figure out a password—was not even important enough to be included in the formal CFAA charges leveled against Manning, even though it was known to the prosecutors and reported about long ago. The government made its CFAA case against Manning on her separate use of an “unauthorized” program (Wget) to actually access other materials she provided to Wikileaks, in violation of the government’s terms of use. For separate reasons, this was not a legitimate use of the CFAA, as EFF argued in its amicus brief in support of Manning. The misapplication of the CFAA to Manning is actually still pending in the appeal of Manning’s case, which continues despite the commutation of her sentence.

In the prosecutors’ desperation to find something, anything, to charge Assange, the U.S. government had to reach beyond the acts it used to court-martial Manning into something that apparently didn’t happen. While attempted violations of the CFAA are illegal, as with many other crimes, this one remains remarkably small potatoes: a failed attempt with no apparent harm. It’s difficult to imagine that any U.S. Attorneys’ office would even investigate, much less impanel a grand jury and demand extradition for an attempted, unsuccessful effort to unscramble a single password if it wasn’t being done to punish the later publication of other materials.

From where we sit this prosecution feels sadly familiar. Just a few years ago this same statute was used by federal prosecutors to find something, anything, they could use to charge our friend Aaron Swartz. Swartz angered the government, first by downloading a bunch of judicial documents from the Pacer system and later, by downloading scientific journal articles from JSTOR. The government then continued the JSTOR prosecution even when JSTOR, the alleged victim, asked them to stop. Facing the CFAA’s draconian penalties, Swartz took his own life.

From these and other CFAA prosecutions we’ve tracked over at least the past 20 years, it’s nearly impossible to weigh the relatively narrow charge used to arrest Assange without considering the nearly decade-long effort by the U.S. government to find a way to punish Wikileaks for publishing information vital to the public interest. Anyone concerned about press freedom should be concerned about this application of the CFAA. 

Related Cases: Government demands Twitter records of Birgitta Jonsdottir

Media Alert: EFF Argues Against Forced Unlocking of Phone in Indiana Supreme Court

Tue, 04/16/2019 - 13:17
Justices to Consider Fifth Amendment Right Against Self-Incrimination

Wabash, IN—At 10 a.m. on Thursday, April 18, the Electronic Frontier Foundation (EFF) will argue to the Indiana Supreme Court that police cannot force a criminal suspect to turn over a passcode or otherwise decrypt her cell phone. The case is Katelin Seo v. State of Indiana.

The Fifth Amendment of the Constitution states that people cannot be forced to incriminate themselves, and it’s well settled that this privilege against self-incrimination covers compelled “testimonial” communications, including physical acts. However, courts have split over how to apply the Fifth Amendment to compelled decryption of encrypted devices.

Along with the ACLU, EFF responded to an open invitation from the Indiana Supreme Court to file an amicus brief in this important case. In Thursday’s hearing, EFF Senior Staff Attorney Andrew Crocker will explain that the forced unlocking of a device requires someone to disclose “the contents of his own mind.” That is analogous to written or oral testimony, and is therefore protected under the U.S. Constitution.

Thursday’s hearing is in Indiana’s Wabash County to give the public an opportunity to observe the work of the court. Over 750 students are scheduled to attend the argument. It will also be live-streamed.

Hearing in Katelin Seo v. State of Indiana

EFF Senior Staff Attorney Andrew Crocker

April 18, 10 a.m.

Ford Theater
Honeywell Center
275 W. Market Street
Wabash, Indiana 46992


Contact: Andrew Crocker, Senior Staff Attorney

Victory! Fairfax, Virginia Judge Finds That Local Police Use of ALPR Violates the State’s Data Act

Tue, 04/16/2019 - 11:02

Thanks to a recent ruling by Fairfax County Circuit Court Judge Robert J. Smith, drivers in Fairfax County, Virginia need not worry that local police are maintaining ALPR records of their travels for work, prayer, protest or play.

Earlier this month, Judge Smith ordered an injunction against the use of the license plate database, finding that the “passive” use of Fairfax County Police Department’s Automated License Plate Reader (ALPR) system violated Virginia’s Government Data Collection and Dissemination Practices Act (Data Act). This means that the Fairfax County Police will be required to purge its database of ALPR data that isn’t linked to a criminal investigation and stop using ALPRs to passively collect data on people who aren’t suspected of criminal activity. The ruling came in response to a complaint brought by the ACLU of Virginia in support of Harrison Neal, a local resident whose license plate had been recorded at least twice by the Fairfax police.

Judge Smith had previously dismissed the case, ruling in 2016 that license plate numbers were not covered by the state law’s limits on government data collection because, standing alone, they did not identify a specific individual. Virginia’s Supreme Court overturned that ruling.

Information collected using ALPR data is personally identifiable. 

EFF and the Brennan Center for Justice filed an amicus brief when the case came before the Supreme Court of Virginia, arguing that information collected using ALPR systems is personally identifiable. Thus, the Data Act applied and required the Fairfax police to purge plate information they collect using the system.

In its reversal, the Virginia Supreme Court found that the photographic and location data stored in the department’s database did meet the Data Act’s definition of “personal information,” but sent the case back to the Circuit Court to determine whether the database met the Act’s definition of an “information system.” Judge Smith’s ruling affirms EFF’s view that the ALPR system does indeed provide a means through which a link to the identity of a vehicle’s owner can be readily made.

Often mounted on police vehicles or attached to fixed structures like street lights and bridges, ALPR systems comprise high-speed cameras connected to computers that photograph every license plate that passes. The systems then log, associate, and store the time, date, and location a particular car was encountered. This allows police to identify and record the locations of vehicles in real-time and correlate where those vehicles have been in the past.

Some ALPR systems are capable of scanning up to 1,600 plates per minute, capturing the plate numbers of millions of innocent, law-abiding drivers.

Using this information, police are able to establish driving patterns for individual cars. Some ALPR systems are capable of scanning up to 1,600 plates per minute, capturing the plate numbers of millions of innocent, law-abiding drivers who aren’t under any kind of investigation and just living their daily lives.
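The privacy harm described above comes from how trivially a passive database of plate reads can be queried. The following is a minimal, hypothetical sketch: the plate numbers, timestamps, and locations are invented, and a real ALPR database would hold millions of records, but the basic operation — grouping reads by plate and sorting by time to reconstruct one driver's movements — is the same.

```python
# Hypothetical illustration of how passively collected ALPR reads,
# each just (plate, timestamp, location), reveal a travel pattern.
from collections import defaultdict

# Invented sample reads; a real system logs every passing plate.
reads = [
    ("ABC1234", "2019-03-01T08:02", "Main St & 1st Ave"),
    ("XYZ9876", "2019-03-01T08:03", "Main St & 1st Ave"),
    ("ABC1234", "2019-03-01T08:41", "Courthouse garage"),
    ("ABC1234", "2019-03-02T08:05", "Main St & 1st Ave"),
]

def travel_pattern(reads, plate):
    """Return the time-ordered locations recorded for one plate."""
    hits = defaultdict(list)
    for p, ts, loc in reads:
        hits[p].append((ts, loc))
    return sorted(hits[plate])  # ISO timestamps sort chronologically

# One query over the passive database exposes a repeated morning route.
print(travel_pattern(reads, "ABC1234"))
```

Even in this toy form, the query shows the driver of "ABC1234" at the same intersection on consecutive mornings — exactly the "driving patterns" the court found the Data Act forbids police to amass without suspicion.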

The Fairfax County Police Chief says he has asked the county attorney to appeal the ruling. However, based on the broad language in the Virginia Supreme Court's original opinion, we think it's unlikely the trial court's opinion would be overturned on appeal. Although the court's ruling technically only applies to the Fairfax County Police Department, all Virginia state police agencies using ALPR should take note: passive collection and use of ALPR data violates state law and must be stopped.

EFF’s Tweet About an Overzealous DMCA Takedown Is Now Subject to an Overzealous Takedown

Sun, 04/14/2019 - 23:31

Update, 4/15/2019: EFF's tweet has been restored.

Get ready for a tale as good as anything you’d see on television. Here’s the sequence of events: the website TorrentFreak publishes an article about a leak of TV episodes, including shows from the network Starz. TorrentFreak tweets its article; Starz sends a copyright takedown notice. TorrentFreak writes about the takedown, including a comment from EFF. EFF tweets the article about the takedown and the original article. EFF’s tweet…gets hit with a takedown.

TorrentFreak’s original article about leaked episodes of television does contain a few screenshots of some of the leaked episodes—enough to establish the veracity of the story. It does not contain links to download the episodes, a fact to keep in mind as this story goes on.

TorrentFreak then tweeted a link to its article, which did contain a thumbnail image, but not one that matched any of the screenshots in the article. An agency acting on behalf of Starz then used the Digital Millennium Copyright Act (DMCA) to have Twitter remove the tweet, alleging copyright infringement. The complaint TorrentFreak received says the article has “images of unreleased episodes” of the show American Gods. It also maintains that TorrentFreak supplies “information about their illegal availability.”

Here’s the thing: TorrentFreak reporting about an illegal event is not illegal. Reporting about copyright infringement is not infringement. The few thumbnails—including a single image from American Gods—act as proof of the story being reported and certainly don’t replace watching entire episodes of television. (If you don’t believe me, go look at a single screenshot from a show and figure out if it scratches the same itch as watching a whole hour of TV.) The screenshot also illustrated the watermarks in the leaked episode, which suggest that the leak came from a pre-release screener copy sent to TV critics, as the TorrentFreak article discusses.

Articles reporting on true events are textbook examples of fair use. Using the DMCA in this way is an attack on journalism and fair use. Which is what we would have said if asked.

Oh, wait. We were asked. TorrentFreak followed up its first article with one about the takedown it received. They reached out for comment, and, among other things, EFF Senior Staff Attorney Kit Walsh told TorrentFreak:

Starz has no right to silence TorrentFreak’s news article or block links to it. The article reports that there are people on the Internet infringing copyright, but that is a far cry from being an infringement itself. The screenshots are important parts of the reporting that validate the facts being reported. Starz should withdraw its takedown and refrain from harassing journalists in the future.

As is our wont, we tweeted out a link to TorrentFreak’s original article, with text nearly identical to Walsh’s statement to TorrentFreak. A few days later, we also received a takedown and our tweet was blocked. At this point, you may have noticed just how far removed we are from anything that remotely resembles copyright infringement.

The DMCA notice we received from Twitter was sent by Starz.  In the field labeled “links to original work,” Starz wrote “n/a.” To reiterate: in the field about where the original work being infringed on can be located, the answer is “not applicable.” Under “Description of infringement,” it says, “Link to bootleg.” There’s no bootleg link in any of the articles or tweets.

Sending a DMCA complaint requires a sworn statement that the person sending the complaint actually believes it to be copyright infringement. Look at this sequence of events again and try to imagine sending a takedown for our tweet honestly believing it to be infringement.

The DMCA process allows us to send a counterclaim explaining that the tweet is not infringement and directing Twitter to restore it unless Starz files a copyright infringement lawsuit. We have done so.

DMCA claims can be intimidating, especially to people who don’t know the ins and outs of the process. Fortunately, EFF is an organization that definitely knows its rights and how to exercise them. And we’ll keep calling out abusive takedowns and helping people defend their rights to speak on the Internet.

Four Steps Facebook Should Take to Counter Police Sock Puppets

Sun, 04/14/2019 - 22:14

Despite Facebook’s repeated warnings that law enforcement is required to use “authentic identities” on the social media platform, cops continue to create fake and impersonator accounts to secretly spy on users. By pretending to be someone else, cops are able to sneak past the privacy walls users put up and bypass legal requirements that might require a warrant to obtain that same information.

The most recent example, and one of the most egregious, was revealed by The Guardian this week. The U.S. Department of Homeland Security operated a complex network of dummy Facebook profiles and pages to trick immigrants into registering with a fake college, the University of Farmington. The operation netted more than 170 arrests. Meanwhile, Customs and Border Protection issued a privacy impact assessment that encourages investigators to conceal their social media accounts.

Last fall, after the Memphis Police Department was caught using fake profiles to monitor Black Lives Matter activists, Facebook added new language to its law enforcement guidelines emphasizing that this practice was not permitted. Facebook also removed the offending accounts and sent Memphis a stern warning not to do it again. However, Facebook has proven resistant to sending warning letters to every agency caught red-handed; recently it turned down a request by EFF that it confront the San Francisco Police Department after court records revealed its use of fake accounts in criminal investigations.

This latest DHS investigation uncovered by The Guardian, as well as The Root’s report revealing other agencies that authorize undercover cops to friend people on Facebook, indicates that much more needs to be done.

EFF is now calling on Facebook to escalate the matter with law enforcement in the United States. Facebook should take the following actions to address the proliferation of fake/impersonator Facebook accounts operated by law enforcement, in addition to suspending the fake accounts.

  1. As part of its regular transparency reports, Facebook should publish data on the number of fake/impersonator law enforcement accounts identified, what agencies they belonged to, and what action was taken.
  2. When a fake/impersonator account is identified, Facebook should alert the users and groups that interacted with the account whether directly or indirectly. These interactions include, but are not limited to, a friend request, Messenger messages, a comment, membership in a group, or being shown an advertisement. The user should know what agency operated the account and how long it was in operation. Facebook should also add a notification to the agency’s page informing the public that the agency is known to have created fake/impersonator law enforcement accounts.
  3. Facebook should further amend its “Amended Terms for Federal, State and Local Governments in the United States” to make it explicitly clear that, by agreeing to the terms, the agency is agreeing not to operate fake/impersonator profiles on the platform. Facebook has the right to take actions in response to violation of their terms, but when they do so, Facebook should be fair and consistent with the Santa Clara Principles.
  4. Facebook should review the department policies for social media use by law enforcement agencies. When law enforcement has a written policy of engaging in fake/impersonator law enforcement accounts in violation of the “Amended Terms for Federal, State and Local Governments in the United States,” Facebook should add a notification to the agency’s page to inform users of the law enforcement policy.

Facebook’s practice of taking down these individual accounts when it learns about them from the press (or from EFF) is insufficient; we believe they are only the tip of a much larger iceberg. We often only discover the existence of fake law enforcement profiles months, if not years, after an investigation has concluded. These four changes are relatively light lifts that would enhance transparency and establish real consequences for agencies that deliberately violate the rules.

Don’t Force Web Platforms to Silence Innocent People

Fri, 04/12/2019 - 18:09

The U.S. House Judiciary Committee held a hearing this week to discuss the spread of white nationalism, online and offline. The hearing tackled hard questions about how online platforms respond to extremism online and what role, if any, lawmakers should play. The desire for more aggressive moderation policies in the face of horrifying crimes is understandable, particularly in the wake of the recent massacre in New Zealand. But unfortunately, looking to Silicon Valley to be the speech police may do more harm than good.

When considering measures to discourage or filter out unwanted activity, platforms must consider how those mechanisms might be abused by bad actors. Similarly, when Congress considers regulating speech on online platforms, it must consider both the First Amendment implications and how its regulations might unintentionally encourage platforms to silence innocent people.

When considering measures to discourage or filter out unwanted activity, platforms must consider how those mechanisms might be abused by bad actors.

Again and again, we’ve seen attempts to more aggressively stamp out hate and extremism online backfire in colossal ways. We’ve seen state actors abuse flagging systems in order to silence their political enemies. We’ve seen platforms inadvertently censor the work of journalists and activists attempting to document human rights atrocities.

But there’s a lot platforms can do right now, starting with more transparency and visibility into platforms’ moderation policies. Platforms ought to tell the public what types of unwanted content they are attempting to screen, how they do that screening, and what safeguards are in place to make sure that innocent people—especially those trying to document or respond to violence—aren’t also censored. Rep. Pramila Jayapal urged the witnesses from Google and Facebook to share not just better reports of content removals, but also internal policies and training materials for moderators.

Better transparency is not only crucial for helping to minimize the number of people silenced unintentionally; it’s also essential for those working to study and fight hate groups. As the Anti-Defamation League’s Eileen Hershenov noted:

To the tech companies, I would say that there is no definition of methodologies and measures and the impact. […] We don’t have enough information and they don’t share the data [we need] to go against this radicalization and to counter it.

Along with the American Civil Liberties Union, the Center for Democracy and Technology, and several other organizations and experts, EFF endorses the Santa Clara Principles, a simple set of guidelines to help align platform moderation practices to human rights and civil liberties principles. The Principles ask platforms

  • to be honest with the public about how many posts and accounts they remove,
  • to give notice to users who’ve had something removed about what was removed, and under what rule, and
  • to give those users a meaningful opportunity to appeal the decision.

Hershenov also cautioned lawmakers about the dangers of heavy-handed platform moderation, pointing out that social media offers a useful view for civil society and the public into how and where hate groups organize: “We do have to be careful about whether in taking stuff off of the web where we can find it, we push things underground where neither law enforcement nor civil society can prevent and deradicalize.”

Before they try to pass laws to remove hate speech from the Internet, members of Congress should tread carefully. Such laws risk pushing platforms toward a more highly filtered Internet, silencing far more people than was intended. As Supreme Court Justice Anthony Kennedy wrote in Matal v. Tam (PDF) in 2017, “A law that can be directed against speech found offensive to some portion of the public can be turned against minority and dissenting views to the detriment of all.”

Join EFF and Help Guide Our International Policy Work

Fri, 04/12/2019 - 17:52

Do you want to help defend civil liberties around the world? Are you an expert in copyright, intermediary liability, and European lawmaking? A rare opportunity to help guide EFF in those arenas is now available—we're hiring an International Policy Director.

EFF weighs in when international lawmaking has a huge potential impact on the Internet for everyone. That’s why we banded with organizations around the world to stop the Trans-Pacific Partnership, whose copyright and anti-hacking measures would have changed the global Internet for the worse. It’s also why we fought to stop Article 13 in Europe, which now threatens to usher in a new era of a more highly filtered web. The policy fights that will change the Internet for everyone frequently happen in international forums.

The International Policy Director will act as a bridge between EFF's legal strategy and our international policy work. You don’t have to be a lawyer to apply, but lawyers are highly encouraged. The Director will work closely with others across EFF and lead a small team of senior policy experts, so communication skills and management experience are essential.

EFF has highly competitive housing benefits to make living in the Bay Area a reality. We also have a warm, welcoming, and intellectually challenging workplace culture.

If you think you might be the right person for the role, please apply. Otherwise, please forward the listing on to your appropriate contacts.