
Cloud Users and Providers Win Big Privacy Victory – U.S. v. Warshak

The Sixth Circuit ruled on Tuesday that criminal investigators must obtain a warrant to seize user data from cloud providers, voiding parts of the notorious Stored Communications Act.  The SCA allowed investigators to demand that providers turn over user data under certain circumstances (e.g., data stored more than 180 days) without obtaining a warrant supported by probable cause.

I have a very long piece analyzing the decision, published on CNET this evening.  See “Search Warrants and Online Data: Getting Real.” (I also wrote extensively about digital search and seizure in “The Laws of Disruption.”)  The opinion is from the erudite and highly readable Judge Danny Boggs.  The case is notable if for no other reason than its detailed and lurid description of the business model for Enzyte, a supplement that promises to, well, you know what it promises to do….

The SCA’s looser rules for search and seizure created real headaches for cloud providers and weird results for criminal defendants.  Emails stored on a user’s home computer, or on a service provider’s computer for less than 180 days, get full Fourth Amendment protection.  But after 180 days the same emails, stored remotely, lose some of that protection under some circumstances.  As the commercial Internet has evolved (the SCA was written in 1986), these provisions have become increasingly anomalous, random, and worrisome, both to users and service providers, as well as to a wide range of public interest groups.

Why 180 days?  I haven’t had a chance to check the legislative history, but my guess is that in 1986 data left on a service provider’s computer would have taken on the appearance of being abandoned.

Assuming the Sixth Circuit decision is upheld and embraced by other circuits, digital information will finally be covered by traditional Fourth Amendment protections regardless of its age or location.  That means the government’s ability to seize emails (Tuesday’s case applied only to emails, but other user data would likely get the same treatment) without a warrant based on probable cause will turn on whether the defendant had a “reasonable expectation of privacy” in the data.  If the answer is yes, a warrant will be required.

(If the government seizes the data anyway, the evidence could be excluded as a penalty.  The “exclusionary rule” was not invoked in the Warshak case, however, because the government acted on a good-faith belief that the SCA was constitutional.)

Where does the “reasonable expectation of privacy” test come from?  The Fourth Amendment protects against “unreasonable” searches and seizures, and, since the Katz decision in 1967, Fourth Amendment cases turn on an analysis of whether a criminal defendant’s expectation of privacy in whatever evidence is obtained was reasonable.

Katz involved an electronic listening device attached to the outside of a phone booth—an early form of electronic surveillance.  Discussions about whether a phone conversation could be “searched” or “seized” quickly got metaphysical, so the U.S. Supreme Court decided that what the Fourth Amendment really protected was the privacy interest a defendant had in whatever evidence the government obtained.  The “reasonable expectation of privacy” covered all the defendant’s “effects,” whether tangible or intangible.

Which means, importantly, that not all stored data will require a warrant.  Only stored data that the user reasonably expects the service provider to keep private gets that protection.  Information of any kind that the defendant makes no effort to keep private—talking on a cell phone in a public place where anyone can hear, for example—can be used as evidence without a warrant.

Here the Warshak court suggested that if the terms of service made explicit that user data would not be kept private, then users wouldn’t have a reasonable expectation of privacy that the Fourth Amendment protected.  On the other hand, terms that merely reserved the service provider’s own right to audit or inspect user data did not defeat a reasonable expectation of privacy, as the government had long argued they did.

An interesting test case, not discussed in the opinion, would be Twitter.  Could a criminal investigator demand copies of a defendant’s Tweets without a warrant, arguing that Tweets are by design public information?  On the one hand, Twitter users can exclude followers they don’t want.  But at the same time, allowed followers can retweet without the permission of the original poster.   So, is there a reasonable expectation of privacy here?

There’s no answer to this simplified  hypothetical (yet), but it is precisely the kind of analysis that courts perform when a defendant challenges the government’s acquisition of evidence without full Fourth Amendment process being followed.

To pick an instructive example involving tangible evidence, last month appellate Judge Richard Posner wrote a fascinating decision that shows the legal mind in its most subtle workings.  In U.S. v. Simms, the defendant challenged the admission of evidence that stemmed from a warranted search of his home and vehicle.  The probable cause that led to the warrant was the discovery of marijuana cigarette butts in the defendant’s trash.  The defendant argued that the search leading to the warrant violated the Fourth Amendment, since the trash can was behind a high fence on his property.

Courts have held that once trash is taken to the curb, the defendant has no “reasonable” expectation of privacy and therefore is deemed to consent to a police officer’s search of that trash.  But trash cans behind a fence are generally protected by the Fourth Amendment, subject to several other exceptions.

Here Judge Posner noted that the defendant’s city had an ordinance prohibiting residents from taking trash to the curb during the winter, out of concern that cans would interfere with snow plowing.  Instead, the “winter rules” required trash collectors to retrieve the cans from the resident’s property, and required residents to leave a safe and unobstructed path to wherever the cans were stored.  Since the winter rules were in effect, the cans were behind the fence but the gate had been left open (perhaps stuck in the snow), and the police searched them on trash pickup day, the search did not violate the defendant’s reasonable expectation of privacy.

For better or worse, this is the kind of analysis judges must perform in the post-Katz era, when much of what we consider to be private is not memorialized in papers or other physical effects but is instead intangible—the state of our blood chemistry, information stored in various databases, heat given off and detectable by infrared scanners.

The good news is that the Warshak case is a big step toward bringing digital information under that understanding of the Fourth Amendment.  Search and seizure is evolving to catch up with the reality of our digital lives.

Domain Name Seizures and the Limits of Civil Forfeiture

I was quoted this morning in Sara Jerome’s story for The Hill on the weekend seizures of domain names the government believes are being used to sell black market, counterfeit, or copyright-infringing goods.

The seizures take place in the context of an ongoing investigation in which prosecutors make purchases from the sites and then determine that the goods violate trademarks or copyrights or both.

Several reports, including from CNET, The Washington Post and Techdirt, wonder how the government can seize a domain name without a trial and, indeed, without even giving notice to the registered owners.

The short answer is the federal civil forfeiture law, which has been the subject of increasing criticism unrelated to Internet issues.  (See http://law.jrank.org/pages/1231/Forfeiture-Constitutional-challenges.html for a good synopsis of recent challenges, most of which fail.)

The purpose of forfeiture laws is to help prosecutors fit the punishment to the crime, especially when restitution to victims or recovery of the costs of prosecution is otherwise unlikely to have a deterrent effect, largely because the criminal has no assets to attach.  In the war on drugs, for example, prosecutors can now seize pretty much any property used in the commission of the crime, including a seller’s vehicle or boat.  (See U.S. v. 1990 Toyota 4 Runner for an example and explanation of the limits of federal forfeiture law.)

Forfeiture laws have been increasingly used to fund large-scale enforcement operations, and many local and federal police now develop budgets for these activities based on assumptions about the value of seized property.  This has led to criticism that the police are increasingly only enforcing the law when doing so is “profitable.”  But police point out that in an age of regular budget cuts, forfeiture laws are all they have in the way of leverage.

Sometimes the forfeiture proceedings happen after the trial, but as with the domain names, prosecutors also have the option to seize property before any indictment and well before any trial or conviction.  Like a search warrant, a warrant to seize property requires only that a judge find probable cause that the items to be seized fit the requirements of forfeiture—in general, that they were used in the commission of a crime.

The important difference between a seizure and a finding of guilt—the difference that allows the government to operate with such a free hand—is that the seizure is only temporary.  A forfeiture, as here, isn’t permanent until there is a final conviction.

The pre-trial seizure is premised on the idea that during the investigation and trial, prosecutors need to secure the items so that the defendant doesn’t destroy or hide them.

If the defendant is acquitted, the seized items are returned.  Or, if the items turn out not to be subject to forfeiture (e.g., they were not used in the commission of any crimes for which the defendant is ultimately convicted), they are again returned.  Even before trial, owners can sue to quash the seizure order on the grounds that there was insufficient (that is, less than probable) cause to seize the items in the first place.

All of that process takes time and money, however, and many legal scholars believe that in practice forfeiture reverses the presumption of innocence, forcing the property owner to prove the property is “innocent” in some way.

In current (and expanding) usage, forfeiture may also work to short-circuit the due process rights of the property owner.  (Or owners—indeed, seized property may be jointly owned, and the victim of the crime may be one of the owners, as when the family car is seized because the husband used it to liaise with a prostitute.)

That’s clearly a concern with the seizure of domain names.  This “property” is essential for the enterprise being investigated to do business of any kind.  So seizing the domain names before indictment and trial effectively shuts down the enterprise indefinitely.  (Reports are, however, that most if not all of the enterprises involved in this weekend’s raid have already returned under new domain names.)

If prosecutors drag their heels on prosecution, the defendant gets “punished” anyway.  And even if the defendant is never charged or is ultimately acquitted, nothing in the forfeiture statute requires the government to make them whole for the losses suffered during the period when their property was held by the prosecution.  The loss of the use of a car or boat, for example, may require the defendant to rent another while waiting for the wheels of justice to turn.

For a domain name, even a short seizure effectively erases any value the asset has.  Even if ultimately returned, it’s now worthless.

Clearly the prosecutors here understand that a pre-trial seizure is effectively a conviction.  Consider the following quote from Immigration and Customs Enforcement Director John Morton, who said at a press conference today, “Counterfeiters are prowling in the back alleys of the Internet, masquerading, duping and stealing.”  Or consider the wording of the announcement placed on seized domain names (see http://news.cnet.com/8301-1023_3-20023918-93.html), implying at the least that the sites were guilty of illegal acts.

There’s no requirement that the government explain that the seizures are only temporary measures designed to safeguard property that may be evidence of a crime or an asset used to commit it.  Nor does it have to acknowledge that none of the owners of the domain names seized has yet been charged with or convicted of any crime.  But the farther prosecutors push the forfeiture statute, the bigger the risk that courts or Congress will someday step in to pull them back.

After the deluge, more deluge

If I ever had any hope of “keeping up” with developments in the regulation of information technology—or even the nine specific areas I explored in The Laws of Disruption—that hope was lost long ago.  The last few months I haven’t even been able to keep up just sorting the piles of printouts of stories I’ve “clipped” from just a few key sources, including The New York Times, The Wall Street Journal, CNET News.com and The Washington Post.

I’ve just gone through a big pile of clippings that cover April-July.  A few highlights:  In May, YouTube surpassed 2 billion daily hits.  Today, Facebook announced it has more than 500,000,000 members.   Researchers last week demonstrated technology that draws device power from radio waves.

If the size of my stacks is any indication of activity level, the most contentious areas of legal debate are, not surprisingly, privacy (Facebook, Google, Twitter et al.), infrastructure (Net neutrality, Title II and the wireless spectrum crisis), copyright (the secret ACTA treaty, LimeWire, Google v. Viacom), free speech (China, Facebook “hate speech”), and cyberterrorism (Sen. Lieberman’s proposed legislation expanding executive powers).

There was relatively little development in other key topics, notably antitrust (Intel and the Federal Trade Commission appear close to resolution of the pending investigation; Comcast/NBC merger plodding along).  Cyberbullying, identity theft, spam, e-personation and other Internet crimes have also gone eerily, or at least relatively, quiet.

Where Are We?

There’s one thing that all of the high-volume topics have in common—they are all converging on a single question: the appropriate balance between private and public control over the Internet ecosystem.  When I first started researching cyberlaw in the mid-1990s, that was truly an academic question, one discussed by very few academics.

But in the interim, TCP/IP, with no central authority or corporate owner, has pursued a remarkable and relentless takeover of every other networking standard.  The Internet’s packet-switched architecture has grown from simple data file exchanges to email, the Web, voice, video, social networking and the increasingly hybrid forms of information exchange performed by consumers and businesses.

As its importance to both economic and personal growth has expanded, anxiety over how and by whom that architecture is managed has understandably developed in parallel.

(By the way, as Morgan Stanley analyst Mary Meeker pointed out this spring, consumer computing has overtaken business computing as the dominant use of information technology, with a trajectory certain to open a wider gap in the future.)

The locus of the infrastructure battle today, of course, is in the fundamental questions being asked about the very nature of digital life.  Is the network a piece of private property operated subject to the rules of the free market, the invisible hand, and a wondrous absence of transaction costs?  Or is it a fundamental element of modern citizenship, overseen by national governments following their most basic principles of governance and control?

At one level, that fight is visible in the machinations between governments (U.S. vs. E.U. vs. China, e.g.) over what rules apply to the digital lives of their citizens.  Is the First Amendment, as John Perry Barlow famously said, only a local ordinance in Cyberspace?  Do E.U. privacy rules, being the most expansive, become the default for global corporations?

At another level, the lines have been drawn even more sharply between public and private parties, and in side-battles within those camps.  Who gets to set U.S. telecom policy—the FCC or Congress, federal or state governments, public sector or private sector, access providers or content providers?  What does it really mean to say the network should be “nondiscriminatory,” or that all packets should be treated anonymously and equally, following a “neutrality” principle?

As individuals, are we consumers or citizens, and in either case how do we voice our view of how these problems should be resolved?  Through our elected representatives?  Voting with our wallets?  Through the media and consumer advocates?

Not to sound too dramatic, but there’s really no way to see these fights as anything less than a struggle for the soul of the Internet.  As its importance has grown, so have the stakes—and the immediacy—in establishing the first principles, the Constitution, and the scriptures that will define its governance structure, even as it continues its rapid evolution.

The Next Wave

Network architecture and regulation aside, the other big problems of the day are not as different as they seem.  Privacy, cybersecurity and copyright are all proxies in that larger struggle, and in some sense they are all looking at the same problem through a slightly different (but equally mis-focused) lens.  There’s a common thread and a common problem:  each of them represents a fight over information usage, access, storage, modification and removal.  And each of them is saddled with terminology and a legal framework developed during the Industrial Revolution.

As more activities of all possible varieties migrate online, for example, very different problems of information economics have converged under the unfortunate heading of “privacy,” a term loaded with 19th and 20th century baggage.

Security is just another view of the same problems.  And here too the debates (or worse) are rendered unintelligible by the application of frameworks developed for a physical world.  Cyberterror, digital warfare, online Pearl Harbor, viruses, Trojan Horses, attacks—the terminology of both sides assumes that information is a tangible asset, to be secured, protected, attacked, destroyed by adverse and identifiable combatants.

In some sense, those same problems are at the heart of struggles over whether to apply the architecture of copyright created during the 18th-century Enlightenment, when information of necessity had to take physical form to be used widely.  Increasingly, governments and private parties with vested interests are looking to ISPs and content hosts to act as the police force for so-called “intellectual property” such as copyrights, patents, and trademarks.  (Perhaps because it’s increasingly clear that national governments and their physical police forces are ineffectual or worse.)

Again, the issues are of information usage, access, storage, modification and removal, though the rhetoric adopts the unhelpful language of pirates and property.

So, in some weird and at the same time obvious way, net neutrality = privacy = security = copyright.  They’re all different and equally unhelpful names for the same (growing) set of governance issues.

At the heart of these problems—both of form and substance—is the inescapable fact that information is profoundly different from traditional property.  It is not like a bushel of corn or a barrel of oil.  For one thing, it never has been tangible, though when it needed to be copied onto media to be distributed it was easy enough to conflate the medium with the message.

The information revolution’s revolutionary principle is that information in digital form is at last what it was always meant to be—an intangible good, which follows a very different (for starters, a non-linear) life-cycle.  The ways in which it is created, distributed, experienced, modified and valued don’t follow the same rules that apply to tangible goods, try as we do to force-fit those rules.

Which is not to say there are no rules, or that there can be no governance of information behavior.  And certainly not to say information, because it is intangible, has no value.  Only that for the most part, we have no real understanding of what its unique physics are.  We barely have vocabulary to begin the analysis.

Now What?

Terminology aside, I predict with the confidence of Moore’s Law that businesses and consumers alike will increasingly find themselves more involved than anyone wants to be in the creation of a new body of law better suited to the realities of digital life.  That law may take the traditional forms of statutes, regulations, and treaties, or follow even older models of standards, creeds, ethics and morals.  Much of it will continue to be engineered, coded directly into the architecture.

Private enterprises in particular can expect to be drawn deeper (kicking and screaming perhaps) into fundamental questions of Internet governance and information rights.

Infrastructure and application providers, as they take on more of the duties historically thought to be the domain of sovereigns, are already being pressured to maintain the environmental conditions for a healthy Internet.  Increasingly, they will be called upon to define and enforce principles of privacy and human rights, to secure the information environment from threats both internal (crime) and external (war), and to protect “property” rights in information on behalf of “owners.”

These problems will continue to be different and the same, and will be joined by new problems as new frontiers of digital life are opened and settled.  Ultimately, we’ll grope our way toward the real question:  what is the true nature of information and how can we best harness its power?

Cynically, it’s lifetime employment for lawyers.  Optimistically, it’s a chance to be a virtual founding father.  Which way you look at it will largely determine the quality of the work you do in the next decade or so.

The Seven Deadly Sins of Title II Reclassification (NOI Remix)

Better late than never, I’ve finally given a close read to the Notice of Inquiry issued by the FCC on June 17th.  (See my earlier comments, “FCC Votes for Reclassification, Dog Bites Man”.)  In some sense there was no surprise to the contents; the Commission’s legal counsel and Chairman Julius Genachowski had both published comments over a month before the NOI that laid out the regulatory scheme the Commission now has in mind for broadband Internet access.

Chairman Genachowski’s “Third Way” comments proposed an option that he hoped would satisfy both extremes.  The FCC would abandon efforts to find new ways to meet its regulatory goals using “ancillary jurisdiction” under Title I (an avenue the D.C. Circuit had wounded, but hadn’t actually exterminated, in the Comcast decision), but at the same time would not go as far as some advocates urged and put broadband Internet completely under the telephone rules of Title II.

Instead, the Commission would propose a “lite” version of Title II, based on a few guiding principles:

  • Recognize the transmission component of broadband access service—and only this component—as a telecommunications service;
  • Apply only a handful of provisions of Title II (Sections 201, 202, 208, 222, 254, and 255) that, prior to the Comcast decision, were widely believed to be within the Commission’s purview for broadband;
  • Simultaneously renounce—that is, forbear from—application of the many sections of the Communications Act that are unnecessary and inappropriate for broadband access service; and
  • Put in place up-front forbearance and meaningful boundaries to guard against regulatory overreach.

The NOI pretends not to take a position on any of the three possible options – (1) stick with Title I and find a way to make it work, (2) reclassify broadband and apply the full suite of Title II regulations to Internet access providers, or (3) compromise on the Chairman’s Third Way, applying Title II but forbearing from all but the six sections noted above—at least, for now (see ¶ 98).  It asks for comments on all three options, however, and for a range of extensions and exceptions within each.

I’ve written elsewhere (see “Reality Check on ‘Reclassifying’ Broadband” and “Net Neutrality and the Inconvenient Constitution”) about the dubious legal foundation on which the FCC rests its authority to change the definition of “telecommunications service” to suddenly include broadband Internet access, after successfully (and correctly) convincing the U.S. Supreme Court that it did not.  That discussion will, it seems, have to wait until its next airing in federal court following inevitable litigation over whatever course the FCC takes.

This post deals with something altogether different—a number of startling tidbits that found their way into the June 17th NOI.  As if Title II weren’t dangerous enough, there are hints and echoes throughout the NOI of regulatory dreams to come.  Beyond the hubris of reclassification, here are seven surprises buried in the 116 paragraphs of the NOI—its seven deadly sins.  In many cases the Commission is merely asking questions.  But the questions hint at a much broader—indeed overwhelming—regulatory agenda that goes beyond Net Neutrality and the undoing of the Comcast decision.

Pride:  The folly of defining “facilities-based” provisioning – The FCC is struggling to find a way to apply reclassification only to the largest ISPs – Comcast, AT&T, Verizon, Time Warner, etc.  But the statutory definition of “telecommunications” doesn’t give them much help.  So the NOI invents a new distinction, referred to variously as “facilities-based” providers (¶ 1) or providers of an actual “physical connection,” (¶ 106) or limiting the application of Title II just to the “transmission component” of a provider’s consumer offering (¶ 12).

All the FCC has in mind here is “a commonsense definition of broadband Internet service” (¶ 107), which it never provides, but in any case the devil is surely in the details.  First, it’s not clear that making that distinction would actually achieve the goal of applying the open Internet rules—network management, good or evil, largely occurs well above the transmission layers in the IP stack.

The sin here, however, is that of unintentional over-inclusion.  If Title II is applied to “facilities-based” providers, it could sweep in application providers who increasingly offer connectivity as a way to promote usage of their products.

Limiting the scope of reclassification just to “facilities-based” providers who sell directly to consumers doesn’t eliminate the risk of over-inclusion.  Some application providers, for example, offer a physical connection in partnership with an ISP (think Yahoo and Covad DSL service) and many large application providers own a good deal of fiber optic cable that could be used to connect directly with consumers.  (Think of Google’s promise to build gigabit test beds for select communities.)  Municipalities are still working to provide WiFi and WiMax connections, again in cooperation with existing ISPs.  (EarthLink planned several of these before running into financial and, in some cities, political trouble.)

There are other services, including Internet backbone provisioning, that could also fall into the Title II trap (see ¶ 64).  Would companies such as Akamai, which offer caching services, suddenly find themselves subject to some or all of Title II?  (See ¶ 58)  How about Internet peering agreements (unmentioned in the NOI)?  Would these private contracts be subject to Title II as well?  (See ¶ 107)

Lust:  The lure of privacy, terrorism, crime, copyright – Though the express purpose of the NOI is to find a way to apply Title II to broadband, the Commission just can’t help lusting after some additional powers it appears interested in claiming for itself.  Though the Commissioners who voted for the NOI are adamant that the goal of reclassification is not to regulate “the Internet” but merely broadband access, the siren call of other issues on the minds of consumers and lawmakers may prove impossible to resist.

Recognizing, for example, that the Federal Trade Commission has been holding hearings all year on the problems of information privacy, the FCC now asks for comments about how it can use Title II authority to get into the game (¶¶ 39, 52, 82, 83, 96), promising of course to “complement” whatever actions the FTC is planning to take.

Cyberattacks and other forms of terrorism are also on the Commission’s mind.  In his separate statement, for example, Chairman Genachowski argues that the Comcast decision “raises questions about the right framework for the Commission to help protect against cyber-attacks.”

The NOI includes several references to homeland security and national defense—this in the wake of publicity surrounding Sen. Lieberman’s proposed law to give the President extensive emergency powers over the Internet.  (See Declan McCullagh, “Lieberman Defends Emergency Net Authority Plan.”)  Lieberman’s bill puts the power squarely in the Department of Homeland Security—is the FCC hoping to use Title II to capture some of that power for itself?

And beyond shocking acts of terrorism, does the FCC see Title II as a license to require ISPs to help enforce laws against other, lesser crimes, including copyright infringement, libel, bullying and cyberstalking, e-personation—and the rest?  Would Title II give the agency the ability to extend its content “decency” rules, limited today to broadcast television and radio, to Internet content, as Congress has unsuccessfully tried to help the Commission do on three separate occasions?

(Just as I wrote that sentence, the U.S. Court of Appeals for the Second Circuit ruled that the FCC’s recent effort to craft more aggressive indecency rules, applied to fleeting expletives on live broadcasts, violates the First Amendment.  The Commission is having quite a bad year in the courts!)

Anger:  Sharing the pain of CALEA – That last paragraph is admittedly speculation.  The NOI contains no references to copyright, crime, or indecency.  But here’s a law enforcement sin that isn’t speculative.  The NOI reminds us that separate from Title II, the FCC is required by law to enforce the Communications Assistance for Law Enforcement Act (CALEA). (¶ 89) CALEA is part of the rich tapestry of federal wiretap law, and requires “telecommunications carriers” to implement technical “back doors” that make it easier for federal law enforcement agencies to execute wiretapping orders.  Since 2005, the FCC has held that all facilities-based providers are subject to CALEA.

Here, the Commission assumes that reclassification would do nothing to change the broader application of CALEA already in place, and seeks comment on “this analysis.”  (¶ 89)  The Commission wonders how that analysis impacts its forbearance decisions, but I have a different question.  Assuming the definition of “facilities-based” Internet access providers is as muddled as it appears (see above), is the Commission intentionally or unintentionally extending the coverage of CALEA to anyone selling Internet “connectivity” to consumers, even those for whom connectivity is offered simply to promote applications?

Again, would residents of communities participating in Google’s fiber optic test bed awake to discover that all of that wonderful data they are now pumping through the fiber is subject to capture and analysis by any law enforcement officer holding a wiretapping order?  Oops?

Gluttony:  The Insatiable Appetite of State and Local Regulators – Just when you think the worst is over, there’s a nasty surprise waiting at the end of the NOI.  Under Title II, the Commission reminds us, many aspects of telephone regulation are not exclusive to the FCC but are shared with state and even local regulatory agencies. 

Fortunately, to avoid the catastrophic effects of imposing perhaps hundreds of different and conflicting regulatory schemes on broadband Internet access, the FCC has the authority to preempt state and local regulations that conflict with FCC “decisions,” and to preempt state application of those parts of Title II from which the FCC forbears.

But here’s the billion dollar question, which the NOI saves for the very last (¶ 109):  “Under each of the three approaches, what would be the limits on the states’ or localities’ authority to impose requirements on broadband Internet service and broadband Internet connectivity service?”

What indeed?  One of the provisions the FCC would not apply under the Third Way, for example, is § 253, which gives the Commission the authority to “preempt state regulations that prohibit the provision of telecommunications services.” (¶ 87)  So does the Third Way taketh away federal authority only to giveth to state and local regulators?  Is the only way to avoid state and local regulations—oh, well, if you insist—to go to full Title II?  And might the FCC decide in any case to exercise its discretion, now or in the future, to allow local regulation of Internet connectivity?

What might those regulations look like?  One need only review the history of local telephone service to recall the rate-setting labyrinths, taxes, micromanagement of facilities investment and deployment decisions—not to mention the scourge of corruption, graft and other government crimes that have long accompanied the franchise process.  Want to upgrade your cable service?  Change your broadband provider?  Please file the appropriate forms with your state or local utility commission, and please be patient.

Fear-mongering?  Well, consider a proposal that will be voted on this summer at the annual meeting of the National Association of Regulatory Utility Commissioners.  (TC-1 at page 30)  The commissioners will decide whether to urge the FCC to adopt what it calls a “fourth way” to fix the Net Neutrality problem.  Their description of the fourth way speaks for itself.  It would consist of:

“bi-jurisdictional regulatory oversight for broadband Internet connectivity service and broadband Internet service which recognizes the particular expertise of States in: managing front-line consumer education, protection and services programs; ensuring public safety; ensuring network service quality and reliability; collecting and mapping broadband service infrastructure and adoption data; designing and promoting broadband service availability and adoption programs; and implementing  competitively neutral pole attachment, rights-of-way and tower siting rules and programs.”

The proposal also asks the FCC, should it stick to the Third Way approach, to add in several other provisions left out of Chairman Genachowski’s list, including one (again, § 253) that would preserve the state’s ability to help out.

Or consider a proposal currently being debated by the California Public Utilities Commission.  California, likewise, would like to use reclassification as the key that unlocks the door to “cooperative federalism,” and has its own list of provisions the FCC ought not to forbear under the Third Way proposal.

Among other things, the CPUC’s general counsel is unhappy with the definition the FCC proposes for just who and what would be covered by Title II reclassification.  The CPUC proposal argues for a revised definition that “should be flexible enough to cover unforeseen technological [sic] in both the short- and long-term.”

The CPUC also proposes that the FCC add Voice over Internet Protocol telephony to the list of services regulated under Title II, even though VoIP is often a software application riding well above the “transmission” component of broadband access.

California is just the first (tax-starved) state I looked at.  I’m sure there are and will be others who will respond hungrily to the Commission’s invitation to “comment” on the appropriate role of state and local regulators under either a full or partial Title II regime.  (¶¶ 109, 110)

Sloth:  The sleeping giant of basic web functions – browsers, DNS lookup, and more – The NOI admits that the FCC is a bit behind the times when it comes to technical expertise, and they would like commenters to help them build a fuller record.  Specifically, ¶ 58 asks for help “to develop a current record on the technical and functional characteristics of broadband Internet service, and whether those characteristics have changed materially in the last decade.”

In particular, the NOI wants to know more about the current state of web browsers, DNS lookup services, web caching, and “other basic consumer Internet activities.”

Sounds innocent enough, but those are very loaded questions.  In the Brand X case, in which the U.S. Supreme Court agreed with the FCC that broadband Internet access over cable fit the definition of a Title I “information service” and not a Title II “telecommunications service,” browsers, DNS lookup and other “basic consumer Internet activities” were crucial to the analysis of the majority.  Because cable (and, later, it was decided, DSL) providers offered not simply a physical connection but also supporting or “enhanced” services to go with it—including DNS lookup, home pages, email support and the like—their offering to consumers was not simple common carriage.

Justice Scalia disagreed, and in dissent made the argument that cable Internet was in fact two separable offerings – the physical connection (the packet-switched network) and a set of information services that ran on top of that connection.  Consumers used some information services from the carrier, and some from other content providers (other web sites, e.g.).  Those information services were rightly left unregulated under Title I, but Congress intended the transmission component, according to Justice Scalia, to be treated as a common carrier “telecommunications service” under Title II.

The Third Way proposal in large part adopts the Scalia view of the Communications Act (see ¶¶ 20, 106), despite the fact that it was the FCC that argued vigorously against that view all along, and despite the fact that a majority of the Court agreed with it.

By asking these innocent questions about technical architecture, the FCC appears to be hedging its bets for a certain court challenge.   Any effort to reclassify broadband Internet access will generate long, complicated, and expensive litigation.  What, the courts will ask, has driven the FCC to make such an abrupt change in its interpretation of terms like “information service” whose statutory definitions haven’t changed since 1996?

We know the real reason is little more than the Chairman’s desire to undo the Comcast decision, of course, and thereafter complete the process of adopting the open Internet rules proposed in October.  But in the event that proves an unavailing argument, it would be nice to be able to argue that the nature of the Internet and Internet access have fundamentally changed since 2005, when Brand X was decided.  If it’s clear that basic Internet services have become more distinct from the underlying physical connection, at least in the eyes of consumers, so much the better.

Or perhaps something bigger is lumbering lazily through the NOI.  Perhaps the FCC is considering whether “basic Internet activities” (browsing, searching, caching, etc.) have now become part of the definition of basic connectivity.  Perhaps Title II, in whole or in part, will apply not only to facilities-based providers, but to those who offer basic Internet services essential for web access.  (Why extend Title II to providers of “basic” information service?  See below, “Greed.”)  If so, the exception will swallow the rule, and just about everything else that makes the Internet ecosystem work.

Vanity:  The fading beauty of the cellular ingénue – Perhaps the most worrisome feature of the proposed open Internet rules is that they would apply with equal force to wired and wireless Internet access.  As any consumer knows, however, those two types of access couldn’t be more different. 

Infrastructure providers have made enormous progress in innovating improvements to existing infrastructure—especially the cable and copper networks.  New forms of access have also emerged, including fiber optic cable, satellite, WiFi/WiMax, and the nascent provisioning of broadband over power lines, which has particular promise in remote areas which may have no other option for access.

Broadband speeds are increasing, and there’s every expectation that given current technology and current investment plans, the National Broadband Plan’s goal of 100 million Americans with access to 100 Mbps Internet speeds by 2020 will be reached without any public spending.

The wireless world, however, is a different place.  After years of underutilization of 3G networks by consumers who saw no compelling or “killer” apps worth using, the latest generation of portable computing devices (iPhone, Android, Blackberry, Windows) has reached the tipping point and well beyond.  Existing networks in many locations are overcommitted, and political resistance to additional cell tower and other facilities deployment is exacerbating the problem.

Just last week, a front page story in the San Francisco Chronicle reported on growing tensions between cell phone providers and residents who want new towers located anywhere but near where they live, go to school, shop, or work.  CTIA-The Wireless Association announced that it would no longer hold events in San Francisco after the city’s Board of Supervisors, with the support of Mayor Gavin Newsom, passed a “Cell Phone Right to Know” ordinance that requires retail disclosure of a phone’s specific absorption rate of emitted radiation.

Given the likely continued lag in cellular deployment, it seems prudent to consider less stringent restrictions on network management for wireless than for wireline, allowing providers, for example, to limit or even ban outright certain high-bandwidth data services, notably video and peer-to-peer file sharing, that the network may simply be unable to support.  But the proposed open Internet rules will have none of that.

The NOI does note some of the significant differences between wired and wireless (¶ 102), but also reminds us that the limited spectrum for wireless signals affords the Commission special powers to regulate the business practices of providers. (¶ 103)  Under Title III of the Communications Act, which applies to wireless, the FCC has and makes use of the power to ensure spectrum uses serve a broad “public interest.”

In some ways, then, Title III gives the Commission powers to regulate wireless broadband access beyond what they would get from a reclassification to Title II.  So even if the FCC were to choose the first option and leave the current classification scheme alone, wireless broadband providers might still be subject to open Internet rules under Title III.  It would be ironic if the only broadband providers whose network management practices were to be scrutinized were those who needed the most flexibility.  But irony is nothing new in communications law.

One power, however, might elude the FCC, and therefore might give further weight to a scheme that would regulate wireless broadband under both Title III and Title II.  Title III does not include the extension of Universal Service to wireless broadband (¶ 103).  This is a particular concern given the increased reliance of under-served and at-risk communities on cellular technologies for all their communications needs.  (See the recent Pew Internet & American Life Project study for details.)

While the NOI asks for comment on whether and to what extent the FCC ought to treat wireless broadband differently from wired services, or on a later timetable, the thrust of this section makes clear the Commission is thinking of more, not less, regulation for the struggling cellular industry.

Greed:  Universal Service taxes – So what about Universal Service?  In an effort to justify the Title II reclassification as something more than just a fix to the Comcast case, the FCC has (with some hedging) suggested that the D.C. Circuit’s ruling also calls into question the Commission’s ability to implement the National Broadband Plan, published only a few weeks prior to the decision in Comcast.

At a conference sponsored by the Stanford Institute for Economic Policy Research that I attended, Chairman Genachowski was emphatic that nothing in Comcast constrained the FCC’s ability to execute the plan.

But in the run-up to the NOI, the rhetoric has changed.  Here the Chairman in his separate statement says only that “the recent court decision did not opine on the initiatives and policies that we have laid out transparently in the National Broadband Plan and elsewhere.”

Still, it’s clear that whether out of genuine concern or just for more political and legal cover, the Commission is trying to make the case that Comcast casts serious doubt on the Plan, and in particular the FCC’s recommendations for reform of the Universal Service Fund (USF).  (¶¶ 32-38).

Though the NOI politely recites the legal theories posed by several analysts for how USF reform could be done without any reclassification, the FCC is skeptical.  For the first and only time in the NOI, the FCC asks not for general comments on its existing authority to reform Universal Service but for the kind of evidence that would be “needed to successfully defend against a legal challenge to implementation of the theory.”

There is, of course, a great deal at stake.  The USF is fed by taxes paid by consumers as part of their telephone bills, and is used to subsidize telephone service to those who cannot otherwise afford it.  Some part of the fund is also used for the “E-Rate” program, which subsidizes Internet access for schools and libraries.

Like other parts of the fund, E-Rate has been the subject of considerable corruption.  As I noted in Law Four of “The Laws of Disruption,” a 2005 Congressional oversight committee labeled the then $2 billion E-Rate program, which had already spawned numerous criminal convictions for fraud, a disgrace, “completely [lacking] tangible measures of either effectiveness or impact.”

Today the USF collects $8 billion annually in consumer taxes, and there’s little doubt that the money is not being spent in a particularly efficient or useful way.  (See, for example, Cecilia Kang’s Washington Post article this week, “AT&T, Verizon get most federal aid for phone service.”)  The FCC is right to call for USF reform in the National Broadband Plan, and to propose repurposing the USF to subsidize basic Internet access as well as dial tone.  The needs for universal Internet access—employment, education, health care, government services, etc.—are obvious.

But what has this to do with Title II reclassification?  There’s no mention in the NOI of plans to extend the class of services or service providers obliged to collect the USF tax, which is to say there’s nothing to suggest a new tax on Internet access.  But Recommendation 8.10 of the NBP encourages just that.  The Plan recommends that Congress “broaden the USF contributions base” by finding some method of taxing broadband Internet customers.  (Congress has so far steadfastly resisted and preempted efforts to introduce any taxes on Internet access at the federal and state level.)

If Congress agreed with the FCC, broadband Internet access would someday be subject to taxes to help fund a reformed USF.  The bigger the category of providers included under Title II (the most likely collectors of such a tax), the bigger the USF.  The temptation to broaden the definition of affected companies from “facilities based” to something, as the California Public Utilities Commission put it, more “flexible,” would be tantalizing.

***

But other than these minor quibbles, the NOI offers nothing to worry about!

Viacom v. YouTube: The Principle of Least Cost Avoidance

I’m late to the party, but I wanted to say a few things about the District Court’s decision in the Viacom v. YouTube case this week.  This will be a four-part post, covering:

1.  The holding

2.  The economic principle behind it

3.  The next steps in the case

4.  A review of the errors in legal analysis and procedure committed by reporters covering the case

I’ve written before (see “Two Smoking Guns and a Cold Case”, “Google v. Everyone” and “The Revolution will be Televised…on YouTube”) about this case, in which Viacom back in 2007 sued YouTube and Google (which owns YouTube) for $1 billion in damages, claiming massive copyright infringement of Viacom content posted by YouTube users.

There’s no question of the infringing activity or its scale.  The only question in the case is whether YouTube, as the provider of a platform for uploading and hosting video content, shares any of the liability of those among its users who uploaded Viacom content (including clips from Comedy Central and other television programming) without permission.

The more interesting questions raised by the ascent of new video sites aren’t addressed in the opinion.  Whether the users understood copyright law, and whether their intent in uploading their favorite clips from Viacom programming was to promote Viacom rather than to harm it, were not considered.  Indeed, whether on balance Viacom was helped more than harmed by the illegal activity, and how either should be calculated under current copyright law, are not relevant to this decision, and are saved for another day and perhaps another case.

That’s because Google moved for summary judgment on the basis of the Digital Millennium Copyright Act’s “safe harbor” provisions, which immunize service providers from any kind of attributed or “secondary” liability for user behavior when certain conditions are met.  Most important, a service provider can dock safe from liability only if it can show that it:

– did not have “actual knowledge that the material…is infringing,” or is “not aware of facts or circumstances from which infringing activity is apparent” and

– upon obtaining such knowledge or awareness “acts expeditiously to remove…the material” and

– does not “receive a financial benefit directly attributable to the infringing activity, in a case in which the service provider has the right and ability to control such activity,” and

– upon notification of the claimed infringement, “responds expeditiously to remove…the material that is claimed to be infringing….”

Note that all four of these elements must be satisfied to benefit from the safe harbor.

The question for Judge Stanton to decide on YouTube’s motion for summary judgment was whether YouTube met all of these conditions, and he has ruled that it did.

1.  The Slam-Dunk for Google

The decision largely comes down to an interpretation of what phrases like “the material” and “such activity” mean in the above-quoted sections of the DMCA.

Indeed, the entire opinion can be boiled down to one sentence on page 15.  After reviewing the legislative history of the DMCA at length, Judge Stanton concludes that the “tenor” of the safe harbor provisions leads him to interpret infringing “material” and “activity” to mean “specific and identifiable infringements of particular individual items.”

General knowledge, which YouTube certainly had, that some of its users were (and still are) uploading content protected by copyright law without permission, is not enough to defeat the safe harbor and move the case to a determination of whether or not secondary liability can be shown.  “Mere knowledge of prevalence of such activity in general,” Judge Stanton writes, “is not enough.”

To defeat a safe harbor defense at the summary judgment stage, in other words, a content owner must show that the service provider knew or should have known about specific instances of infringement.  Such knowledge could come from a service provider hosting subsites with names like “Pirated Content” or other “red flags.”  But in most cases, as here, the service provider would not be held to know about specific instances of infringement until informed of them, most often from takedown notices sent by copyright holders themselves.

Whether ad revenue constitutes “direct financial benefit” was not tested, because, again, that provision only applies to “activity” the service provider has the right to control.  “Activity,” as Judge Stanton reads it, also refers to specific instances of illegal content distribution.

As Judge Stanton notes, YouTube users currently post 24 hours of video content every minute, making it difficult if not impossible, as a practical matter, for YouTube to know which clips are not authorized by rights holders.  And when Viacom informed the site of some 100,000 potentially infringing clips, YouTube removed nearly all of them within a day.  That is how the DMCA was intended to work, according to Judge Stanton, and indeed demonstrates that it is working just fine.

Viacom, of course, is free to pursue the individuals who posted its content without permission, but everyone should know by now that for many reasons that’s a losing strategy.

2.  The Least-Cost Avoider Principle

On balance, Judge Stanton is reading what is clearly an ambiguous statute with a great deal of common sense.  To what extent the drafters of the DMCA intended the safe harbor to apply to general vs. specific knowledge is certainly not clear from the plain language, nor, really, from the legislative history.  (Some members of the U.S. Supreme Court believe strongly that legislative history, in any case, is irrelevant in interpreting a statute, even if ambiguous.)

To bolster his interpretation that the safe harbor protects all but specific knowledge of infringement, interestingly, Judge Stanton points out that this case is similar to one decided a few months ago in the Second Circuit.  In that case, the court refused to hold eBay secondarily liable for trademark infringement based on customer listings of fake Tiffany products.

Though trademark and copyright law are quite different, the analogy is sensible.  In both cases, the question comes down to one of economic efficiency.  Which party, that is, is in the best position to police the rights being violated?

Here’s how the economic analysis might go.  Given the existence of new online marketplaces and video sharing services, and given the likelihood and ease with which individuals can use those services to violate information rights (intentionally or otherwise, for profit or not), the question for legislators and courts is how to minimize the damage to the information rights of some while still preserving the new value to information in general that such services create.

For there is also no doubt that the vast majority of eBay listings and YouTube clips are posted without infringing the rights of any third party, and that the value of such services, though perhaps not easily quantifiable, is immense.  EBay has created liquidity in markets that were too small and too disjointed to work efficiently offline.  YouTube has enabled a new generation of users with increasingly low-cost video production tools to distribute their creations, get valuable feedback and, increasingly, make money.

That these sites (and others, including Craigslist) are often Trojan Horses for illegal activities could lead legislators to ban them outright, but that clearly gets the cost-benefit equation wrong.  A ban would generate too much protection.

At the same time, throwing up one’s hands and saying that a certain class of rights-holders must accept all the costs of damage without any means of reducing or eliminating those costs, would be overly generous in the other direction.  Neither users, service providers, nor rights holders would have any incentives to police user behavior.  The basic goals of copyright and trademark might be seriously damaged as a result.

The goal of good legislation in situations like this—where overall benefit outweighs individual harm and where technology is changing the equation rapidly–is to produce rules that are most likely to get the balance right and do so with the least amount of expensive litigation.  The DMCA provisions described above are one attempt at creating such rules.

But those rules, given the uncertainties of emerging technologies and the changing behaviors of users, can’t possibly give judges the tools to decide every case with precision.  Such rules must be at least a little ambiguous (if not a lot).  Judges, as they have done for centuries, must apply other, objective interpretive tools to help decide individual cases even as the targets keep moving.

Judge Stanton’s interpretation of the safe harbor provisions follows, albeit implicitly, one of those neutral tools, the same one applied by the Second Circuit in the eBay case.  And that is the principle of the least-cost avoider.

This principle encourages judges to interpret the law, where possible, so that the burden of reducing harmful behavior falls on the party in the best position, economically, to avoid it.  That way, as parties in similar situations evaluate the risk of liability in the future, they will be more likely to choose, in advance, behaviors that reduce not only the risk of damages but also the cost of further litigation.

In the future, if Judge Stanton’s ruling stands, rights holders will be encouraged to police video sites more carefully.  Service providers such as YouTube will be encouraged to respond quickly to legitimate demands to remove infringing content.

Given the fact that activities harmful to rights holders are certain to occur, in other words, the least-cost avoider principle says that a judge should rule in a way that puts the burden of minimizing the damage on the party who can most efficiently avoid it.  In this case, the choice would be among YouTube (preview all content before posting and ensure legal rights have been cleared), Viacom (monitor sites carefully and quickly demand takedown of infringing content), and the users themselves (don't post unauthorized content without expecting to pay damages or face possible criminal sanctions).

Here, the right answer economically is Viacom, the rights holder who is directly harmed by the infringing behavior.

That may seem unfair from a moral standpoint.  After all, Viacom is the direct victim both of the users' clearly unlawful behavior and of YouTube's failure, as the enabler of those users, to stop it.  Why should the victim be held responsible for making sure it is not caused further damage in the future?

But there's a certain economic logic to that decision, though one difficult to quantify.  (Judge Stanton made no effort to do so; indeed, he did not invoke the least-cost avoider principle explicitly.)  The grant of a copyright or a trademark is the grant of a monopoly on a certain class of information, a grant that itself comes with inherent economic inefficiencies, accepted in the service of overall social value: encouraging investment in creative works.

Part of the cost of having such a valuable monopoly is the cost of policing it, even in new media and new services that the rights holder may not have any particular interest in using itself.

By interpreting the DMCA as protecting service providers from mere general knowledge of infringing behavior, Judge Stanton has signaled that Viacom can police YouTube more efficiently than YouTube can.  Why?  For one thing, Viacom has the stronger incentive to ensure unauthorized content stays off the site.  It alone also knows both what content it holds rights to and when that content appears without authorization.  (Discovery turned up several examples of content Viacom had ordered YouTube to remove that, it turned out, had been posted by Viacom itself or by its agents masquerading as users in order to build buzz.)

The cost of monitoring and stopping unauthorized posting is not negligible, of course.  But YouTube, eBay and other service providers increasingly provide tools to make the process easier, faster, and cheaper for rights holders.  They may or may not be obligated to do so as a matter of law; for now, their decision to do so represents an organic and efficient form of extra-legal rulemaking that Judge Stanton is eager to encourage.

No matter what, someone has to bear the bulk of the cost of monitoring and reporting violations.  Viacom can do it cheaper, and can more easily build that cost into the price it charges for authorized copies of its content.

And where it cannot easily issue takedown orders to large, highly-visible service providers like YouTube, it retains the option, admittedly very expensive, to sue the individuals who actually infringed.  It can also try to invoke the criminal aspect of copyright law, and get the FBI (that is, the taxpayer) to absorb the cost.

To rule the other way, denying YouTube its safe harbor, would encourage service providers to overspend on deterrence of infringing behavior.  In response, perhaps YouTube and other sites would require, before posting videos, that users provide legally binding and notarized documentation that the user either owns the video or has a license to post it.  Obtaining such agreements, not to mention evaluating them for accuracy, would effectively mean the end of video sites.  Denying the safe harbor based on general knowledge, to put it another way, would effectively interpret the DMCA as a ban on video sites.

That would be cheaper for Viacom, of course, but would lead to overall social loss.  Right and wrong, innocence and guilt, are largely excluded from this kind of analysis, though certainly not from the rhetoric of the parties.  And remember that actual knowledge or general awareness of specific acts of infringement would, according to Judge Stanton's rule, defeat the safe harbor.  In that case, to return to the economic terminology, the cost of damages—or, if you prefer, some of the blame—would shift back onto YouTube.

3.  What’s Next?

Did Judge Stanton get it right as a matter of information economics?  It appears that the answer is yes.  But did he get it right as a matter of law—in this case, of the DMCA?

That remains to be seen.

Whether one likes the results or not, as I've written before, summary judgment rulings by district courts are never the last word in complex litigation between large, well-funded parties.  That is especially so here, where the lower court's interpretation of a federal law is largely untested in the circuit and, indeed, in any circuit.

Judge Stanton cites as authority for his view of the DMCA a number of other lower court cases, many of them from the Ninth Circuit.  But as a matter of federal appellate law, Ninth Circuit cases are not binding precedent in the Second Circuit, where Judge Stanton sits.  And the opinions of other district (that is, lower) courts are not binding precedent even within a circuit; the parties can cite them only as persuasive authority.  (A Ninth Circuit case involving Veoh is currently on appeal; in the lower court, that service provider won on a "safe harbor" argument similar to Google's.)

So this case will certainly head for appeal to the Second Circuit, and perhaps from there to the U.S. Supreme Court.  But a Supreme Court review of the case is far from certain.  Appeals to the circuit court are the right of the losing party.  A petition to the Supreme Court, on the other hand, is accepted at the Court’s discretion, and the Court turns down the vast majority of cases that it is asked to hear, often without regard to the economic importance or newsworthiness of the case.  (The Court refused to hear an appeal in the Microsoft antitrust case, for example, because the lower courts largely applied existing antitrust precedents.)

A circuit court reviewing summary judgment will make a fresh inquiry into the law, accepting the facts alleged by Viacom (the losing party below) as if they were all proven.  If the Second Circuit follows Judge Stanton’s analogy to the eBay case, Google is likely to prevail.

If the appellate court rejects Judge Stanton’s view of specificity, the case will return to the lower court and move on, perhaps to more summary judgment attempts by both parties and, failing that, a trial.  More likely, at that point, the parties will reach a settlement, or an overall licensing agreement, which may have been the point of bringing this litigation in the first place.  (A win for Viacom, as in most patent cases, would have given the company better negotiating leverage.)

4.  Getting it Right or Wrong in the Press

That brief review of federal appellate practice is entirely standard—it has nothing to do with the facts of this case, the parties, the importance of the decision, or the federal law in question.

Which makes it all the more surprising when journalists who regularly cover the legal news of particular companies continually get it wrong when describing what has happened and/or what happens next.

Last and perhaps least, here are a few examples from some of the best-read sources:

The New York Times – Miguel Helft, who covers Google on a regular basis, commits some legal hyperbole in saying that Judge Stanton "threw out" Viacom's case, and that "the ruling" (that is, this opinion) could have "major implications for …scores of Internet sites."  The appellate court decision will be the important one, and even that, technically, will apply only to cases brought in the Second Circuit.  The lower court's decision, even if upheld, will have no precedential effect on future litigation.  Helft also quotes statements from counsel at both Viacom and Google that are filled with legal errors, though perhaps understandably so.

The Wall Street Journal – Sam Schechner and Jessica E. Vasellaro make no mistakes in their report of the decision.  They correctly explain what summary judgment means, and summarize the ruling without distorting it.  Full marks.

The Washington Post – Cecilia Kang, who covers technology policy for the Post, incorrectly characterizes Judge Stanton’s ruling as a “dismissal” of Viacom’s lawsuit.  A dismissal, as opposed to the granting of a motion for summary judgment, generally happens earlier in litigation, and signals a much weaker case, often one for which the court finds it has no jurisdiction or which, even if all the alleged facts are true, doesn’t amount to behavior for which a legal remedy exists.  Kang repeats the companies’ statements, but also adds a helpful quote from Public Knowledge’s Sherwin Siy about the balance of avoiding harms.

The National Journal – At the website of this legal news publication, Juliana Gruenwald commits no fouls in this short piece, with an even better quote from PK’s Siy.

CNET News.com – Tech news site CNET’s media reporter Greg Sandoval suggests that “While the case could continue to drag on in the appeals process, the summary judgment handed down in the Southern District of New York is a major victory for Google . . . .”  This is odd wording, as the case will certainly “drag on” to an appeal to the Second Circuit.  (A decision by the Second Circuit is perhaps a year or more away.)  Again, a district court decision, no matter how strongly worded, does not constitute a “major victory” for the prevailing party.

Sandoval (who, it must be said, posted his story quite quickly) also exaggerates the sweep of Google's argument and the judge's holding.  He writes, "Google held that the DMCA's safe harbor provision protected it and other Internet service providers from being held responsible for copyright infringements committed by users.  The judge agreed."  But Google argued only that it (not other providers) was protected, and protected only from user infringements it didn't know about specifically.  That is the argument with which Judge Stanton agreed.

Perhaps these are minor infractions.  You be the judge.

Updates to the "Media" Page

I’ve added almost twenty new posts to the Media Page from April and May. These were busy months for those interested in the dangerous intersection of technology and policy, the theme of The Laws of Disruption.

A major court decision upended the Federal Communications Commission's efforts to pass new net neutrality regulations, leading the Commission to begin execution of its "nuclear option": the reclassification of Internet access under ancient rules written for the old telephone monopoly.  While I support the principles of net neutrality, I am increasingly concerned about efforts by the FCC to appoint itself the "smart cop" on the Internet beat, as Chairman Julius Genachowski put it last fall.

As consumer computing outstripped business computing for the first time, privacy has emerged as a leading concern of both users and mainstream media sources.  Not surprisingly, legal developments in information security go hand-in-hand with conversations about privacy policy and regulation, and I have been speaking and commenting to the press extensively on these topics.

The new entries cover the full range of topics, including copyright, identity theft, e-commerce, and new criminal laws for social networking behaviors, as well as privacy, security, and communications policy.

In the last few months, I have continued writing not only for this blog but for the Technology Liberation Front, the Stanford Law School Center for Internet & Society, and for CNET.  I’ve also written op-eds for The Orange County Register, The Des Moines Register, and Info Tech & Telecom News.

I’ve appeared on CNN, Fox News, and National Public Radio, and have been interviewed by print media sources as varied as El Pais, The Christian Science Monitor, TechCrunch and Techdirt.

My work has also been quoted by a variety of business and mainstream publications, including The Atlantic, Reason, Fortune and Fast Company.

As they say, may you live in interesting times!