Category Archives: Security

After the deluge, more deluge

If I ever had any hope of “keeping up” with developments in the regulation of information technology—or even the nine specific areas I explored in The Laws of Disruption—that hope was lost long ago.  The last few months I haven’t even been able to keep up just sorting the piles of printouts of stories I’ve “clipped” from just a few key sources, including The New York Times, The Wall Street Journal, CNET and The Washington Post.

I’ve just gone through a big pile of clippings that cover April-July.  A few highlights:  In May, YouTube surpassed 2 billion daily hits.  Today, Facebook announced it has more than 500,000,000 members.   Researchers last week demonstrated technology that draws device power from radio waves.

If the size of my stacks is any indication of activity level, the most contentious areas of legal debate are, not surprisingly, privacy (Facebook, Google, Twitter et al.), infrastructure (Net neutrality, Title II and the wireless spectrum crisis), copyright (the secret ACTA treaty, LimeWire, Viacom v. YouTube), free speech (China, Facebook “hate speech”), and cyberterrorism (Sen. Lieberman’s proposed legislation expanding executive powers).

There was relatively little development in other key topics, notably antitrust (Intel and the Federal Trade Commission appear close to resolution of the pending investigation; Comcast/NBC merger plodding along).  Cyberbullying, identity theft, spam, e-personation and other Internet crimes have also gone eerily, or at least relatively, quiet.

Where Are We?

There’s one thing that all of the high-volume topics have in common—they are all moving increasingly toward a single question:  the appropriate balance between private and public control over the Internet ecosystem.  When I first started researching cyberlaw in the mid-1990s, that was truly an academic question, one discussed by very few academics.

But in the interim, TCP/IP, with no central authority or corporate owner, has pursued a remarkable and relentless takeover of every other networking standard.  The Internet’s packet-switched architecture has grown from simple data file exchanges to email, the Web, voice, video, social networking and the increasingly hybrid forms of information exchange performed by consumers and businesses.

As its importance to both economic and personal growth has expanded, anxiety over how and by whom that architecture is managed has understandably developed in parallel.

(By the way, as Morgan Stanley analyst Mary Meeker pointed out this spring, consumer computing has overtaken business computing as the dominant use of information technology, with a trajectory certain to open a wider gap in the future.)

The locus of the infrastructure battle today, of course, is in the fundamental questions being asked about the very nature of digital life.  Is the network a piece of private property operated subject to the rules of the free market, the invisible hand, and a wondrous absence of transaction costs?  Or is it a fundamental element of modern citizenship, overseen by national governments following their most basic principles of governance and control?

At one level, that fight is visible in the machinations between governments (U.S. vs. E.U. vs. China, e.g.) over what rules apply to the digital lives of their citizens.  Is the First Amendment, as John Perry Barlow famously said, only a local ordinance in Cyberspace?  Do E.U. privacy rules, being the most expansive, become the default for global corporations?

At another level, the lines have been drawn even more sharply between public and private parties, and in side-battles within those camps.  Who gets to set U.S. telecom policy—the FCC or Congress, federal or state governments, public sector or private sector, access providers or content providers?  What does it really mean to say the network should be “nondiscriminatory,” or to treat all packets anonymously and equally, following a “neutrality” principle?

As individuals, are we consumers or citizens, and in either case how do we voice our view of how these problems should be resolved?  Through our elected representatives?  Voting with our wallets?  Through the media and consumer advocates?

Not to sound too dramatic, but there’s really no way to see these fights as anything less than a struggle for the soul of the Internet.  As its importance has grown, so have the stakes—and the immediacy—in establishing the first principles, the Constitution, and the scriptures that will define its governance structure, even as it continues its rapid evolution.

The Next Wave

Network architecture and regulation aside, the other big problems of the day are not as different as they seem.  Privacy, cybersecurity and copyright are all proxies in that larger struggle, and in some sense they are all looking at the same problem through a slightly different (but equally mis-focused) lens.  There’s a common thread and a common problem:  each of them represents a fight over information usage, access, storage, modification and removal.  And each of them is saddled with terminology and a legal framework developed during the Industrial Revolution.

As more activities of all possible varieties migrate online, for example, very different problems of information economics have converged under the unfortunate heading of “privacy,” a term loaded with 19th and 20th century baggage.

Security is just another view of the same problems.  And here too the debates (or worse) are rendered unintelligible by the application of frameworks developed for a physical world.  Cyberterror, digital warfare, online Pearl Harbor, viruses, Trojan Horses, attacks—the terminology of both sides assumes that information is a tangible asset, to be secured, protected, attacked, destroyed by adverse and identifiable combatants.

In some sense, those same problems are at the heart of struggles over whether to apply the architecture of copyright created during the Enlightenment of the 17th and 18th centuries, when information of necessity had to take physical form to be used widely.  Increasingly, governments and private parties with vested interests are looking to ISPs and content hosts to act as the police force for so-called “intellectual property” such as copyrights, patents, and trademarks.  (Perhaps because it’s increasingly clear that national governments and their physical police forces are ineffectual or worse.)

Again, the issues are of information usage, access, storage, modification and removal, though the rhetoric adopts the unhelpful language of pirates and property.

So, in some weird and at the same time obvious way, net neutrality = privacy = security = copyright.  They’re all different and equally unhelpful names for the same (growing) set of governance issues.

At the heart of these problems—both of form and substance—is the inescapable fact that information is profoundly different from traditional property.  It is not like a bushel of corn or a barrel of oil.  For one thing, it never has been tangible, though when it needed to be copied into media to be distributed it was easy enough to mistake the medium for the message.

The information revolution’s revolutionary principle is that information in digital form is at last what it was always meant to be—an intangible good, which follows a very different (for starters, a non-linear) life-cycle.  The ways in which it is created, distributed, experienced, modified and valued don’t follow the same rules that apply to tangible goods, try as we do to force-fit those rules.

Which is not to say there are no rules, or that there can be no governance of information behavior.  And certainly not to say information, because it is intangible, has no value.  Only that for the most part, we have no real understanding of what its unique physics are.  We barely have vocabulary to begin the analysis.

Now What?

Terminology aside, I predict with the confidence of Moore’s Law that businesses and consumers alike will increasingly find themselves more involved than anyone wants to be in the creation of a new body of law better suited to the realities of digital life.  That law may take the traditional forms of statutes, regulations, and treaties, or follow even older models of standards, creeds, ethics and morals.  Much of it will continue to be engineered, coded directly into the architecture.

Private enterprises in particular can expect to be drawn deeper (kicking and screaming perhaps) into fundamental questions of Internet governance and information rights.

Infrastructure and application providers, as they take on more of the duties historically thought to be the domain of sovereigns, are already being pressured to maintain the environmental conditions for a healthy Internet.  Increasingly, they will be called upon to define and enforce principles of privacy and human rights, to secure the information environment from threats both internal (crime) and external (war), and to protect “property” rights in information on behalf of “owners.”

These problems will continue to be different and the same, and will be joined by new problems as new frontiers of digital life are opened and settled.  Ultimately, we’ll grope our way toward the real question:  what is the true nature of information and how can we best harness its power?

Cynically, it’s lifetime employment for lawyers.  Optimistically, it’s a chance to be a virtual founding father.  Which way you look at it will largely determine the quality of the work you do in the next decade or so.

The Seven Deadly Sins of Title II Reclassification (NOI Remix)

Better late than never, I’ve finally given a close read to the Notice of Inquiry issued by the FCC on June 17th.  (See my earlier comments, “FCC Votes for Reclassification, Dog Bites Man”.)  In some sense there was no surprise to the contents; the Commission’s legal counsel and Chairman Julius Genachowski had both published comments over a month before the NOI that laid out the regulatory scheme the Commission now has in mind for broadband Internet access.

Chairman Genachowski’s “Third Way” comments proposed an option that he hoped would satisfy both extremes.  The FCC would abandon efforts to find new ways to meet its regulatory goals using “ancillary jurisdiction” under Title I (an avenue the D.C. Circuit had wounded, but hadn’t actually exterminated, in the Comcast decision), but at the same time would not go as far as some advocates urged and put broadband Internet completely under the telephone rules of Title II.

Instead, the Commission would propose a “lite” version of Title II, based on a few guiding principles:

  • Recognize the transmission component of broadband access service—and only this component—as a telecommunications service;
  • Apply only a handful of provisions of Title II (Sections 201, 202, 208, 222, 254, and 255) that, prior to the Comcast decision, were widely believed to be within the Commission’s purview for broadband;
  • Simultaneously renounce—that is, forbear from—application of the many sections of the Communications Act that are unnecessary and inappropriate for broadband access service; and
  • Put in place up-front forbearance and meaningful boundaries to guard against regulatory overreach.

The NOI pretends not to take a position on any of three possible options – (1) stick with Title I and find a way to make it work, (2) reclassify broadband and apply the full suite of Title II regulations to Internet access providers, or (3) compromise on the Chairman’s Third Way, applying Title II but forbearing from all but the six sections noted above—at least, for now (see ¶ 98).  It asks for comments on all three options, however, and for a range of extensions and exceptions within each.

I’ve written elsewhere (see “Reality Check on ‘Reclassifying’ Broadband” and  “Net Neutrality and the Inconvenient Constitution”) about the dubious legal foundation on which the FCC rests its authority to change the definition of “telecommunications services” to suddenly include broadband Internet, after successfully (and correctly) convincing the U.S. Supreme Court that it did not.  That discussion will, it seems, have to wait until its next airing in federal court following inevitable litigation over whatever course the FCC takes.

This post deals with something altogether different—a number of startling tidbits that found their way into the June 17th NOI.  As if Title II weren’t dangerous enough, there are hints and echoes throughout the NOI of regulatory dreams to come.  Beyond the hubris of reclassification, here are seven surprises buried in the 116 paragraphs of the NOI—its seven deadly sins.  In many cases the Commission is merely asking questions.  But the questions hint at a much broader—indeed overwhelming—regulatory agenda that goes beyond Net Neutrality and the undoing of the Comcast decision.

Pride:  The folly of defining “facilities-based” provisioning – The FCC is struggling to find a way to apply reclassification only to the largest ISPs – Comcast, AT&T, Verizon, Time Warner, etc.  But the statutory definition of “telecommunications” doesn’t give it much help.  So the NOI invents a new distinction, referred to variously as “facilities-based” providers (¶ 1), providers of an actual “physical connection” (¶ 106), or the “transmission component” of a provider’s consumer offering, to which alone Title II would apply (¶ 12).

All the FCC has in mind here is “a commonsense definition of broadband Internet service,” (¶ 107) (which they never provide), but in any case the devil is surely in the details.  First, it’s not clear that making that distinction would actually achieve the goal of applying the open Internet rules—network management, good or evil, largely occurs well above the transmission layers in the IP stack.
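The point that traffic management lives above the transmission layer is easy to see in code.  Here is a minimal sketch (my own illustration, with made-up port rules, not any provider's actual system) of what discriminating among packets requires:  reading transport-layer fields such as port numbers, which sit well above the physical wire that a "facilities-based" definition would regulate.

```python
import struct

def classify(ip_packet: bytes) -> str:
    """Classify a raw IPv4 packet by its transport-layer destination port."""
    version_ihl = ip_packet[0]
    ihl = (version_ihl & 0x0F) * 4           # IPv4 header length in bytes
    protocol = ip_packet[9]                  # 6 = TCP, 17 = UDP
    if protocol not in (6, 17):
        return "other"
    # Source and destination ports are the first two 16-bit fields
    # of both the TCP and UDP headers.
    _, dst_port = struct.unpack("!HH", ip_packet[ihl:ihl + 4])
    if dst_port in (80, 443):
        return "web"
    if dst_port == 6881:                     # a port commonly used by BitTorrent
        return "p2p"
    return "other"

# Build a toy IPv4 + TCP header: version/IHL byte, protocol byte at
# offset 9, then the ports at the start of the TCP header.
header = bytearray(24)
header[0] = 0x45        # IPv4, 20-byte header
header[9] = 6           # TCP
header[20:24] = struct.pack("!HH", 54321, 443)
print(classify(bytes(header)))   # → web
```

The rules here are invented for illustration, but the structural point stands:  nothing in the "physical connection" itself distinguishes video from email; the distinction is made by inspecting layers the transmission-only definition doesn't reach.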

The sin here, however, is that of unintentional over-inclusion.  If Title II is applied to “facilities-based” providers, it could sweep in application providers who increasingly offer connectivity as a way to promote usage of their products.

Limiting the scope of reclassification just to “facilities-based” providers who sell directly to consumers doesn’t eliminate the risk of over-inclusion.  Some application providers, for example, offer a physical connection in partnership with an ISP (think Yahoo and Covad DSL service) and many large application providers own a good deal of fiber optic cable that could be used to connect directly with consumers.  (Think of Google’s promise to build gigabit test beds for select communities.)  Municipalities are still working to provide WiFi and WiMax connections, again in cooperation with existing ISPs.  (EarthLink planned several of these before running into financial and, in some cities, political trouble.)

There are other services, including Internet backbone provisioning, that could also fall into the Title II trap (see ¶ 64).  Would companies, such as Akamai, which offer caching services, suddenly find themselves subject to some or all of Title II?  (See ¶ 58)  How about Internet peering agreements (unmentioned in the NOI)?  Would these private contracts be subject to Title II as well?  (See ¶ 107)

Lust:  The lure of privacy, terrorism, crime, copyright – Though the express purpose of the NOI is to find a way to apply Title II to broadband, the Commission just can’t help lusting after some additional powers it appears interested in claiming for itself.  Though the Commissioners who voted for the NOI are adamant that the goal of reclassification is not to regulate “the Internet” but merely broadband access, the siren call of other issues on the minds of consumers and lawmakers may prove impossible to resist.

Recognizing, for example, that the Federal Trade Commission has been holding hearings all year on the problems of information privacy, the FCC now asks for comments about how it can use Title II authority to get into the game (¶ 39, 52, 82, 83, 96), promising of course to “complement” whatever actions the FTC is planning to take.

Cyberattacks and other forms of terrorism are also on the Commission’s mind.  In his separate statement, for example, Chairman Genachowski argues that the Comcast decision “raises questions about the right framework for the Commission to help protect against cyber-attacks.”

The NOI includes several references to homeland security and national defense—this in the wake of publicity surrounding Sen. Lieberman’s proposed law to give the President extensive emergency powers over the Internet.  (See Declan McCullagh, “Lieberman Defends Emergency Net Authority Plan.”)  Lieberman’s bill puts the power squarely in the Department of Homeland Security—is the FCC hoping to use Title II to capture some of that power for itself?

And beyond shocking acts of terrorism, does the FCC see Title II as a license to require ISPs to help enforce other, lesser crimes, including copyright infringement, libel, bullying and cyberstalking, e-personation—and the rest?  Would Title II give the agency the ability to impose its content “decency” rules, limited today merely to broadcast television and radio, to Internet content, as Congress has unsuccessfully tried to help the Commission do on three separate occasions?

(Just as I wrote that sentence, the U.S. Court of Appeals for the Second Circuit ruled that the FCC’s recent effort to craft more aggressive indecency rules for fleeting expletives violates the First Amendment.  The Commission is having quite a bad year in the courts!)

Anger:  Sharing the pain of CALEA – That last paragraph is admittedly speculation.  The NOI contains no references to copyright, crime, or indecency.  But here’s a law enforcement sin that isn’t speculative.  The NOI reminds us that separate from Title II, the FCC is required by law to enforce the Communications Assistance for Law Enforcement Act (CALEA). (¶ 89) CALEA is part of the rich tapestry of federal wiretap law, and requires “telecommunications carriers” to implement technical “back doors” that make it easier for federal law enforcement agencies to execute wiretapping orders.  Since 2005, the FCC has held that all facilities-based providers are subject to CALEA.

Here, the Commission assumes that reclassification would do nothing to change the broader application of CALEA already in place, and seeks comment on “this analysis.”  (¶ 89)  The Commission wonders how that analysis impacts its forbearance decisions, but I have a different question.  Assuming the definition of “facilities-based” Internet access providers is as muddled as it appears (see above), is the Commission intentionally or unintentionally extending the coverage of CALEA to anyone selling Internet “connectivity” to consumers, even those for whom that service is simply in the interest of promoting applications?

Again, would residents of communities participating in Google’s fiber optic test bed awake to discover that all of that wonderful data they are now pumping through the fiber is subject to capture and analysis by any law enforcement officer holding a wiretapping order?  Oops?

Gluttony:  The insatiable appetite of state and local regulators – Just when you think the worst is over, there’s a nasty surprise waiting at the end of the NOI.  Under Title II, the Commission reminds us, many aspects of telephone regulation are not exclusive to the FCC but are shared with state and even local regulatory agencies.

Fortunately, to avoid the catastrophic effects of imposing perhaps hundreds of different and conflicting regulatory schemes on broadband Internet access, the FCC has the authority to preempt state and local regulations that conflict with FCC “decisions,” and to preempt state application of those parts of Title II from which the FCC forbears.

But here’s the billion dollar question, which the NOI saves for the very last (¶ 109):  “Under each of the three approaches, what would be the limits on the states’ or localities’ authority to impose requirements on broadband Internet service and broadband Internet connectivity service?”

What indeed?  One of the provisions the FCC would not apply under the Third Way, for example, is § 253, which gives the Commission the authority to “preempt state regulations that prohibit the provision of telecommunications services.” (¶ 87)  So does the Third Way taketh federal authority only to giveth to state and local regulators?  Is the only way to avoid state and local regulations—oh, well, if you insist—to go to full Title II?  And might the FCC decide in any case to exercise its discretion, now or in the future, to allow local regulation of Internet connectivity?

What might those regulations look like?  One need only review the history of local telephone service to recall the rate-setting labyrinths, taxes, micromanagement of facilities investment and deployment decisions—not to mention the scourge of corruption, graft and other government crimes that have long accompanied the franchise process.  Want to upgrade your cable service?  Change your broadband provider?  Please file the appropriate forms with your state or local utility commission, and please be patient.

Fear-mongering?  Well, consider a proposal that will be voted on this summer at the annual meeting of the National Association of Regulatory Utility Commissioners.  (TC-1 at page 30)  The Commissioners will decide whether to urge the FCC to adopt what it calls a “fourth way” to fix the Net Neutrality problem.  Their description of the fourth way speaks for itself.  It would consist of:

“bi-jurisdictional regulatory oversight for broadband Internet connectivity service and broadband Internet service which recognizes the particular expertise of States in: managing front-line consumer education, protection and services programs; ensuring public safety; ensuring network service quality and reliability; collecting and mapping broadband service infrastructure and adoption data; designing and promoting broadband service availability and adoption programs; and implementing  competitively neutral pole attachment, rights-of-way and tower siting rules and programs.”

The proposal also asks the FCC, should it stick to the Third Way approach, to add in several other provisions left out of Chairman Genachowski’s list, including one (again, § 253) that would preserve the state’s ability to help out.

Or consider a proposal currently being debated by the California Public Utilities Commission.  California, likewise, would like to use reclassification as the key that unlocks the door to “cooperative federalism,” and has its own list of provisions the FCC ought not to forbear under the Third Way proposal.

Among other things, the CPUC’s general counsel is unhappy with the definition the FCC proposes for just who and what would be covered by Title II reclassification.  The CPUC proposal argues for a revised definition that “should be flexible enough to cover unforeseen technological [sic] in both the short- and long-term.”

The CPUC also proposes that the FCC add Voice over Internet Protocol telephony to the list of services regulated under Title II, even though VoIP is often a software application riding well above the “transmission” component of broadband access.

California is just the first (tax-starved) state I looked for.  I’m sure there are and will be others who will respond hungrily to the Commission’s invitation to “comment” on the appropriate role of state and local regulators under either a full or partial Title II regime.  (¶ 109, 110)

Sloth:  The sleeping giant of basic web functions – browsers, DNS lookup, and more – The NOI admits that the FCC is a bit behind the times when it comes to technical expertise, and they would like commenters to help them build a fuller record.  Specifically, ¶ 58 asks for help “to develop a current record on the technical and functional characteristics of broadband Internet service, and whether those characteristics have changed materially in the last decade.”

In particular, the NOI wants to know more about the current state of web browsers, DNS lookup services, web caching, and “other basic consumer Internet activities.”

Sounds innocent enough, but those are very loaded questions.  In the Brand X case, in which the U.S. Supreme Court agreed with the FCC that broadband Internet access over cable fit the definition of a Title I “information service” and not a Title II “telecommunications service,” browsers, DNS lookup and other “basic consumer Internet activities” were crucial to the analysis of the majority.  Because cable (and, later, it was decided, DSL) providers offered not simply a physical connection but also supporting or “enhanced” services to go with it—including DNS lookup, home pages, email support and the like—their offering to consumers was not simple common carriage.

Justice Scalia disagreed, and in dissent made the argument that cable Internet was in fact two separable offerings – the physical connection (the packet-switched network) and a set of information services that ran on top of that connection.  Consumers used some information services from the carrier, and some from other content providers (other web sites, e.g.).  Those information services were rightly left unregulated under Title I, but Congress intended the transmission component, according to Justice Scalia, to be treated as a common carrier “telecommunications service” under Title II.
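Scalia's separability point can be made concrete.  A DNS lookup is nothing more than application data handed to the transmission component for delivery, and the consumer can direct it to the carrier's resolver or to any third party's over the same physical connection.  The sketch below (my own illustration, not anything from the opinion or the NOI) builds a standard DNS query message per RFC 1035:

```python
import struct

def build_dns_query(hostname: str, query_id: int = 0x1234) -> bytes:
    """Construct a standard DNS query message (RFC 1035) for an A record."""
    # Header: ID, flags (recursion desired), 1 question, 0 answer/authority/
    # additional records.
    header = struct.pack("!HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question: each label length-prefixed, terminated by a zero byte,
    # then QTYPE=1 (A record) and QCLASS=1 (IN).
    labels = b"".join(
        bytes([len(part)]) + part.encode("ascii")
        for part in hostname.split(".")
    )
    question = labels + b"\x00" + struct.pack("!HH", 1, 1)
    return header + question

query = build_dns_query("example.com")
# The resolver need not be the carrier: this same payload could be sent
# over UDP port 53 to the ISP's DNS server or to anyone else's.
print(len(query))   # 12-byte header + 13-byte name + 4 bytes type/class
```

Nothing about these bytes depends on whose pipe carries them, which is the crux of treating the lookup service as an offering separable from the transmission.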

The Third Way proposal in large part adopts the Scalia view of the Communications Act (see ¶ 20, 106), despite the fact that it was the FCC who argued vigorously against that view all along, and despite the fact that a majority of the Court agreed with them.

By asking these innocent questions about technical architecture, the FCC appears to be hedging its bets for a certain court challenge.   Any effort to reclassify broadband Internet access will generate long, complicated, and expensive litigation.  What, the courts will ask, has driven the FCC to make such an abrupt change in its interpretation of terms like “information service” whose statutory definitions haven’t changed since 1996?

We know it is little more than that the Chairman would like to undo the Comcast decision, of course, and thereafter complete the process of adopting the open Internet rules proposed in October.  But in the event that proves an unavailing argument, it would be nice to be able to argue that the nature of the Internet and Internet access have fundamentally changed since 2005, when Brand X was decided.  If it’s clear that basic Internet services have become more distinct from the underlying physical connection, at least in the eyes of consumers, so much the better.

Or perhaps something bigger is lumbering lazily through the NOI.  Perhaps the FCC is considering whether “basic Internet activities” (browsing, searching, caching, etc.) have now become part of the definition of basic connectivity.  Perhaps Title II, in whole or in part, will apply not only to facilities-based providers, but to those who offer basic Internet services essential for web access.  (Why extend Title II to providers of “basic” information service?  See below, “Greed.”)  If so, the exception will swallow the rule, and just about everything else that makes the Internet ecosystem work.

Vanity:  The fading beauty of the cellular ingénue – Perhaps the most worrisome feature of the proposed open Internet rules is that they would apply with equal force to wired and wireless Internet access.  As any consumer knows, however, those two types of access couldn’t be more different. 

Infrastructure providers have made enormous progress in innovating improvements to existing infrastructure—especially the cable and copper networks.  New forms of access have also emerged, including fiber optic cable, satellite, WiFi/WiMax, and the nascent provisioning of broadband over power lines, which has particular promise in remote areas which may have no other option for access.

Broadband speeds are increasing, and there’s every expectation that given current technology and current investment plans, the National Broadband Plan’s goal of 100 million Americans with access to 100 Mbps Internet speeds by 2020 will be reached without any public spending.

The wireless world, however, is a different place.  After years of underutilization of 3G networks by consumers who saw no compelling or “killer” apps worth using, the latest generation of portable computing devices (iPhone, Android, Blackberry, Windows) has reached the tipping point and well beyond.  Existing networks in many locations are overcommitted, and political resistance to additional cell tower and other facilities deployment is exacerbating the problem.

Just last week, a front page story in the San Francisco Chronicle reported on growing tensions between cell phone providers and residents who want new towers located anywhere but near where they live, go to school, shop, or work.  CTIA-The Wireless Association announced that it would no longer hold events in San Francisco after the city’s Board of Supervisors, with the support of Mayor Gavin Newsom, passed a “Cell Phone Right to Know” ordinance that requires retail disclosure of a phone’s specific absorption rate for emitted radiation.

Given the likely continued lagging of cellular deployment, it seems prudent to impose less stringent network management restrictions on wireless than on wireline:  allowing providers, for example, to limit or even ban outright certain high-bandwidth data services, notably video services and peer-to-peer file sharing, that the network may simply be unable to support.  But the proposed open Internet rules will have none of that.

The NOI does note some of the significant differences between wired and wireless (¶ 102), but also reminds us that the limited spectrum for wireless signals affords the Commission special powers to regulate the business practices of providers. (¶ 103)  Under Title III of the Communications Act, which applies to wireless, the FCC has and makes use of the power to ensure spectrum uses are serving a broad “public interest.”

In some ways, then, Title III gives the Commission powers to regulate wireless broadband access beyond what they would get from a reclassification to Title II.  So even if the FCC were to choose the first option and leave the current classification scheme alone, wireless broadband providers might still be subject to open Internet rules under Title III.  It would be ironic if the only broadband providers whose network management practices were to be scrutinized were those who needed the most flexibility.  But irony is nothing new in communications law.

One power, however, might elude the FCC, and therefore might lend further weight to a scheme that would regulate wireless broadband under both Title III and Title II.  Title III does not include the extension of Universal Service to wireless broadband (¶ 103).  This is a particular concern given the increased reliance of under-served and at-risk communities on cellular technologies for all their communications needs.  (See the recent Pew Internet & American Life Project study for details.)

While the NOI asks for comment on whether and to what extent the FCC ought to treat wireless broadband differently from wired services, and on a later timetable, the thrust of this section makes clear the Commission is thinking of more, not less, regulation for the struggling cellular industry.

Greed:  Universal Service taxes – So what about Universal Service?  In an effort to justify the Title II reclassification as something more than just a fix to the Comcast case, the FCC has (with some hedging) suggested that the D.C. Circuit’s ruling also calls into question the Commission’s ability to implement the National Broadband Plan, published only a few weeks prior to the decision in Comcast.

At a conference sponsored by the Stanford Institute for Economic Policy Research that I attended, Chairman Genachowski was emphatic that nothing in Comcast constrained the FCC’s ability to execute the plan.

But in the run-up to the NOI, the rhetoric has changed.  Here the Chairman in his separate statement says only that “the recent court decision did not opine on the initiatives and policies that we have laid out transparently in the National Broadband Plan and elsewhere.”

Still, it’s clear that whether out of genuine concern or just for more political and legal cover, the Commission is trying to make the case that Comcast casts serious doubt on the Plan, and in particular the FCC’s recommendations for reform of the Universal Service Fund (USF).  (¶¶ 32-38).

Though the NOI politely recites the legal theories posed by several analysts for how USF reform could be done without any reclassification, the FCC is skeptical.  For the first and only time in the NOI, the FCC asks not for general comments on its existing authority to reform Universal Service but for the kind of evidence that would be “needed to successfully defend against a legal challenge to implementation of the theory.”

There is, of course, a great deal at stake.  The USF is fed by taxes paid by consumers as part of their telephone bills, and is used to subsidize telephone service to those who cannot otherwise afford it.  Some part of the fund is also used for the “E-Rate” program, which subsidizes Internet access for schools and libraries.

Like other parts of the fund, E-Rate has been the subject of considerable corruption.  As I noted in Law Four of “The Laws of Disruption,” a 2005 Congressional oversight committee labeled the then $2 billion E-Rate program, which had already spawned numerous criminal convictions for fraud, a disgrace, “completely [lacking] tangible measures of either effectiveness or impact.”

Today the USF collects $8 billion annually in consumer taxes, and there’s little doubt that the money is not being spent in a particularly efficient or useful way.  (See, for example, Cecilia Kang’s Washington Post article this week, “AT&T, Verizon get most federal aid for phone service.”)  The FCC is right to call for USF reform in the National Broadband Plan, and to propose repurposing the USF to subsidize basic Internet access as well as dial tone.  The needs for universal Internet access—employment, education, health care, government services, etc.—are obvious.

But what has this to do with Title II reclassification?  There’s no mention in the NOI of plans to extend the class of services or service providers obliged to collect the USF tax, which is to say there’s nothing to suggest a new tax on Internet access.  But Recommendation 8.10 of the NBP encourages just that.  The Plan recommends that Congress “broaden the USF contributions base” by finding some method of taxing broadband Internet customers.  (Congress has so far steadfastly resisted and preempted efforts to introduce any taxes on Internet access at the federal and state level.)

If Congress agreed with the FCC, broadband Internet access would someday be subject to taxes to help fund a reformed USF.  The bigger the category of providers included under Title II (the most likely collectors of such a tax), the bigger the USF.  The temptation to broaden the definition of affected companies from “facilities based” to something, as the California Public Utilities Commission put it, more “flexible,” would be tantalizing.


But other than these minor quibbles, the NOI offers nothing to worry about!

The Privacy and Security Totentanz

I participated last week in a Techdirt webinar titled, “What IT needs to know about Law.”  (You can read Dennis Yang’s summary here, or follow his link to watch the full one-hour discussion.  Free registration required.)

The key message of  The Laws of Disruption is that IT and other executives need to know a great deal about law—and more all the time.  And Techdirt does an admirable job of reporting the latest breakdowns between innovation and regulation on a daily basis.  So I was happy to participate.

Legally-Defensible Security

Not surprisingly, there were far too many topics to cover in a single seminar, so we decided to focus narrowly on just one:  potential legal liability when data security is breached, whether through negligence (lost laptop) or the criminal act of a third party (hacking attacks).  We were fortunate to have as the main presenter David Navetta, founding partner with The Information Law Group, who had recently written an excellent article on what he calls “legally-defensible security” practices.

I started the seminar off with some context, pointing out that one of the biggest surprises for companies in the Internet age is the discovery that having posted a website on the World Wide Web, they are suddenly and often inappropriately subject to the laws and jurisdiction of governments around the world.   (How wide is the web? World.)

In the case of security breaches, for example, a company may be required to disclose the incident to affected third parties (customers, employees, etc.) under state law.  At the other extreme, executives of the company handling the data may be criminally-liable if the breach involved personally-identifiable information of citizens of the European Union (e.g., the infamous Google Video case in Italy earlier this year, which is pending appeal).  Individuals and companies affected by a breach may sue the company under a variety of common law claims, including breach of contract (perhaps the violation of a stated privacy policy) or simple negligence.

The move to cloud computing amplifies and accelerates the potential nightmares.  In the cloud model, data and processing are subcontracted over the network to a potentially-wide array of providers who offer economies of scale, application or functional expertise, scalable hardware or proprietary software.  Data is everywhere, and its disclosure can occur in an exploding number of inadvertent ways.  If a security breach occurs in the course of any given transaction, just untangling which parties handled the data—let alone who let it slip out—could be a logistical (and litigation) nightmare.

The Limits of Negligence

Not all security breaches involve private or personal information, but it’s not surprising that the most notable (or at least the most vividly reported) breakdowns in security are those that expose consumer or citizen data, sometimes for millions of affected parties.  (Some of the most egregious losses have involved government computers left unsecured, with sensitive citizen data unencrypted on the hard drive.)  Consumer computing activity has surpassed corporate computing and is growing much faster.  Privacy and security are topics that are increasingly hard to disentangle.

Which is not to say that bungling data that affects millions of users necessarily translates to legal consequences for the company that held the information.  Under current law, even the most irresponsible behavior by a data handler often carries no liability.

For one thing, U.S. law does not require companies to spare no expense in protecting data.  As David Navetta points out, courts may find that, despite a breach, the precautions a company took were nonetheless economically sensible, that is, justified given the likelihood of a breach and the potential consequences that followed.  Adherence to ISO or other industry standards on data security may be sufficient to insulate a company from liability, though not always.  (Courts sometimes find that industry standards are too lax.)

For the most part, tort law still follows the classic negligence formula of the beatified American jurist Learned Hand, who explained that the duty of courts was to encourage behavior by defendants that made economic sense.  If courts found liability any time a breach occurred, data handlers would be incentivized to spend inefficient amounts of money on protection, leading to a net social loss.  (The classic cases involved sparks from locomotives causing fire damage to crops; perfect avoidance of the damage, the courts ruled, would cost too much relative to the harm caused and the probability of its occurring.)
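The Hand calculus can be stated compactly: a precaution is legally required only when its burden (B) is less than the probability of the loss (P) multiplied by the gravity of the loss (L).  A minimal sketch of the arithmetic, with entirely hypothetical numbers chosen for illustration:

```python
def negligent(burden: float, probability: float, loss: float) -> bool:
    """Hand formula (United States v. Carroll Towing, 1947):
    failing to take a precaution is negligent when B < P * L."""
    return burden < probability * loss

# A $50,000 encryption program against a 1% annual chance of a $10M breach:
# B (50,000) < P * L (100,000), so skipping the precaution is negligent.
print(negligent(50_000, 0.01, 10_000_000))   # True

# The same precaution against a 0.1% chance of the same breach:
# B (50,000) > P * L (10,000), so the law does not demand it.
print(negligent(50_000, 0.001, 10_000_000))  # False
```

The point of the formula is exactly the one in the locomotive cases: below the B = P × L line, further spending on prevention costs society more than the harm it avoids.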

That, at least, is the common law regime that applies in the U.S.  The E.U., under laws enacted in support of its 1995 Privacy Directive, follows a different rule, one closer to product liability law, where any failure leads to per se liability for the manufacturer, or indeed for any company in the chain of sales to a consumer.

A case last week from the Ninth Circuit Court of Appeals, however, reminds us that a finding of liability doesn’t necessarily lead to an award of damages.  In Ruiz v. Gap, a job applicant whose personal information was lost when two laptop computers were stolen from a Gap vendor who was processing applications sued Gap, claiming to represent a class of applicants who were victims of the loss.

All of Ruiz’s claims, however, were rejected.  Affirming the lower court and agreeing with most other courts to consider the issue, the Ninth Circuit held that Ruiz could not sue Gap without a showing of “appreciable and actual damage.”  The cost of forward-looking credit monitoring didn’t count (Gap offered to pay Ruiz for that in any case), nor did speculative claims of future losses.  Actual losses, expressible and provable in monetary terms, were required.

The court also rejected claims under California state law and the state constitution, noting that an “invasion of privacy” does not occur until there is actual misuse of the data contained on the stolen laptops.  (Most laptop thefts are presumably motivated by the value of the hardware, not any data that might reside on the hard drive.)

As Eric Goldman succinctly points out, the Ninth Circuit case highlights some odd behavior by plaintiff class action lawyers in the recent hubbub involving Facebook, Google, and other companies who either change their privacy policies or who use customer data in ways that arguably violate that policy.  “[T]he most disturbing thing,” Eric writes, “is that so many plaintiffs’ lawyers seem completely uninterested in pleading how their clients suffered any consequence (negative or otherwise) from the gaffe at all. Their approach appears to be that the service provider broke a privacy promise, res ipsa loquitur, now write us a check containing a lot of zeros.”

A Surprising Lack of Law – And an Alternative Model for Redress

It’s not just the lawyers who are confused here.  U.S. consumers, riled up by stories in mainstream media, seem to live under the misapprehension that they have some legal right to privacy, or to the protection of personal information, that can be enforced in court against corporations.

That is true in the E.U., but not in the U.S.  The Constitutional “right to privacy” detailed in U.S. Supreme Court decisions of the last fifty years only applies to protections against government behavior.  There is no Constitutional right to privacy that can be enforced against employers, business partners, corporations, parents, or anyone else.

What about statutes?  With a few specific exceptions for medical information, credit history, and a few other categories, there is also no federal and, for the most part, no state law that protects consumer privacy against corporations.  There’s no law that requires a website to publish its privacy policy, let alone follow it.  Even if a privacy policy constitutes an enforceable contract (not an entirely settled matter of law), the Ruiz case reminds us that breach of contract is irrelevant without evidence of actual monetary damages.

Before storming the barricades demanding justice, however, keep in mind that the law is not the only source of a remedy.  (Indeed, law is rarely the most efficient or effective in any case.)

The lack of a legal remedy for misuse of private information doesn’t mean that companies can do whatever they like with data they collect, or need take no precautions to ensure that information isn’t lost or stolen.

As more and more personal and even intimate data migrates to the cloud, it has become crystal-clear that consumers are increasingly sensitive (perhaps, economically-speaking, over-sensitive) about what happens to it.  Consumers express their unhappiness in a variety of media, including social networking sites, blogs, emails, and tweets.  They can and do put economic pressure on companies whose behavior they find unacceptable:  boycotts, switching to other providers, and through activism that damages the brand of the miscreant.

Even if the law offers no remedy, in other words, the court of public opinion has proven quite effective.  Even without a court ordering them to do so, some of the largest data handlers have made drastic changes to their policies, software, and how they communicate with users.

Looming in the background of these stories is always the possibility that if companies fail to appease their customers, the customers will lobby their elected representatives to provide the kind of legal protections that so far haven’t proven necessary.  But given the mismatch between the pace of innovation and the pace of legal change, legislation should always be the last, not the first, resort.

So expect lots more stories about security breaches, and expect most of them to involve the potential disclosure of personal information.   (That’s one reason that laws requiring disclosure of breaches are a good idea.  Consumers can’t flex their power if they are kept in the dark about behavior they are likely to object to.)

And that means, as we concluded in the seminar, that IT executives making security decisions had better start talking to their counterparts in the general counsel’s office.

Because as hard as it is for those two groups to talk to each other, it’s much harder to have a conversation after a breach than before.  IT makes decisions that affect the legal position of the company; lawyers make decisions that affect the technical architecture of products and services.  The question isn’t whether to formulate a legally-defensible security policy, in other words, but when.