Category Archives: Globalization

The Italian Job: What the Google Convictions are Really About

I was pleased to be interviewed last night on BBC America World News (live!) about the convictions of three senior Google executives by an Italian court for privacy violations.  The case involved a video uploaded to Google Videos (before the acquisition of YouTube) that showed the bullying of a person with disabilities. (See “Larger Threat is Seen in Google Case” by the New York Times’ Rachel Donadio for the details.)

Internet commentators were up in arms about the convictions, which can’t possibly be reconciled with European law or common sense.  The convictions won’t survive appeal, and the government knows that as well as anyone.  It neither wants to nor intends to win this case.  If it did, that would mean the end of the Internet in Italy, if nothing else.  Still, the case is worth worrying about, for reasons I’ll make clear in a moment.

But first, let’s consider the merits of the prosecution.  Prosecutors bring criminal actions because they want to change behavior—the behavior of the defendant and, more important given the government’s limited resources, of others like him.  What behavior did the government want to change here?

The video was posted by a third party.  Within a few months, the Italian government notified Google of its belief that the video violated the privacy rights of the bullying victim, and Google took it down.  Google also cooperated in helping the government identify who had posted it, which in turn led to the bullies themselves.

The only thing the company did not do was screen the video before posting it.  The Google executives convicted in absentia had no personal involvement with the video.  They were prosecuted for what they did not do, and did not do personally.

So if the prosecution stands, it leads to a new rule for third-party content: to avoid criminal liability, company executives must personally ensure that no hosted content violates the rights of any third party.

In the future, the only thing employees of Internet hosting services of all kinds could do to avoid criminal prosecution would be to pre-screen all user content before putting it on their websites.  And pre-screen it for what?  Any possible violation of any possible right.  So not only would they have to review the content with an eye toward the laws of every possible jurisdiction, but they would also need to obtain releases from everyone involved, and to ensure those releases were legally binding.  For starters.

It’s unlikely that such filtering could be done in an automated fashion. It is true that YouTube, for example, filters user postings for copyright violations, but that is only because the copyright holders give them reference files that can be compared. The only instruction this conviction communicates to service providers is “don’t violate any rights.” You can’t filter for that!
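To make the distinction concrete, here is a minimal sketch in Python of how reference-based matching works.  The fingerprint values and function names are hypothetical, and real systems use perceptual audio/video fingerprints rather than the plain hash shown here:

    import hashlib

    # Fingerprints supplied in advance by rights holders (hypothetical values).
    REFERENCE_FINGERPRINTS = {
        "9a0364b9e99bb480dd25e1f0284c8555": "Studio X feature film",
    }

    def fingerprint(data: bytes) -> str:
        # Stand-in for a real perceptual fingerprint; a plain hash only
        # matches byte-identical copies, but it illustrates the idea.
        return hashlib.md5(data).hexdigest()

    def screen_upload(upload: bytes):
        # Returns the matched reference work, or None if nothing matches.
        return REFERENCE_FINGERPRINTS.get(fingerprint(upload))

The sketch illustrates the asymmetry: matching uploads against an enumerated set of reference works is a tractable lookup, but there is no reference file for “any possible violation of any possible right,” so there is nothing comparable to screen against.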

The prosecutor’s position in this case is that criminal liability is strict—that is, that it attaches even to third parties who do nothing beyond hosting the content.

If that were the rule, there would of course be no Internet as we know it. No company could possibly afford to take that level of precaution, particularly not for a service that is largely or entirely free to users. The alternative is to risk prison for any and all employees of the company.

(The Google execs got sentences of six months in prison each, but they won’t serve them no matter how the case comes out. In Italy, sentences of less than three years are automatically suspended.)

And of course that isn’t the rule.  Both the U.S. and the E.U. wisely grant immunity to services that simply host user content, whether it’s videos, photos, blogs, websites, ads, reviews, or comments.  That immunity has been settled law in the U.S. since 1996 and in the E.U. since 2000.  Without it, we simply wouldn’t have (for better or worse) YouTube, Flickr, MySpace, Twitter, Facebook, Craigslist, eBay, blogs, user reviews, comments on articles or other postings, feedback, and so on.

(The immunity law, as I write in Law Five of “The Laws of Disruption,” is one of the best examples of the kind of regulation that encourages rather than interferes with emerging technologies and the new forms of interaction they enable.)

Once a hosting service becomes aware of a possible infringement of rights, to preserve immunity most jurisdictions require a reasonable investigation and, assuming the complaint has merit, removal of the offending content.  That, for example, is the “notice and takedown” regime in the U.S. for content that violates copyright.

The government in this case knows the rule as well as anyone.  This prosecution is entirely cynical—the government neither wants to nor intends to win on appeal.  It was brought to give the appearance of doing something in response to the disturbing contents of the video (the actual perpetrators and the actual poster have already been dealt with). Google in this sense is an easy target, and a safe one in that the company will vigorously fight the convictions until the madness ends.

And not unrelated, it underscores a message the Italian government has been sending any way it can to those forms of media it doesn’t already control—that it will use whatever means at its disposal, including the courts, to intimidate sources it can’t yet regulate.

So in the end it isn’t a case about liability on the Internet so much as a case about the power of new media to challenge governments that aren’t especially interested in free speech.

Internet pundits are right to be outraged and disturbed by the audacious behavior of the government.  But they should be more concerned about what this case says about freedom of the press in Italy and less about what it says about the future of liability for content hosts.

And what it says about the Internet as a powerful, emerging form of communication that can’t easily be intimidated.

Comcast: The New Forces at Work

My op-ed today in The Hill (see “The Winter of Our Content”) argues against those who want to derail the merger of Comcast and NBC Universal.  I don’t know enough to say whether the deal makes good business sense—that’s for the companies’ shareholders to decide in any case.  But I do know that every media or communications merger of the last twenty years has been resisted for the same reason—that the combined entity will both have and exercise excessive market power to the detriment of consumers.

That argument has turned out to be wrong every time.  It will be here as well.

Under the terms of the agreement, Comcast will get a 51% interest in NBC, Universal and several valuable cable channels including MSNBC and Bravo.  Comcast already owns E!, the Golf Channel, and other content, as well as being a leading provider of cable TV access, Internet access and, more recently, phone service.

A wide range of public advocacy groups have already objected that the new Comcast will be too powerful, and will have “every incentive” to keep programming it controls off the Internet, including new services such as Hulu, which is 33% owned by NBC.  Consumer groups also fear that Comcast will dismantle NBC’s broadcast network, all in the service of pushing American consumers onto paid cable TV subscriptions.

Why Comcast would want to use its leverage in the interest of only one part of its business, I don’t understand.  But even if that were the goal, I very much doubt it would be achievable even with the new assets the company will acquire.

As is typical in industries undergoing wrenching and dramatic consolidation and reallocation of assets, the urge to merge is a function of three principal forces, first introduced in my earlier book, Unleashing the Killer App. These forces—globalization, digitization, and deregulation—are themselves a function of the profound technological innovation that all of us know as consumers of devices, services, and products that didn’t exist just a few years ago.

There are several technologies involved here, including standards (the Internet protocols as well as compression and data structures for various media), software (the Web et al), hardware (faster-cheaper-smaller everything) and new forms of bit transportation, including cable, satellite, and fiber.  It’s the combination of these that makes possible the dramatic ascent of new applications—everything from Napster to YouTube to the iPhone to TiVo.  It’s why there are now hundreds if not thousands of channels of available programming, increasingly in high-definition and perhaps soon in 3D and other innovations.

With the advance of digital technology, driven by Moore’s Law and Metcalfe’s Law, all content is moving at accelerating speeds from analog to digital forms of creation, storage, and transport.  (This includes media content as well as user content—email, phone calls, home movies, and photos.)  See my earlier post, “Hollywood:  We have met the enemy…”
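For readers who want the shorthand spelled out, the two laws are conventionally stated roughly as follows (the two-year doubling period and the quadratic form are standard approximations, not figures from this post):

    % Moore's Law: transistor counts (and, roughly, computing power per dollar)
    % double about every two years.
    N(t) \approx N_0 \cdot 2^{t/2}, \quad t \text{ in years}

    % Metcalfe's Law: the value of a network grows roughly as the square of
    % the number of connected users n.
    V(n) \propto n^2

Together they capture the “accelerating” part of the shift: the cost of creating, storing, and moving bits keeps falling, while the value of connecting more people keeps compounding.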

That fundamental shift has made it easier to create global markets for content use and, in turn, has put pressure on regulators to open up what had been highly parochial approaches to protecting the diversity of content.  Until very recently, that diversity in the U.S. was represented by a whopping three choices of television programming—those of ABC, CBS, and NBC.

As globalization and digitization advance, the pressure to deregulate increases.  Caps and other artificial limitations on media ownership have been falling away over the last twenty years.  Clear rules separating who can transport data versus voice versus video make less and less sense, and have been removed.

Each of these changes has been resisted by consumer groups.  One long-forgotten change to the media industry occurred even before the rise of digital life, in the stone age of 1995.  That was the year the FCC eliminated the “financial interest and syndication” rules, or finsyn, which had been adopted in 1970 to limit the power of the three broadcast networks.  (See Capital Cities v. FCC, 29 F.3d 309 (7th Cir. 1994)).

Finsyn, among other controls, limited the ownership interest the networks could take in prime-time programming, and prohibited them from directly syndicating the programming they did own.  Once a program, say “Gilligan’s Island,” finished its prime-time network run, the networks could syndicate it only through third-party syndicators.  The goal was to protect non-affiliated stations (mostly on the UHF band), which might not get a chance to buy syndicated programs at all if the networks kept control.  Left to themselves, the networks might have syndicated only to their own affiliates.

Cable TV, which made the weak UHF signal stronger, along with the rise of Fox as a fourth network and independent producers who self-syndicated (particularly Paramount, which produced several made-for-syndication Star Trek series), made clear that the finsyn rules were no longer necessary.  The independent stations and consumer advocates fought to retain them anyway, and lost.

Of course we now have more diversity of programming than anyone in 1995 would have ever imagined possible.  Not because finsyn was repealed, but in spite of that fact.  Technology, left alone, achieved multiples of whatever metric regulators established for their efforts.

Those who object to the reallocation of industry assets see these deals entirely as efforts by vested interests to resist change inspired by what I called “the new forces.”  In part these deals are surely trying to hold back the flood.  They may even be motivated by the belief that consolidation translates to control.

But it never works out that way.  Consumers always get what they want, usually sooner rather than later, and regardless of what entrenched industry providers may or may not want.  Artificial limits on who can do what do more to hold back the technologically inevitable than they do to protect consumers.

Resistance here is not only futile, it’s counter-productive.

The End of the American Internet

Forty years after the first successful connection was made on the predecessor to the Internet, the U.S. has given up its fading claims to govern the network.

A fight over governance that erupted in 1998 has ended with a whimper.

In this case, I’m not talking about the regulation of human activity that takes place using the Internet, but about the internal workings of the network itself.

As reported by the Advisory Committee of the Congressional Internet Caucus, the U.S. government’s agreement with ICANN was allowed to expire on September 29th. (The Department of Commerce has a separate agreement with ICANN, which was also significantly modified.)

ICANN is a non-profit corporation formed in 1998 to manage two key aspects of network governance: the assignment of domain names and website suffixes and of IP addresses for computers connected to the Internet. There are now over 110,000,000 registered domains.
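The two resources ICANN coordinates, names and addresses, show up in any ordinary lookup.  Here is a minimal sketch in Python (the domain is just an illustrative example, and the address returned will vary):

    import socket

    domain = "example.com"                     # a name under the ".com" suffix
    ip_address = socket.gethostbyname(domain)  # the numeric address it maps to
    print(domain, "->", ip_address)

Everything behind that one-line lookup (who may register the name, who hands out the address blocks, and who runs the root of the system) is what ICANN was created to manage.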

Hard as it is to believe, before 1998 the management of names and addresses was largely left to the efforts of Jon Postel, a computer science professor at the University of Southern California.  As the Internet shifted dramatically from an academic and government network to a consumer and business network, it became clear that a more formal mechanism of governance was required.

But by then the Internet had become a global phenomenon.  The U.S. government was adamant that it retain some measure of control over its invention; the rest of the world argued that vesting authority over a global infrastructure in a single national government would cripple it, or worse.  Hearings were held, speeches were made, the U.N. was called in (literally).

ICANN was the compromise, and it was an ugly compromise at that.  The organization has since churned through several executive directors and weathered repeated political battles.  Just explaining the selection of members of its Board of Directors, as David Post demonstrates in Figure 10.3 of his book, “In Search of Jefferson’s Moose: Notes on the State of Cyberspace,” requires a flowchart with nearly fifty boxes.

It has also been the subject of regular criticism, in particular for the ways in which it subcontracts the registration of domain names, its resistance to creating new “dot” suffixes, and its evolving and weird process for resolving disputes over “ownership” of domains, typically involving a claim of trademark infringement or unfair competition. Former board member Karl Auerbach, quoted in Information Week, put it this way:

At the end of the day it comes down to this: ICANN remains a body that stands astride the Internet’s domain name system, not as a colossus but more as a Jabba the Hutt. ICANN is a trade guild in which member vendors meet, set prices, define products, agree to terms of sales, and allow only chosen new vendors to enter the guild and sell products.

Still, through the dot-com boom and bust, Web 2.0, and social media, the Internet has continued to grow, operate, and reinvent itself as new technologies arrive on the scene.

And what started as a U.S. government project is now clearly a worldwide convenience. According to Christopher Rhoads in The Wall Street Journal, “today just 15% of the world’s estimated 1.7 billion Internet users reside in North America.”

Which is perhaps why the end of federal government oversight of ICANN received so little attention in 2009.

But in 1998, you would have thought the future of civilization depended on keeping the Internet an American property.