Category Archives: Information Economics

Viacom v. YouTube: The Principle of Least Cost Avoidance

I’m late to the party, but I wanted to say a few things about the District Court’s decision in the Viacom v. YouTube case this week.  This will be a four-part post, covering:

1.  The holding

2.  The economic principle behind it

3.  The next steps in the case

4.  A review of the errors in legal analysis and procedure committed by reporters covering the case

I’ve written before (see “Two Smoking Guns and a Cold Case,” “Google v. Everyone” and “The Revolution will be Televised…on YouTube”) about this case, in which Viacom sued YouTube and Google (which owns YouTube) in 2007 for $1 billion in damages, claiming massive copyright infringement of Viacom content posted by YouTube users.

There’s no question of the infringing activity or its scale.  The only question in the case is whether YouTube, as the provider of a platform for uploading and hosting video content, shares any of the liability of those among its users who uploaded Viacom content (including clips from Comedy Central and other television programming) without permission.

The more interesting questions raised by the ascent of new video sites aren’t addressed in the opinion.  Whether the users understood copyright law, and whether their intent in uploading their favorite clips from Viacom programming was to promote Viacom rather than to harm it, were not considered.  Indeed, whether on balance Viacom was helped more than harmed by the illegal activity, and how either should be calculated under current copyright law, is not relevant to this decision and is saved for another day and perhaps another case.

That’s because Google moved for summary judgment on the basis of the Digital Millennium Copyright Act’s “safe harbor” provisions, which immunize service providers from any kind of attributed or “secondary” liability for user behavior when certain conditions are met.  Most important, a service provider can dock safe from liability only if it can show that it:

– did not have “actual knowledge that the material…is infringing,” or is “not aware of facts or circumstances from which infringing activity is apparent” and

– upon obtaining such knowledge or awareness “acts expeditiously to remove…the material” and

– does not “receive a financial benefit directly attributable to the infringing activity,” in a case in which the service provider has “the right and ability to control such activity,” and

– upon notification of the claimed infringement, “responds expeditiously to remove…the material that is claimed to be infringing….”

Note that all four of these elements must be satisfied to benefit from the safe harbor.  (A rough sketch of the test in code follows below.)
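Because the prongs are conjunctive, the safe harbor behaves like a boolean AND: fail any one prong and the protection is gone.  Here is a minimal sketch of that structure in Python; the field names and logic are my own paraphrase of the statutory language, not anything drawn from the opinion:

```python
# Hypothetical sketch of the DMCA section 512(c) safe harbor as a
# conjunction of conditions -- failing any one forfeits the protection.
from dataclasses import dataclass

@dataclass
class ProviderConduct:
    actual_knowledge: bool            # knew specific material was infringing
    red_flag_awareness: bool          # facts/circumstances made it apparent
    removed_expeditiously: bool       # acted quickly once it knew
    direct_financial_benefit: bool    # revenue tied to the infringement
    right_and_ability_to_control: bool
    honors_takedown_notices: bool     # removes material when notified

def qualifies_for_safe_harbor(c: ProviderConduct) -> bool:
    # A provider with no specific knowledge qualifies outright; one that
    # did learn of infringement can still qualify if it removed the
    # material expeditiously.
    knowledge_prong = (not (c.actual_knowledge or c.red_flag_awareness)
                       or c.removed_expeditiously)
    # The financial-benefit prong only bites when the provider also has
    # the right and ability to control the activity.
    benefit_prong = not (c.direct_financial_benefit
                         and c.right_and_ability_to_control)
    return knowledge_prong and benefit_prong and c.honors_takedown_notices
```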

The question for Judge Stanton to decide on YouTube’s motion for summary judgment was whether YouTube met all the conditions, and he ruled that it did.

1.  The Slam-Dunk for Google

The decision largely comes down to an interpretation of what phrases like “the material” and “such activity” mean in the above-quoted sections of the DMCA.

Indeed, the entire opinion can be boiled down to one sentence on page 15.  After reviewing the legislative history of the DMCA at length, Judge Stanton concludes that the “tenor” of the safe harbor provisions leads him to interpret infringing “material” and “activity” to mean “specific and identifiable infringements of particular individual items.”

General knowledge, which YouTube certainly had, that some of its users were (and still are) uploading content protected by copyright law without permission, is not enough to defeat the safe harbor and move the case to a determination of whether or not secondary liability can be shown.  “Mere knowledge of prevalence of such activity in general,” Judge Stanton writes, “is not enough.”

To defeat a safe harbor defense at the summary judgment stage, in other words, a content owner must show that the service provider knew or should have known about specific instances of infringement.  Such knowledge could come from a service provider hosting subsites with names like “Pirated Content” or other “red flags.”  But in most cases, as here, the service provider would not be held to know about specific instances of infringement until informed of them, most often from takedown notices sent by copyright holders themselves.

Whether ad revenue constitutes “direct financial benefit” was not tested, because, again, that provision only applies to “activity” the service provider has the right to control.  “Activity,” as Judge Stanton reads it, also refers to specific instances of illegal content distribution.

As Judge Stanton notes, YouTube users currently post 24 hours of video content every minute, making it difficult if not impossible, as a practical matter, for YouTube to know which clips are not authorized by rights holders.  And when Viacom informed the site of some 100,000 potentially infringing clips, YouTube removed nearly all of them within a day.  That is how the DMCA was intended to work, according to Judge Stanton, and indeed demonstrates that it is working just fine.

Viacom, of course, is free to pursue the individuals who posted its content without permission, but everyone should know by now that for many reasons that’s a losing strategy.

2.  The Least-Cost Avoider Principle

On balance, Judge Stanton is reading what is clearly an ambiguous statute with a great deal of common sense.  To what extent the drafters of the DMCA intended the safe harbor to apply to general vs. specific knowledge is certainly not clear from the plain language, nor, really, from the legislative history.  (Some members of the U.S. Supreme Court believe strongly that legislative history, in any case, is irrelevant in interpreting a statute, even if ambiguous.)

To bolster his interpretation that the safe harbor protects all but specific knowledge of infringement, interestingly, Judge Stanton points out that this case is similar to one decided a few months ago in the Second Circuit.  In that case, the court refused to apply vicarious liability for trademark infringement to eBay for customer listings of fake Tiffany’s products.

Though trademark and copyright law are quite different, the analogy is sensible.  In both cases, the question comes down to one of economic efficiency.  Which party, that is, is in the best position to police the rights being violated?

Here’s how the economic analysis might go.  Given the existence of new online marketplaces and video sharing services, and given the likelihood and ease with which individuals can use those services to violate information rights (intentionally or otherwise, for profit or not), the question for legislators and courts is how to minimize the damage to the information rights of some while still preserving the new value to information in general that such services create.

For there is also no doubt that the vast majority of eBay listings and YouTube clips are posted without infringing the rights of any third party, and that the value of such services, though perhaps not easily quantifiable, is immense.  EBay has created liquidity in markets that were too small and too disjointed to work efficiently offline.  YouTube has enabled a new generation of users with increasingly low-cost video production tools to distribute their creations, get valuable feedback and, increasingly, make money.

That these sites (and others, including Craigslist) are often Trojan Horses for illegal activities could lead legislators to ban them outright, but that clearly gets the cost-benefit equation wrong.  A ban would generate too much protection.

At the same time, throwing up one’s hands and saying that a certain class of rights holders must accept all the costs of damage, without any means of reducing or eliminating those costs, would be overly generous in the other direction.  Neither users, nor service providers, nor rights holders would have any incentive to police user behavior.  The basic goals of copyright and trademark might be seriously damaged as a result.

The goal of good legislation in situations like this (where overall benefit outweighs individual harm and where technology is changing the equation rapidly) is to produce rules that are most likely to get the balance right and to do so with the least amount of expensive litigation.  The DMCA provisions described above are one attempt at creating such rules.

But those rules, given the uncertainties of emerging technologies and the changing behaviors of users, can’t possibly give judges the tools to decide every case with precision.  Such rules must be at least a little ambiguous (if not a lot).  Judges, as they have done for centuries, must apply other, objective interpretive tools to help decide individual cases even as the targets keep moving.

Judge Stanton’s interpretation of the safe harbor provisions follows, albeit implicitly, one of those neutral tools, the same one applied by the Second Circuit in the eBay case.  And that is the principle of the least-cost avoider.

This principle encourages judges to interpret the law, where possible, such that the burden of reducing harmful behavior falls to the party in the best position, economically, to avoid it.  That way, as parties in similar situations in the future evaluate the risk of liability, they will be more likely to choose a priori behaviors that reduce not only the risk of damages but also the cost of more litigation.

In the future, if Judge Stanton’s ruling stands, rights holders will be encouraged to police video sites more carefully.  Service providers such as YouTube will be encouraged to respond quickly to legitimate demands to remove infringing content.

Given the fact that activities harmful to rights holders are certain to occur, in other words, the least cost avoider principle says that a judge should rule in a way that puts the burden of minimizing the damage on the party who can most efficiently avoid it.  In this case, the choice would be between YouTube (preview all content before posting and ensure legal rights have been cleared), Viacom (monitor sites carefully and quickly demand takedown of infringing content) or the users themselves (don’t post unauthorized content without expecting to pay damages or possible criminal sanctions).
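To make the comparison concrete, here is a toy version of the calculation.  The dollar figures are invented purely for illustration; neither the court nor the parties quantified anything like this:

```python
# Toy illustration of the least-cost avoider principle with invented
# numbers: assign the burden to whichever party can prevent the harm
# most cheaply.
avoidance_costs = {
    "YouTube (screen every upload for clearance)": 500_000_000,
    "Viacom (monitor the site, send takedown notices)": 5_000_000,
    "Users (individually clear rights, risk damages)": 50_000_000,
}

least_cost_avoider = min(avoidance_costs, key=avoidance_costs.get)
print(f"Burden falls on: {least_cost_avoider}")
# -> Burden falls on: Viacom (monitor the site, send takedown notices)
```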

Here, the right answer economically is Viacom, the rights holder who is directly harmed by the infringing behavior.

That may seem unfair from a moral standpoint.  For, after all, Viacom is the direct victim of the users’ clearly unlawful behavior and of the failure of YouTube, the enabler of those users, to stop it.  Why should the victim be held responsible for preventing further damage to itself?

But there’s a certain economic logic to that decision, though one difficult to quantify.  (Judge Stanton made no effort to do so; indeed, he did not invoke the least cost avoider principle explicitly.)  The grant of a copyright or a trademark is the grant of a monopoly on a certain class of information, a grant that itself comes with inherent economic inefficiencies, accepted in the service of a larger social goal: encouraging investment in creative works.

Part of the cost of having such a valuable monopoly is the cost of policing it, even in new media and new services that the rights holder may not have any particular interest in using itself.

By interpreting the DMCA as protecting service providers from mere general knowledge of infringing behavior, Judge Stanton has signaled that Viacom can police YouTube more efficiently than YouTube can.  Why?  For one thing, Viacom has the stronger incentive to ensure unauthorized content stays off the site.  It alone also knows both what content it has rights to and when that content appears without authorization.  (Discovery turned up several examples of content that Viacom ordered YouTube to remove but that, it turned out, had been posted by Viacom or its agents masquerading as users in order to build buzz.)

The cost of monitoring and stopping unauthorized posting is not negligible, of course.  But YouTube, eBay and other service providers increasingly provide tools to make the process easier, faster, and cheaper for rights holders.  They may or may not be obligated to do so as a matter of law; for now, their decision to do so represents an organic and efficient form of extra-legal rulemaking that Judge Stanton is eager to encourage.

No matter what, someone has to bear the bulk of the cost of monitoring and reporting violations.  Viacom can do it cheaper, and can more easily build that cost into the price it charges for authorized copies of its content.

And where it cannot easily issue takedown orders to large, highly-visible service providers like YouTube, it retains the option, admittedly very expensive, to sue the individuals who actually infringed.  It can also try to invoke the criminal aspect of copyright law, and get the FBI (that is, the taxpayer) to absorb the cost.

To rule the other way–to deny YouTube its safe harbor–would encourage service providers to overspend on deterrence of infringing behavior.  In response, perhaps YouTube and other sites would require, before posting videos, that users provide legally-binding and notarized documentation that the user either owns the video or has a license to post it.  Obtaining such agreements, not to mention evaluating them for accuracy, would effectively mean the end of video sites.  Denying the safe harbor based on general knowledge, to put it another way, would effectively interpret the DMCA as a ban on video sites.

That would be cheaper for Viacom, of course, but would lead to overall social loss.  Right and wrong, innocence and guilt, are largely excluded from this kind of analysis, though certainly not from the rhetoric of the parties.  And remember that actual knowledge or general awareness of specific acts of infringement would, according to Judge Stanton’s rule, defeat the safe harbor.  In that case, to return to the economic terminology, the cost of damages—or, if you prefer, assigning some of the blame—would shift back on YouTube.

3.  What’s Next?

Did Judge Stanton get it right as a matter of information economics?  It appears that the answer is yes.  But did he get it right as a matter of law—in this case, of the DMCA?

That remains to be seen.

Whether one likes the results or not, as I’ve written before, summary judgment rulings by district courts are never the last word in complex litigation between large, well-funded parties.  That is especially so here, where the lower court’s interpretation of a federal law is largely untested in the circuit and, indeed, in any circuit.

Judge Stanton cites as authority for his view of the DMCA a number of other lower court cases, many of them in the Ninth Circuit.  But as a matter of federal appellate law, Ninth Circuit cases are not binding precedent on the Second Circuit, where Judge Stanton sits.  And other district (that is, lower) court opinions cannot be cited by the parties as precedent even within a circuit.  They are merely advisory.  (A Ninth Circuit case involving Veoh is currently on appeal; the service provider won on a “safe harbor” argument similar to Google’s in the lower court.)

So this case will certainly head for appeal to the Second Circuit, and perhaps from there to the U.S. Supreme Court.  But a Supreme Court review of the case is far from certain.  Appeals to the circuit court are the right of the losing party.  A petition to the Supreme Court, on the other hand, is accepted at the Court’s discretion, and the Court turns down the vast majority of cases that it is asked to hear, often without regard to the economic importance or newsworthiness of the case.  (The Court refused to hear an appeal in the Microsoft antitrust case, for example, because the lower courts largely applied existing antitrust precedents.)

A circuit court reviewing summary judgment will make a fresh inquiry into the law, accepting the facts alleged by Viacom (the losing party below) as if they were all proven.  If the Second Circuit follows Judge Stanton’s analogy to the eBay case, Google is likely to prevail.

If the appellate court rejects Judge Stanton’s view of specificity, the case will return to the lower court and move on, perhaps to more summary judgment attempts by both parties and, failing that, a trial.  More likely, at that point, the parties will reach a settlement, or an overall licensing agreement, which may have been the point of bringing this litigation in the first place.  (A win for Viacom, as in most patent cases, would have given the company better negotiating leverage.)

4.  Getting it Right or Wrong in the Press

That brief review of federal appellate practice is entirely standard—it has nothing to do with the facts of this case, the parties, the importance of the decision, or the federal law in question.

Which makes it all the more surprising when journalists who regularly cover the legal news of particular companies continually get it wrong when describing what has happened and/or what happens next.

Last and perhaps least, here are a few examples from some of the best-read sources:

The New York Times – Miguel Helft, who covers Google on a regular basis, commits some legal hyperbole in saying that Judge Stanton “threw out” Viacom’s case, and that “the ruling” (that is, this opinion) could have “major implications for …scores of Internet sites.”  The appellate court decision will be the important one, but technically it will apply only to cases brought in the Second Circuit.  The lower court’s decision, even if upheld, will carry no precedential weight in future litigation.  Helft also quotes statements from counsel at both Viacom and Google that are filled with legal errors, though perhaps understandably so.

The Wall Street Journal – Sam Schechner and Jessica E. Vascellaro make no mistakes in their report of the decision.  They correctly explain what summary judgment means, and summarize the ruling without distorting it.  Full marks.

The Washington Post – Cecilia Kang, who covers technology policy for the Post, incorrectly characterizes Judge Stanton’s ruling as a “dismissal” of Viacom’s lawsuit.  A dismissal, as opposed to the granting of a motion for summary judgment, generally happens earlier in litigation, and signals a much weaker case, often one for which the court finds it has no jurisdiction or which, even if all the alleged facts are true, doesn’t amount to behavior for which a legal remedy exists.  Kang repeats the companies’ statements, but also adds a helpful quote from Public Knowledge’s Sherwin Siy about the balance of avoiding harms.

The National Journal – At the website of this legal news publication, Juliana Gruenwald commits no fouls in this short piece, with an even better quote from PK’s Siy.

CNET News.com – Tech news site CNET’s media reporter Greg Sandoval suggests that “While the case could continue to drag on in the appeals process, the summary judgment handed down in the Southern District of New York is a major victory for Google . . . .”  This is odd wording, as the case will certainly “drag on” to an appeal to the Second Circuit.  (A decision by the Second Circuit is perhaps a year or more away.)  Again, a district court decision, no matter how strongly worded, does not constitute a “major victory” for the prevailing party.

Sandoval (who, it must be said, posted his story quite quickly) also exaggerates the sweep of Google’s argument and the judge’s holding.  He writes, “Google held that the DMCA’s safe harbor provision protected it and other Internet service providers from being held responsible for copyright infringements committed by users.  The judge agreed.”  But Google argued only that it (not other providers) was protected, and protected only from user infringements it didn’t know about specifically.  That is the argument with which Judge Stanton agreed.

Perhaps these are minor infractions.  You be the judge.

FCC Votes for Reclassification, Dog Bites Man

Not surprisingly, FCC Commissioners voted 3 to 2 today to open a Notice of Inquiry on changing the classification of broadband Internet access from an “information service” under Title I of the Communications Act to “telecommunications” under Title II.  (Title II was written for telephone service, and most of its provisions pre-date the breakup of the former AT&T monopoly.)  The story has been widely reported, including posts from The Washington Post, CNET, Computerworld, and The Hill.

As CNET’s Marguerite Reardon counts it, at least 282 members of Congress have already asked the FCC not to proceed with this strategy, including 74 Democrats.

I have written extensively about why a Title II regime is a very bad idea, even before the FCC began hinting it would make this attempt.  I’ve argued that the move is on extremely shaky legal grounds, usurps the authority of Congress in ways that challenge fundamental Constitutional principles of agency law, would cause serious harm to the Internet’s vibrant ecosystem, and would undermine the Commission’s worthy goals in implementing the National Broadband Plan.  No need to repeat any of these arguments here.  Reclassification is wrong on the facts, and wrong on the law.

What is Net Neutrality?

Instead, I thought it would be useful to return to the original problem, which is last fall’s Notice of Proposed Rulemaking on net neutrality.  For despite a smokescreen argument that reclassification is necessary to implement the NBP, everyone knows that today’s NOI was motivated by the Commission’s crushing defeat in Comcast v. FCC, which held that “ancillary authority” associated with Title I did not give the agency jurisdiction to enforce its existing net neutrality policy.

Rather than request an en banc rehearing of Comcast, or appeal the case, or follow the court’s advice and return to Congress for the authority to enforce the net neutrality rules, the FCC has chosen in the name of expediency simply to rewrite the Communications Act itself.

Many metaphors have been applied to this odd decision.  I liken it to setting your house on fire to light your cigarette.  (You shouldn’t be smoking in the first place.)

Let me be clear, once again, that I am all for an open and transparent Internet.  I believe the packet-switching architecture is one of the key reasons TCP/IP has become the dominant data communications protocol (and will soon dominate voice and video).

Packet-switching isn’t the only reason the Internet has triumphed.  Perhaps the other, more important secrets to TCP/IP’s success are that it is a non-proprietary standard (so long, SNA, DECnet and OSI, and the corporate strategies their respective owners tried to pursue through them) and simple enough to be baked into even the least-powerful computing devices.  The Internet doesn’t care if you are an RFID tag or a supercomputer.  If you speak the language, you can participate in the network.

These features have made the Internet, as I first argued in 1998 in “Unleashing the Killer App,” an engine for remarkable innovation over the last ten years.

The question for me, as I wrote in Chapter 4 of “The Laws of Disruption,” comes down most importantly to one of institutional economics.  Who is best-suited, legal authority aside, to enforce the features of the Internet’s architecture and protocols that make it work so well?  The market?  Industry self-regulation?  A global NGO?  The FCC?  Or put another way, why is a federal government agency (limited, by definition, to enforcing its authority only within the U.S.) such a poor choice for the job, despite the best intentions of its leadership and the obviously strong work ethic of its staff?

To answer that, let’s back all the way up.  Net neutrality is a political concept overlaid on a technical and business architecture.  That’s what makes this debate both dangerous and frustrating.

For starters, it’s hard to come up with a concise definition of net neutrality, largely because it’s one of those terms like “family values” that means something different to everyone who uses it.  For me it’s become something of a litmus test—people who use it positively are generally hostile to large communications companies.  People who use it negatively are generally hostile to regulatory agencies.  A lot of that anger, wherever it comes from, seems to get channeled into net neutrality.

In fact the FCC doesn’t even use the term—they talk about the “open and transparent” Internet instead.

But here’s the general idea.  The defining feature of the Internet is that information is broken up into small “packets” of data which are routed through any number of computers on the world-wide network and then are reassembled when they reach their destination.

Up until now, with some notable exceptions, every participating computer relays those packets without knowing what’s in them or who they come from.  The network operates on a packet-neutral model: when one computer receives a packet, it looks only at where the packet is heading and sends it along, depending on traffic congestion at the time, to some other computer on the way just as quickly as it can.
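A toy model makes the neutrality point concrete.  In the sketch below (all names are mine and purely illustrative), the neutral router consults only the destination address; a non-neutral router is one that also peeks at the sender or the payload:

```python
# Minimal sketch of "packet-neutral" forwarding: the router consults
# only the destination (and current congestion), never the payload or
# the sender.
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes  # opaque to a neutral router

def neutral_next_hop(packet: Packet, routing_table: dict[str, str]) -> str:
    # Only the destination address matters; packet.src and
    # packet.payload play no role in the forwarding decision.
    return routing_table[packet.dst]

def non_neutral_next_hop(packet, routing_table, slow_lane):
    # The behavior the neutrality debate worries about: branching on
    # who sent the packet (or what it contains).
    if packet.src in slow_lane:  # e.g., a disfavored video site
        return "low-priority-queue"
    return routing_table[packet.dst]
```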

That’s still the model on which the Internet works.  The FCC’s concern is not with current practice but with future problems.  Increasingly, the agency sees a few dominant providers controlling the outgoing and incoming packets to and from consumers: the first and last mile.  So while the computers between my house and Google headquarters all treat my packets to Google and Google’s packets back to me in a neutral fashion, there’s no law that keeps Comcast (my provider) from opening those packets on their way in or out and deciding to slow down or speed up some or all of them.

(Well, the law of antitrust and unfair trade could in fact apply here, depending on how the non-neutral behavior was expressed and by whom.  See below.)

Why would they do that?  Perhaps they make a deal with Google to give priority to Google-related packets in exchange for a fee or a share of Google’s ad revenues.  Or, maybe they want to encourage me to watch Comcast programming instead of YouTube videos, and intentionally slow down YouTube packets to make those videos less appealing to watch.

Most of this is theoretical so far.  No ISP offers premium or “fast lane” service to individual applications.  Comcast, however, was caught a few years ago experimenting with slowing down the BitTorrent peer-to-peer protocol.  Some of Comcast’s most active customers were clogging the pipes sending and receiving very large files (mostly illegal copies of movies, it turns out).

When it was caught, the company agreed instead to stop offering “unlimited” access and to use more sophisticated network management techniques to ensure a few customers didn’t slow traffic for everyone else.  Comcast and BitTorrent made peace, but the FCC held hearings and sanctioned Comcast after the fact, leading to the court case that made clear the FCC has no authority to enforce its neutrality policies.

The simple-minded dichotomy of the ensuing “debate” leaves out some important and complicated technical details.  First, some applications already require and get “premium” treatment for their packets.  Voice and video packets have to arrive quickly and at steady intervals to maintain good quality, so Voice over IP telephone calls (Skype, Vonage, Comcast) get priority treatment, as do cable programming packets, which, after all, use the same connection to your home that the data uses.
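In scheduling terms, that kind of premium treatment is just a priority queue: latency-sensitive packets jump ahead of bulk data.  Here is a deliberately simple sketch; real quality-of-service machinery is far more elaborate:

```python
# Sketch of priority scheduling: voice and video frames are dequeued
# before bulk data, regardless of arrival order.
import heapq

VOICE, VIDEO, DATA = 0, 1, 2  # lower number = higher priority

queue = []
seq = 0  # arrival counter, used as a tie-breaker

def enqueue(priority: int, packet: str) -> None:
    global seq
    heapq.heappush(queue, (priority, seq, packet))
    seq += 1

enqueue(DATA, "web page chunk")
enqueue(VOICE, "VoIP frame")
enqueue(VIDEO, "IPTV frame")

while queue:
    _, _, packet = heapq.heappop(queue)
    print(packet)  # VoIP frame, then IPTV frame, then web page chunk
```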

Google, as one of the largest providers of outbound packets, has deals with some ISPs to locate Google-only servers in their hubs to ensure local copies of their web pages are always close by, a service offered more generally by companies such as Akamai, which caches copies of the most frequently-used sites to speed things up for everyone.  In that sense, technology is being used to give priority even to data packets, about which no one should complain.

Fighting over the Future

So the net neutrality fight, aside from leaving out any real appreciation either for technological or business realities, is really a fight about the future.  As cable and telephone companies invest billions in the next generation of technology—including fiber optics and next-generation cellular services–application providers fear they will be asked to shoulder more of the costs of that investment through premium service fees.

Consumer groups have been co-opted into this fight, and see it as one that pits big corporations against powerless customers who need outside advocates to save them from dangers they do not understand.  That increasingly quaint attitude, for one thing, grossly underestimates the growing power of consumers to effect change using the Internet itself (see:  Facebook et al.). Consumers can save themselves, thanks very much.

What is true is that consumers do not pay, and aren’t likely to be asked to pay, the true costs of broadband access, given the intense competition in major markets between large ISPs such as Comcast, AT&T, Verizon and others.  That is the source of anxiety for the application providers: they are seen as having more elasticity in pricing than end-users.

The existence of provider competition, however, also weighs heavily against the need for government intervention.  If an ISP interferes with the open and transparent Internet, customers will know and they will complain.  Ultimately they will find a provider that gives them full and unfettered access.  (There are plenty of interested parties who help consumers with the “know” part of that equation, but still, I fully support the principle of ISP transparency with regard to network management practices.  Few consumers would actually read the disclosures, and fewer still would understand them, but it’s still a good practice.)

If the market really does fail, or fails in significant local ways (rural or poor customers, for example), then some kind of regulatory intervention might make sense.  But it’s a bad idea to regulate ahead of a market failure, especially when dealing with technology that is evolving rapidly.  In the last ten years, as I argue in The Laws of Disruption, the Internet has proven to be a source of tremendous embarrassment for regulators trying to “fix” problems that shift under their feet even as they’re legislating.  Often the laws are meaningless by the time the ink is dry or, worse, inadvertently aggravate the very problems they were meant to solve.

Nevertheless, in October of last year the FCC proposed—in a 107-page document—six net neutrality rules that would codify what I described above and a number of peripheral, perhaps unrelated, ideas.  Right now the agency has only a net neutrality policy, and that policy, the D.C. Circuit Court of Appeals ruled, doesn’t constitute enforceable law.  Implicit in that rulemaking was the assumption that someone needed to codify these principles, that the FCC was that someone, and that the agency had the authority from Congress to be that someone.  (The court’s ruling made clear that the latter is not the case.)

There are good reasons to be skeptical that the FCC in particular is the right agency to solve this problem, even if it is a problem.  Through most of its existence the agency has been fixated on regulating a legal monopoly (the old phone company) and on managing what was once very limited broadcast spectrum, now largely supplanted by cable and more sophisticated technologies for managing the spectrum.

The FCC, recall, is the agency that watches broadcast (but not cable) television and issues fines for indecent content, an activity it engages in more, rather than less, even as broadcast becomes a trivial part of programming reception.  Congress has three times tried to give the FCC authority to regulate indecency on the Internet as well, but the U.S. Supreme Court has stopped all three.

So if the FCC were to be the “smart cop on the beat” as Chairman Genachowski characterized his view of net neutrality, how would the agency’s temptation to shape content itself be curbed?

Worse, no one seems to have thought ahead as to how the FCC would enforce these rules.  If I complain that my access is slow today and I believe that must mean my ISP is acting in a non-neutral fashion, the agency would have to look at the traffic and inside the packets in order to investigate my complaint.  Again, the temptation to use that information and to share it with law enforcement under the name of anti-terrorism or other popular goals would be strong—strong enough that it ought to worry some of the groups advocating for net neutrality laws as a placebo to keep the ISPs in line.

The Investment Effect

It should be obvious that the course being followed by the FCC – the enactment of net neutrality rules in the first place and the increasingly desperate methods by which it hopes to establish its authority to do so—will cast a pall over the very investments in infrastructure the FCC is counting on to achieve the worthy goals of the NBP.  If nothing else, the reclassification NOI will invariably end in some heavy-duty litigation, which is likely to take years to resolve.  Courts move even more slowly than legislators, who move more slowly than regulators, all of whom aren’t even moving compared to the speed of technological innovation.

How serious a drag on the markets will regulatory uncertainty prove to be?  For what it’s worth, New York Law School’s Advanced Communications Law & Policy Institute today issued an economic analysis of the Commission’s proposed net neutrality rules, arguing that their passage would cost as many as 604,000 jobs and $80 billion in GDP.  Matthew Lasar at Ars Technica summarizes the report, which I have not yet read.

But one doesn’t need sophisticated economic analysis to understand why markets are already reacting poorly to the FCC’s sleight-of-hand.  The net neutrality rules the FCC proposed in October would, depending on how the agency decided to enforce them, greatly limit the future business arrangements that broadband providers could offer to their business customers.

Application providers worry that the offer of “fast lane” service invariably means everything else will become noticeably slower (not necessarily true from a technical standpoint).  But in any case the limitation of future business innovations by providers is bound to discourage, at least to some extent, up-front investments in broadband, which are characterized by high fixed costs and a long payback.

Worse, the proposed rules would also apply to Internet access over cellular networks, which is still at a very early stage of development and has much more limited capacity.  Cellular providers have to limit access to video and other high-bandwidth applications just to keep the networks up and running.  (Some of those limits are the result of local regulators’ resistance to allowing investments in cell towers and other infrastructure.)  The proposed rules would require them not to discriminate against any applications, no matter how resource-intensive.  That simply won’t work.

Investors are worried that the hundreds of billions they’ve spent so far on fiber optics, cellular upgrades and cable upgrades, and the amount left to be spent to get the U.S. to 100 Mbps speeds in the next ten years, will be hard to recover if providers don’t have the flexibility to innovate new business models and services.

To Wall Street, the net neutrality rules are perceived not as enshrining a level playing field for the Internet so much as a land grab by content providers to ensure they are the only ones who can innovate with a free hand, pushing the access providers increasingly to a commodity business as, for example, long distance telephony has become.  Why should investors spend hundreds of billions to upgrade the networks if they won’t be able to make their money back?

Investors are also concerned more generally that the FCC will implement and enforce the proposed neutrality rules in unpredictable ways, bowing to lobbying pressure by the content companies even more in the future.  Up until now, the FCC has played no meaningful role in regulating access or content, and the Internet has worked brilliantly.  The networks the FCC does regulate–local telephone, broadcast TV–are increasingly unprofitable.

How would the FCC proceed if the rules are enacted and upheld?  The NPRM says only that the Commission would investigate charges of non-neutral behavior “on a case-by-case basis.”  That approach is understandable when technology is changing rapidly, but at the same time it introduces even more uncertainty and more opportunities for regulatory mischief.  Better to wait until an identifiable problem arises, one that has an identifiable solution a regulatory agency can implement more efficiently than any other institution.

It’s possible, of course, that access providers, especially in areas where there is little competition, could use their leverage to make bad business decisions that would harm consumers, content providers, or both.  But that risk could be adequately covered by existing antitrust law, or, if necessary, by FCC action when the problem actually arises.

The problem isn’t here yet, other than a handful of anecdotal problems dealt with quickly and without the need for federal intervention.  Again, the danger of rulemaking ahead of an actual failure of the market is acute, especially when one is dealing with an emerging and fast-changing set of technologies.

The more the FCC pushes ahead on the net neutrality rules, even in the face of a court decision that it has no authority to do so, the more irrational the agency appears to the investor community.  And given the almost complete reliance for the broadband plan on private investment, this seems a poor choice of battles for the FCC to be spending its political capital on now.

Preserving the Ecosystem

There’s a forest among all these trees.  So far, the Internet economy has thrived on a delicate balance between infrastructure and the innovation of new products and services that Internet companies build on top of it.  If the infrastructure isn’t constantly upgraded in speed, cost, and reliability, entrepreneurs won’t continue to spend time and money building new products.

At the same time, if infrastructure providers don’t think the applications will be there, there’s no reason to invest in more and better capacity.  So far, consumers have shown a voracious appetite for both capacity and applications, in part because there’s been little to make them doubt more of both are always coming.

Given the long lead time for capital investments, the infrastructure providers have to bet pretty far into the future without a lot of information.  Sometimes they overbuild, or build ahead of demand (this has happened at least twice in the last ten years); sometimes (in the case of cellular), the applications arrive faster than the capacity after a long period of relative quiet.   3G support was an industry embarrassment until the iPhone finally put it to good use.

By and large the last decade has seen remarkable success in getting the right infrastructure to the right applications at the right time, as evidenced by the fact that the U.S. is still the leader by far in Internet innovation.   The U.S., despite its geography and economic diversity, is also still the leader in broadband access, with availability to over 96% of U.S. residents.  According to the latest OECD data, the U.S. has twice the number of broadband subscribers as the next-largest market.  Our per capita adoption is lower, as are our broadband speeds—both sources of understandable concern to the authors of the NBP.

The larger issue here is that regulatory intervention, or even the looming possibility of it, can throw a monkey wrench into all that machinery, and make it harder to make quick adjustments when one side gets too far ahead of the other.  Once the machine stalls, restarting it may be difficult if not impossible.  The Internet ecosystem works remarkably well.  Even regulatory changes intended to smooth out inefficiencies can wind up having the opposite effect, sometimes disastrously so.

That above all else should have given the FCC pause today in its vote.  Apparently not.

EBay Wins Important Victory Against Tiffany

As the Wall Street Journal is already reporting, today eBay secured an important win in its long-running dispute with Tiffany over counterfeit goods sold through its marketplace.  (The full opinion is available here.)

I wrote about this case as my leading example of the legal problems that appear at the border between physical life and digital life, both in “The Laws of Disruption” and a 2008 article for CIO Insight.

To avoid burying the lede, here’s the key point:  for an online marketplace to operate, the burden has to be on manufacturers to police their brands, not the market operator.  Any other decision, regardless of what the law says or does not say, would effectively mean the end of eBay and sites like it.

Back to the beginning.  Tiffany sued eBay over counterfeit Tiffany goods being sold by some eBay merchants.  The luxury goods manufacturer claimed eBay was “contributorily” liable for trademark infringement—that is, for confusing consumers into thinking that non-Tiffany goods were in fact made by Tiffany.

Counterfeit items have been a long-standing problem for electronic commerce, and as one of the first and largest online marketplaces, eBay has unsurprisingly found itself in the cross-hairs of unhappy manufacturers.  While the company has generally won these lawsuits, it lost an important case in France at about the same time the trial court in the Tiffany case ruled in its favor in 2008.

(A related problem, explicit in the French case, is that luxury goods manufacturers are unhappy in general with secondary markets, given the tight, sometimes illegal, control they exert over primary channels.  Electronic commerce doesn’t respect local territories, fixed pricing, or regulated discounting, which is perhaps the bigger headache for companies such as Tiffany’s.)

The struggle for courts is to apply traditional law to new forms of behavior.  Many of the opinions in these cases tie themselves in knots trying to figure out just what eBay actually is—is it a department store, where a variety of goods from different manufacturers are sold?  Is it a flea market, where merchants pay for space to sell whatever they want?  Or is it a bulletin board at a local grocery store, where individuals offer products and services?

Of course eBay is none of these things.  But courts must apply the law they have, and the case law for trademark infringement is based on these kinds of outdated classifications.  In the “common law” tradition, judges decide cases by analogy to existing case law.  That means when there isn’t a good analogy to be found, the law is often thrown into confusion for a long period while new analogies get worked out.  Disruptive technologies create such discontinuities in the law, particularly for the common law.

At the heart of these decisions is a question of control.  The more the marketplace operator controls the goods that are sold, the more likely they will be found liable for all manner of commercial misconduct.  (Tiffany also sued for false advertising, for example, claiming that eBay ads placed on Google searches promising Tiffany goods at low prices on its site were false, given that some of the goods were counterfeit.  Of course some of the goods were NOT counterfeit.)

A department store operator has complete control over the source of merchandise, and so would be held liable for selling counterfeits.  A bulletin board host has no control, and so would not be held liable.  Flea market operators sit somewhere in between, and depending on the extent and obviousness of the counterfeiting that takes place, operators are sometimes held liable along with the counterfeiters themselves.

The eBay marketplace sits somewhere between the two extremes.  On the one hand, eBay can and does review the text of listings prior to their posting, and provides extensive services to merchants, including listing tools, postage and packaging, and payment management through PayPal.  It can and does respond to complaints by buyers of misrepresented goods (condition and source, for example) and by trademark holders, who are given extensive tools to review listings to check for counterfeits.  And it charges the sellers for these services; indeed, that is the source of its revenue.

On the other hand, eBay never has physical possession of the goods that are sold through its marketplace—indeed, it never sees them.  That’s an essential feature of the company’s success—eBay couldn’t handle millions of listings in a limitless range of categories if merchants actually sent the goods to eBay during the course of an auction, the way high-end auctioneers such as Sotheby’s and Christie’s would do.

EBay (or buyers for that matter) can’t inspect the goods (other than through photos and text descriptions) prior to purchase, and even if it could the company doesn’t have the expertise to evaluate authenticity and condition of everything from buttons to Rolex watches to cars.  That’s why eBay’s buyer feedback system is so important to the efficient operation of the marketplace.

In today’s decision, the Second Circuit Court of Appeals in New York mostly affirmed the trial court’s holdings.  It agreed that for eBay to be liable for the trademark infringements of its misbehaving sellers, the company had to have actual knowledge of their activities and still continue doing business with them.

There was substantial evidence that eBay did no such thing, including direct policing by eBay as well as the tools provided to manufacturers to review and flag suspicious listings.  As the court noted, eBay has plenty of incentives to ensure counterfeit goods stay off the site, for unhappy buyers mean the loss of liquidity and the loss of any competitive advantage.

Tiffany objected to the fact that the eBay tools put the burden on trademark holders rather than marketplace operators to ensure the authenticity of the goods.  But the court agreed with eBay that such is indeed the burden of a trademark, a valuable and exclusive right given to manufacturers to encourage the creation of consistent and quality goods and services.  Since eBay acted on actual knowledge of infringement and could not be said to have willfully ignored the illegal behavior of some merchants, the company had fulfilled its legal obligation to trademark holders.

The opinion is, as to be expected, largely a discussion of legal precedent and the law of trademark.  That, after all, is the role of an appellate court—not to retry the case, but to review the trial judge’s findings in search of legal error.  The decision by the appellate court will serve as a powerful precedent for eBay and other e-commerce sites in the future.  (Tiffany says it may appeal to the U.S. Supreme Court, but it’s unlikely for many reasons that the Court would take the case.)

One important feature of the case that is not discussed directly in the appellate decision, however, is worth highlighting.  Though courts rarely say so explicitly, an important factor in deciding cases has to do with the practical limits of the remedy requested by a plaintiff, in this case Tiffany’s.  Given what eBay already does to police counterfeit goods, it’s hard to see what Tiffany’s actually wanted the company to do—that is, what it wanted the courts to order eBay to do had it won the lawsuit.

For aside from money damages, the purpose of a lawsuit, and the reason the taxpayers fund the legal system, is that court decisions let everyone know which behaviors are acceptable and which are not, and how to correct the latter.  Had eBay lost, it would have had to pay damages, but more to the point, the loss would have sent a message to eBay and others to change their behavior to avoid future damage claims.

So what would a loss have signaled?  In essence, eBay would have had either to agree not to sell any Tiffany goods (a limit other brands would have demanded as well) or to verify and authenticate all items before allowing them to be listed on the site.  That would have been the only way to satisfy Tiffany that its view of the law was being followed.

That remedy, though theoretically possible, would have meant the end of eBay and sites like it (including Amazon Marketplace).  It would in essence have said that any auction or other third-party sales model other than the high-end Sotheby’s or Christie’s approach is inherently illegal.  For there would have been nothing left to distinguish eBay’s low-cost approach to buying and selling—all of the efficiencies would have been eaten up by the need to authenticate before the auction began.

Such a remedy would have been economically inefficient—it would, to use Ronald Coase’s terminology, have introduced a great deal of unnecessary transaction costs.  For most of the items on eBay are accurately described, and for them the cost of authentication would be a waste.  eBay practices in essence a post-auction model of authentication.  If the buyer doesn’t agree with the description of the item once they receive it, eBay will correct the problem after the fact.
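Here is a back-of-the-envelope version of that trade-off.  The numbers are invented purely for illustration:

```python
# Hypothetical comparison of the two authentication models.
# Pre-auction authentication pays a cost on every listing; eBay's
# post-auction model pays only on the small fraction of disputed items.
listings = 1_000_000
counterfeit_rate = 0.01       # assume 1% of listings are bad
cost_to_authenticate = 20     # per listing, up front
cost_to_resolve = 100         # per dispute, after the fact

pre_auction = listings * cost_to_authenticate
post_auction = listings * counterfeit_rate * cost_to_resolve

print(f"Authenticate everything first: ${pre_auction:,.0f}")   # $20,000,000
print(f"Fix disputes after the fact:   ${post_auction:,.0f}")  # $1,000,000
```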

That’s much more efficient, but it does introduce cost to brand holders such as Tiffany’s.  A buyer who gets a counterfeit good may think less not only of the seller and of eBay but also of Tiffany’s.  Worse, the buyer who doesn’t realize they’ve received a counterfeit good may attribute its poorer quality to Tiffany’s, another form of damage to the mark.

The court’s decision implicitly weighs these costs and concludes that eBay’s model is, overall, the more efficient use of resources.  The brand owner can always sue the eBay sellers directly, of course, and can use the tools provided by eBay to reduce the number of bad listings that get posted in the first place.  Those enforcement costs, the court implies, are less than the authentication costs of Tiffany’s proposed remedy.  Faced with two possible outcomes, the court chose the more economically efficient.

Under the “law and economics” approach to legal decision-making, that finding would have been made explicit.  Some appellate judges, including Richard Posner and Frank Easterbrook, would have actually done the math as best they could from the record.

In any case, the finding seems economically sound.  Meanwhile, the law is still struggling mightily to catch up to reality.

Google v. Everyone

I had a long interview this morning with the Christian Science Monitor.  As in many of the interviews I’ve had this year, the subject was Google.  At the increasingly congested intersection of technology and the law, Google seems to be involved in most of the accidents.

Just to name a few of the more recent pileups, consider the Google books deal, net neutrality and the National Broadband Plan, Viacom’s lawsuit against YouTube for copyright infringement, Google’s very public battle with the nation of China, today’s ruling from the European Court of Justice regarding trademarks, AdWords, and counterfeit goods, the convictions of Google executives in Italy over a user-posted video, and the reaction of privacy advocates to the less-than-immaculate conception of Buzz.

In some ways, it should come as no surprise to Google’s legal counsel that the company is involved in increasingly serious matters of regulation and litigation.  After all, Google’s corporate goal is the collection, analysis, and distribution of as much of the world’s information as possible, or, as the company puts it, “to organize the world’s information and make it universally accessible and useful.”  That’s a goal it has been wildly successful at in its brief history, whether you measure success by use (91 million searches a day) or market capitalization ($174 billion).

As the world’s economy moves from one based on physical goods to one driven by information flow, the mismatch between industrial law and information behavior has become acute, and Google finds itself a frequent proxy in the conflicts.

As I argue in “The Laws of Disruption”, the unusual economic properties of information make it a poor fit for a body of law that’s based on industrial-era assumptions about physical property.  That’s not to say there couldn’t be an effective law of information, only that the law of physical property isn’t it.  Particularly not when industrial law assumes that the subject of any conflict or effort to control (the res as they say in legal lingo) is visible, tangible, and unlikely to cross too many local, state, or national borders—and certainly not every border at the same time, all the time.

To see the mismatch in action, consider two of Google’s on-going conflicts, both in the news this week:  Google v. China and Google v. Viacom.

Google v. China

In 2006, Google made a Faustian bargain with the Chinese government.  In exchange for permission to operate inside the country, Google agreed to substantially self-censor search results for topics (politics, pornography, religion) that the Chinese government considered dangerous.  The company had strong financial motivations for gaining a foothold in the astronomically fast-growing Chinese Internet market, of course, but also had a genuine belief that giving Chinese users access to the vast majority of its indexed information had the potential to encourage fewer restrictions over time.

Apparently the result was the opposite, with the government tightening, rather than loosening the reins.  Google’s discomfort was compounded by the revelation in January that widespread hacking and phishing scams had penetrated the Gmail accounts of several Chinese dissidents, leading the company to announce it would soon end its censorship of Chinese searches.  (It also added encryption technology to Gmail and, it is widely believed, began working closely with the National Security Agency to help identify the sources of the attacks.)  Though Google has not claimed the attacks were the work of the Chinese government or entities under its control, the connection was hard to miss.  Google is hacked, Google decides to end cooperation with the government.

This week, the company made good on its promise by closing its search site in China and rerouting searches from there to its site in Hong Kong.  As a result of the long occupation of Hong Kong by Britain, which ended in 1997 when the U.K.’s “lease” expired, Hong Kong maintains special legal status within China.  Searches originating in Hong Kong are not censored, and Hong Kong appears to be largely outside China’s “great firewall,” which blocks undesirable information including YouTube and Twitter.

For residents of the mainland, however, the move is a non-event.  China quickly applied the filters that Google had applied on behalf of the government for searches originating inside the country.  So Google searches in China are still censored—only now Google isn’t doing the censoring.  The damage to the company’s relationship with the Chinese government, meanwhile, has been severe, as has collateral damage to the relationship between China and the U.S. government.  The story is by no means over.

Google v. Viacom

Also in the last week, a number of key documents were released by the court that is hearing Viacom’s long-running copyright infringement case against Google’s YouTube.  The case, which began around the same time that Google made its deal with China, seeks $1 billion in damages from copyright violations against Viacom content perpetrated by YouTube users, who posted everything from short clips to music videos to entire programs, including “South Park” and “The Daily Show.”

Under U.S. law, Internet service providers are not liable for copyright infringement perpetrated by their users, provided the service provider is not aware of the infringement and responds “expeditiously” to takedown requests sent by the copyright holder.  (See Section 512 of the Digital Millennium Copyright Act, http://www.copyright.gov/legislation/dmca.pdf.)  Viacom claims YouTube is not entitled to immunity because it had actual knowledge of the infringing activities of its users.

Discovery in the case has revealed some warm if not smoking guns—guns that the parties resisted being made public.  (See Miguel Helft’s as-always excellent coverage in the New York Times, and also coverage in the Wall Street Journal.)  Viacom claims it has found a number of internal YouTube emails that make clear the company knew of widespread copyright infringement by its users, though Google characterizes those messages as having been taken out of context.

Perhaps more interesting has been the embarrassing revelation that many (though still a minority) of the Viacom clips, from MTV and Comedy Central programming for example, were posted by Viacom itself.  Indeed, these noninfringing posts were often put on YouTube under the guise of being posted by unaffiliated users, in the hope of giving the clips more credibility!

These “fake grassroots” accounts, as Viacom marketing executives referred to them, made use of as many as 18 outside marketing agencies.  Most embarrassing is that Viacom’s own legal team has now admitted that hundreds of the YouTube postings it initially claimed as infringing were actually authorized postings by Viacom or its affiliates, disguised to look like unauthorized ones.

(Since 2007, Google has somewhat quieted the concerns of copyright holders over YouTube by introducing filtering technologies that let copyright holders supply reference files that can be digitally compared to weed out infringing copies.  This is an example, for better and for worse, of what Larry Lessig has in mind when he talks of implementing legal rules through software “code.”  Better because it avoids some litigation, worse because the code may be overprotective—filtering out uses that might in fact be legal under “fair use.”)
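For the technically curious, the mechanics work roughly like this.  Below is a minimal sketch of reference-file matching, assuming fixed-size chunks and an exact hash; YouTube’s actual system uses perceptual fingerprints designed to survive re-encoding and editing, and nothing here is drawn from its implementation.

```python
import hashlib
from typing import Dict, List, Optional

def fingerprint(chunk: bytes) -> str:
    """Reduce a chunk of media data to a short identifier.  A real system
    would use a perceptual hash that survives re-encoding, cropping, and
    volume changes; an exact hash keeps this sketch self-contained."""
    return hashlib.sha256(chunk).hexdigest()[:16]

def build_reference_index(references: Dict[str, List[bytes]]) -> Dict[str, str]:
    """Map each chunk fingerprint back to the copyrighted work it came from."""
    index: Dict[str, str] = {}
    for work, chunks in references.items():
        for chunk in chunks:
            index[fingerprint(chunk)] = work
    return index

def scan_upload(upload: List[bytes], index: Dict[str, str],
                threshold: float = 0.5) -> Optional[str]:
    """Flag the upload if enough of its chunks match a single reference work."""
    if not upload:
        return None
    hits: Dict[str, int] = {}
    for chunk in upload:
        work = index.get(fingerprint(chunk))
        if work is not None:
            hits[work] = hits.get(work, 0) + 1
    for work, count in hits.items():
        if count / len(upload) >= threshold:
            return work  # likely an infringing copy of this work
    return None
```

Note the design constraint: the system can flag only what it has a reference file for.  That limitation matters again in the Italian case discussed below, where there is no reference file for “violates someone’s rights.”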

Google v. Everyone

What do the two examples have in common?  Both highlight the difficulty of judging the use of information with traditional legal tools of property and borders.

In the first example, China considers some forms of information to be dangerous.  To some extent, in fact, all governments restrict the flow of information in the name of national security, consumer safety, or other government aims.  China (along with Burma and Iran) sits at one end of the control spectrum, while the U.S. and Europe are at the other.

Google believes, as do many information economists, that more information is always better than less, even when some of it is of poor quality, outright wrong, or espouses dangerous viewpoints.  Google’s view was perhaps best put by Oliver Wendell Holmes, Jr. in his 1919 dissent in Abrams v. United States, 250 U.S. 616 (1919):

[T]he ultimate good desired is better reached by free trade in ideas…[T]he best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out. That at any rate is the theory of our Constitution.

But even legal systems that believe in “the marketplace of ideas” as the preferred forum for determining information value can’t always resist the temptation to put a finger on the scales.  Congress and some states have tried and failed repeatedly to censor “indecent” content on the Internet (fortunately, the First Amendment puts a stop to such efforts, but, as John Perry Barlow says, in cyberspace the First Amendment is a local ordinance).  Just this week, Australia came under fire for proposals to beef up the requirements it places on Internet service providers to censor material deemed harmful to children.  And the Google convictions in Italy last month suggest that not even Europe is fully prepared to let the marketplace of ideas operate without the worst kind of ex post facto oversight.

Likewise in the Viacom litigation, it’s clear, regardless of the final determination of the legal arguments, that some illegal uses of information are nonetheless valuable to those whose interests the law supposedly protects.  Viacom can make all the noise it wants about “pirates” “stealing” its “intellectual property,” as if this were the 1800s and the Barbary Coast.  Those who posted copyrighted material to YouTube were not doing it with the intent of harming Viacom—their intent was just the opposite.  What’s really going on is that users—fans!—who value the programming were using YouTube to share and spread their enthusiasm with others.

Yet intent plays no part in copyright infringement.  The law assumes that, as with physical property, any use not authorized by the “owner” of the information is, with few exceptions, likely to be financially detrimental.  That is certainly what Viacom claims in the litigation.  But the company’s own behavior tells a different story.  Why else would it post its own material and pretend to be regular users?  Put another way, why is information posted by an anonymous fan more valuable to Viacom than information posted by the company itself?  What is it about unsanctioned sharing that communicates valuable information to the recipient?

By posting the clips, YouTube users added their own implicit and explicit endorsement to the content.  The fact that Viacom marketing executives pretended to be fans themselves demonstrates the principle that the more information is used, the more valuable it can become.  That’s not always the case, of course, but here the sharing clearly adds value—in fact, it adds new information to the content (the endorsement) that benefits Viacom.

Whether that added value is outweighed by lost revenue to Viacom from users who, having seen the content on YouTube, didn’t watch it (or the commercials that fund it) on an authorized channel ought to be a key consideration in the court’s determination, but in fact it has almost no place in the law of copyright.  Yet Viacom obviously saw that value itself, or it wouldn’t have posted its own clips pretending to be fans of the programming.

Productive v. Destructive Use

Both these cases highlight why traditional property ideas don’t fit well with information uses.  What would work better?  I present what I think is a more useful framework in the book, a view so far absent from the law of information.  That framework would analyze information uses not under archaic laws of property but would instead weigh each use as “productive,” “destructive,” or both, and determine whether, on the whole, the net social value created by the use is positive.  If so, the use should not be treated as illegal, whatever the current law says.

What do I mean?  Since information can be used simultaneously by everyone and, after use, is still intact if not enhanced by the use, it’s really unhelpful to think about information being “stolen” or, in the censorship context, being “dangerous.”   Rather, the law should evaluate whether a use adds more value to information than it takes away.  Information use that adds value (reviewing a movie) is productive and should be legal.  A use that only takes value away (for example, identity theft and other forms of Internet fraud) is destructive and should be illegal.  Uses that do both (copyright infringement in the service of promoting the underlying content) should be allowed if the net effect is positive.
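To make the test concrete, here is a toy model in code.  The structure and, more important, the numbers are invented for illustration; measuring value added and value destroyed in practice is exactly the unsolved problem taken up below.

```python
from dataclasses import dataclass

@dataclass
class InformationUse:
    description: str
    value_added: float      # promotion, endorsement, reviews, new metadata
    value_destroyed: float  # displaced sales, fraud losses, cleanup costs

    def net_value(self) -> float:
        return self.value_added - self.value_destroyed

    def verdict(self) -> str:
        if self.value_destroyed == 0:
            return "productive: legal"
        if self.value_added == 0:
            return "destructive: illegal"
        return "mixed: " + ("allow" if self.net_value() > 0 else "disallow")

# Illustrative numbers only; the hard problem is measurement.
uses = [
    InformationUse("movie review", 10.0, 0.0),
    InformationUse("identity theft", 0.0, 100.0),
    InformationUse("fan-posted clip of a show", 8.0, 3.0),
]
for use in uses:
    print(f"{use.description}: net {use.net_value():+.1f} ({use.verdict()})")
```

The code is trivial, of course; everything hard hides in the two input numbers.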

Under the productive/destructive model, Google’s actions in entering and now exiting from China make more sense as both policy and business decisions.  Censoring information is destructive in that it gives users the appearance of complete access where in fact the access has been limited.  That harm should be weighed against the benefit of providing information that otherwise wouldn’t have been available at all to Chinese users.

That the government became more rather than less concerned about Google over time might imply that Google had gotten the balance right—that is, that the Chinese government was increasingly aware that even what it originally thought of as benign information could have the kind of transformative effects it wanted to avoid.

Is China wrong to censor “dangerous” information?  Economically, the answer is yes.  There is a strong correlation between countries on the “freer” end of the censorship spectrum and those that have gained the most financially from the spread of information technology.  The more information there is, the more value gets added by its use, value that is allocated (roughly, sometimes poorly) among those who added it.

Likewise, an analysis of the posting of Viacom clips on YouTube should weigh the productive value of information sharing (promotion and endorsement) against the destructive aspects–lost revenue from paying viewers on an authorized channel supported by cable fees, advertising sponsorship, and purchased copies in whatever media.

Under that kind of analysis, it might turn out that Viacom lost little and gained a great deal from the unpaid services of its fans, and that in fact any true accounting would have credited the fans with $1 billion in generated value rather than the other way around.  Or maybe it was a wash.  But to consider only the lost revenue, particularly using the crazy method of modern copyright law (each viewed clip is counted as a lost sale), is certain to misjudge the true extent of the harm, if any.
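To see how far apart the two accountings can be, consider a back-of-the-envelope comparison.  Every input below is invented; the only real figures are the statutory ranges, which under 17 U.S.C. § 504(c) run from $750 to $30,000 per infringed work (up to $150,000 for willful infringement), regardless of actual harm.

```python
# Hypothetical numbers throughout, except the statutory floor.
STATUTORY_MIN = 750             # per infringed work, 17 U.S.C. 504(c)

works_infringed = 1_000         # distinct clips in the complaint (invented)
views_per_work = 5_000          # views per clip (invented)
lost_sale_rate = 0.02           # share of views displacing a paid view (invented)
revenue_per_view = 0.25         # ad/cable revenue per authorized view (invented)
promo_value_per_view = 0.05     # promotional value per view (invented)

views = works_infringed * views_per_work
statutory_floor = works_infringed * STATUTORY_MIN
naive_loss = views * revenue_per_view   # every view counted as a lost sale
estimated_loss = views * lost_sale_rate * revenue_per_view
promo_gain = views * promo_value_per_view

print(f"statutory floor:          ${statutory_floor:>12,.0f}")
print(f"'every view a lost sale': ${naive_loss:>12,.0f}")
print(f"plausible lost revenue:   ${estimated_loss:>12,.0f}")
print(f"promotional gain:         ${promo_gain:>12,.0f}")
print(f"net harm (if any):        ${estimated_loss - promo_gain:>12,.0f}")
```

On these made-up numbers, the fans generated ten times more promotional value than the revenue they displaced, while the statutory floor alone dwarfs any plausible measure of actual loss.  Change the assumptions and the sign flips; the point is that the statute never asks.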

A lingering problem in both these examples is the difficulty of determining both the quality and quantity of the productive and destructive uses of the information in question.   How much harm did Google censoring cause?  How much value did YouTube users generate?

We don’t know, not because the answers aren’t knowable but because the tools for making such determinations are so far very primitive.  Traditional rules of accounting follow the industrial assumptions of physical property—that is, if I have it then you don’t, and once I’ve used it, it’s gone or at least greatly depleted—assumptions that information doesn’t follow.  Accounting makes little or no allowance for the fact that information use can be non-diminishing, or even productive.

So how would we measure the harm to Chinese Internet users from the censored information, or the value of the information they could get before Google left town?  How would we measure the value of “viral” marketing of Viacom programming posted by real (as opposed to “fake grassroots”) fans?  How would we measure the actual losses Viacom suffered—not the statutory damages it claims under copyright law, which are surely far too generous?

Well, one problem at a time.  First let’s change the rhetoric about information use, positive and negative, from the language of property to a language better suited to a global, network economy.  If we do, the metrics will invent themselves.

The Italian Job: What the Google Convictions are Really About

I was pleased to be interviewed last night on BBC America World News (live!) about the convictions of three senior Google executives by an Italian court for privacy violations.  The case involved a video uploaded to Google Videos (before the acquisition of YouTube) that showed the bullying of a person with disabilities. (See “Larger Threat is Seen in Google Case” by the New York Times’ Rachel Donadio for the details.)

Internet commentators were up in arms about the convictions, which can’t possibly be reconciled with European law or common sense.  The convictions won’t survive appeal, and the government knows that as well as anyone; if they somehow stood, it would mean the end of the Internet in Italy, if nothing else.  Still, the case is worth worrying about, for reasons I’ll make clear in a moment.

But let’s consider the merits of the prosecution.  Prosecutors bring criminal actions because they want to change behavior—behavior of the defendant and, more important given the government’s limited resources, of others like him.  What behavior did the government want to change here?

The video was posted by a third party.  Within a few months, the Italian government notified Google of its belief that the video violated the privacy rights of the bullying victim, and Google took it down.  The company cooperated in helping the government identify who had posted it, which in turn led to the bullies themselves.

The only thing the company did not do was screen the video before posting it.  The Google executives convicted in absentia had no personal involvement with the video.  They were prosecuted for what the company did not do, and for what they personally did not do.

So if the prosecution stands, it leads to a new rule for third-party content: to avoid criminal liability, company executives must personally ensure that no hosted content violates the rights of any third party.

In the future, the only way employees of Internet hosting services of all kinds could avoid criminal prosecution would be to pre-screen all user content before putting it on their websites.  And pre-screen it for what?  Any possible violation of any possible right.  So not only would they have to review the content with an eye toward the laws of every possible jurisdiction, they would also need to obtain releases from everyone involved, and to ensure those releases were legally binding.  For starters.

It’s unlikely that such filtering could be done in an automated fashion. It is true that YouTube, for example, filters user postings for copyright violations, but that is only because the copyright holders give them reference files that can be compared. The only instruction this conviction communicates to service providers is “don’t violate any rights.” You can’t filter for that!

The prosecutor’s position in this case is that criminal liability is strict—that is, that it attaches even to third parties who do nothing beyond hosting the content.

If that were the rule, there would of course be no Internet as we know it. No company could possibly afford to take that level of precaution, particularly not for a service that is largely or entirely free to users. The alternative is to risk prison for any and all employees of the company.

(The Google execs got sentences of six months in prison each, but they won’t serve them no matter how the case comes out. In Italy, sentences of less than three years are automatically suspended.)

And of course that isn’t the rule.  Both the U.S. and the E.U. wisely grant immunity to services that simply host user content, whether it’s videos, photos, blogs, websites, ads, reviews, or comments. That immunity has been settled law in the U.S. since 1996 and the E.U. since 2000. Without that immunity, we simply wouldn’t have–for better or worse–YouTube, Flickr, MySpace, Twitter, Facebook, Craigslist, eBay, blogs, user reviews, comments on articles or other postings, feedback, etc.

(The immunity law, as I write in Law Five of “The Laws of Disruption,” is one of the best examples of the kind of regulating that encourages rather than interferes with emerging technologies and the new forms of interaction they enable.)

Once a hosting service becomes aware of a possible infringement of rights, most jurisdictions require, to preserve immunity, a reasonable investigation and (assuming there is merit to the complaint) removal of the offending content.  That, for example, is the “notice and takedown” regime in the U.S. for content that violates copyright.
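In code, the host’s side of that regime reduces to something like the sketch below.  This is my own simplification, not the statutory procedure: counter-notification, deadlines, and the meaning of “expeditiously” are all omitted or stubbed out.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Notice:
    content_id: str
    claimed_work: str

@dataclass
class HostedItem:
    uploader: str
    appears_infringing: bool  # stands in for a real (and human) investigation

def handle_notice(notice: Notice, store: Dict[str, HostedItem]) -> str:
    """To keep its immunity, the host must investigate the complaint and,
    if it has merit, remove the content promptly."""
    item = store.get(notice.content_id)
    if item is None:
        return "nothing to do: content already removed"
    if not item.appears_infringing:
        return "complaint rejected: no apparent infringement"
    del store[notice.content_id]  # remove "expeditiously"
    return f"removed: uploader {item.uploader} notified of the claim"

# Example: a single hosted video and a takedown notice against it.
store = {"v123": HostedItem("user42", True)}
print(handle_notice(Notice("v123", "The Daily Show clip"), store))
```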

The government in this case knows the rule as well as anyone.  This prosecution is entirely cynical—the government neither wants to nor intends to win on appeal.  It was brought to give the appearance of doing something in response to the disturbing contents of the video (the actual perpetrators and the actual poster have already been dealt with). Google in this sense is an easy target, and a safe one in that the company will vigorously fight the convictions until the madness ends.

And, not unrelatedly, it underscores a message the Italian government has been sending any way it can to those forms of media it doesn’t already control—that it will use whatever means are at its disposal, including the courts, to intimidate sources it can’t yet regulate.

So in the end it isn’t a case about liability on the Internet so much as a case about the power of new media to challenge governments that aren’t especially interested in free speech.

Internet pundits are right to be outraged and disturbed by the audacious behavior of the government.  But they should be more concerned about what this case says about freedom of the press in Italy, and less about what it says about the future of liability for content hosts.

And what it says about the Internet as a powerful, emerging form of communication that can’t easily be intimidated.

Protecting consumers from Moore’s Law: CNET


I write today on CNET News.com (see “FTC’s new strategy:  kick ’em when they’re down”) that the FTC’s decision yesterday to attack Intel seems oddly timed.

Regular readers of this blog will recall that only a month ago, I wrote that Intel’s settlement of long-standing disputes with rival AMD (see “The Intel/AMD Settlement:  Watch What Happens”) was likely to mean the end of government-sponsored litigation against Intel, or at least a toning down of the rhetoric.  I was, clearly, wrong.

It’s hard to know the real background here, but piecing together what has been reported, it appears the FTC and Intel were close to resolving issues related to how the company sells CPU chips for personal computers when, perhaps at the urging of Nvidia and other graphics processing unit makers, the FTC began looking at the GPU market as well.  Intel flinched, the FTC got mad and filed a complaint that recites all over again the issues raised in most of the other litigation, plus the new GPU complaints.

Hell hath no fury, it seems, like a regulator scorned.

Aside from the addition of the GPU complaints, there are several important differences between the FTC’s action and the rest of the pending or already-completed litigation.  Most disturbing is the proposed remedy.  Instead of money damages and fines, the FTC proposes, should it make its case, to dramatically redesign the way Intel–and therefore the rest of the semiconductor industry–does business.  Some of the relief the agency is seeking is, truly, draconian.  Intel would essentially be run by an outside monitor, and would need FTC pre-approval for most transactions and even advertising.

The FTC is charged with protecting consumers from fraudulent practices–false advertising, for example, inadequate cigarette warnings, or misleading terms in credit card applications.  It’s hard to see what expertise the agency has to offer in the chip market, which affects consumers only indirectly and after the fact.  The likelihood that its actions here will help consumers seems very low.

It’s also hard to see what the harm to consumers (harm to competitors aside) can be.  As I write in The Laws of Disruption, the continued operation of Moore’s Law means that computing power gets faster, cheaper, and smaller all the time–indeed, on a predictable schedule.  The PS3 that now sells for $299 is the rough equivalent, in computing power, of enough early-era computers to fill the state of Washington.  Today’s cell phones have more processing power than yesterday’s supercomputers.  And so on.
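The compounding is easy to underestimate.  Here is a back-of-the-envelope sketch, assuming the common eighteen-month doubling rule of thumb (the actual period has varied by era and technology):

```python
def cost_per_unit_compute(years: float, start_cost: float = 1.0,
                          doubling_months: float = 18.0) -> float:
    """Relative cost of a fixed amount of computing power after `years`,
    assuming capability doubles every `doubling_months` at roughly
    constant cost (the usual Moore's Law rule of thumb)."""
    doublings = years * 12 / doubling_months
    return start_cost / (2 ** doublings)

for years in (3, 6, 9, 15, 30):
    print(f"after {years:2d} years: "
          f"{cost_per_unit_compute(years):.6f}x the original cost")
```

Run out thirty years, the same computing power costs roughly a millionth of its original price, which makes “harm to consumers” in this market hard to locate.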

Well, the FTC replies, maybe if Intel didn’t have a monopoly on PC CPUs those prices would fall even faster.  Maybe, though doubtful; in any case, doesn’t the agency have bigger problems and more broken industries to mess with?