Category Archives: Information Economics

Apple Abandons its Principles…Not! (Necessarily)

Following reports by Randall Stross in The New York Times and elsewhere that Apple had filed a patent application for technology that forces users of mobile or other devices to watch ads, the blogosphere lit up with lamentations. One blogger quoted by The Independent on Monday, to pick a representative example, called it “the most invasive, demeaning, anti-utopian and downright horrible piece of cross-platform software technology that anybody’s ever thought of.”

Sigh. Slow down, folks.

As Randy Stross correctly pointed out in his article, applying for a patent does not indicate an intention to use the technology in question. I also attach very little significance to the fact that Steve Jobs himself is named as one of the inventors on the application.

Here’s the reality. Companies file patent, trademark, and copyright applications for a variety of reasons. These days, perhaps the most common reason to file a patent application is defensive: to ensure that another patent holder can’t sue you for infringement, a risk made worse by the absurdly broad patents the Patent Office has been routinely granting over the last decade. (See my earlier post on the Bilski case.) If you are sued by a competitor, dormant patents can be useful bargaining chips in cross-licensing, pooling, or other industry arrangements.

More to the point, the connection between patent filings and corporate strategy, especially at large technology companies, is generally nonexistent. For better and often for worse, company lawyers and the rest of the executive team have historically spoken to each other only when something goes wrong.

Recently there have been movements to treat patents and other information assets using the same asset management tools applied to physical plant. That’s a good first step, but hardly the end of the road here. The full integration of legal and business strategy, including for patents, isn’t even a dream most executives dare to dream.

Apple’s patent filing almost certainly signals nothing about the company’s future intentions one way or the other.

There are plenty of things for bloggers to get agitated about. This isn’t one of them.

An Unpopular View of Google Books

I’m starting to feel like the only person who thinks the Google Books settlement with authors and publishers is a good deal. One voice that goes unheard, however, over the din of Google competitors, panicky law professors, and regulators who wouldn’t know a workable solution to a copyright problem (a problem regulators created) if it bit them: anyone speaking for consumers.

My opinion piece today on CNET argues that the real problem with the settlement has nothing to do with the 165-page document, which is increasingly coming to look like the sausage-making that it is.  (Does anyone really expect authors or publishers or anyone other than lawyers to read this and make any sense of it?)  The problem is the insanity of “modern” copyright law, which grants endless rights to all content creators, rights only the richest media companies can enforce.

For everyone else, once the modest commercial life of a work has ended, the rights are abandoned but not eliminated, leaving a no-man’s land of millions of stranded or “orphaned” works. The Google Books settlement, at least for digital users, would elegantly solve the orphan works problem. But the Copyright Office and the Department of Justice, among other creators of this mess, don’t like having their authority stepped on or their difficulties made to look easy.

As I write in Law Seven of “The Laws of Disruption”, a few basic reforms would bring copyright not only into balance but also into the reality of the 21st century. Until that happens, Google has done a good deed, which so far has not gone unpunished.

The Bilski Case: Not With My Digital Economy, You Don't

My view on today’s Supreme Court case regarding business method and software patents appears in The Big Money.

This case, which concerns the patentability of a paper-and-pencil system for hedging weather risks in consumer energy prices, drew over sixty friend-of-the-court briefs, more than any other case this term.

The reason has little to do with the claimed method, which almost no one (except the inventors) seems to think deserves the patent it was denied.

The real issue here is the deeply troubled intersection of information age inventions and the badly broken patent system. Nearly all of the briefs are concerned that a ruling from the Court of Appeals for the Federal Circuit, if left standing by the Supreme Court, will eliminate patent protection for some if not all inventions implemented in software.

Software patents have only been granted in the U.S. since the early 1980s, after an earlier Supreme Court decision approved a process that included software in the operation of injection molds. (European patent law has looked much more skeptically on the practice.) Since then, “pure” software patents and, since 1998, “business method” patent applications have swamped the U.S. Patent Office, which has taken to granting more patents and letting interested parties sort out the good from the bad through the expensive corrective of litigation.

Litigation is a terrible way to determine whether a claimed invention ought to be granted a government-enforced monopoly. As I write in Law Eight (“Virtual Machines Need Virtual Lubrication”) of The Laws of Disruption, even when patent grantees lose in court, they often win in the market. Amazon, for example, successfully asserted its “one-click” checkout patent against Barnes & Noble in 1999, a crucial moment in the introduction of on-line bookstores. In 2001, an appellate court ruled that Amazon’s injunction was wrongly issued. Too late.

To quote from the book:

“Other business-method patents of dubious quality have likewise been used to gain a strategic advantage, perhaps unfairly. Playing the slow pace of litigation off the accelerating speed of digital life and its rapid evolution, patents can be more valuable as legal weapons than as protection for real innovation. Interim rulings, for example, supported TiVo’s claim against other DVR manufacturers to technology that allows viewers to pause, fast-forward, or rewind television programs; Netflix’s claim to the idea of online home video rentals against Blockbuster; and patents asserted by IBM against Amazon for core features of the concept of electronic commerce. Each win, even those later overturned, provided the patent holder with a valuable, sometimes priceless, bargaining chip: time.”

I’m with the open source people here, including Red Hat, who are urging the Supreme Court to use the Bilski case to end the reign of terror of software patents.

If the inventions of digital life really need the kind of incentives the patent system grants, Congress should create a special form of protection more in keeping with their shorter useful lives and lower investment costs relative to, for example, new drugs. (Amazon’s Jeff Bezos, for one, thinks software patents should last 3-5 years, not the standard 20.)

In the meantime, we’d be better off with no protection at all.

FTC to Bloggers: Drop that Sample!

The Federal Trade Commission has announced plans to regulate the behavior of bloggers.  Unfortunately, not their terrible grammar, short attention spans or inexplicably short fuses.

Instead, the FTC announced updates to its 1980 policy regarding endorsements and testimonials, first developed to rein in the use of celebrity endorsers with no real connection to, or experience with, the products they claimed to use and adore.

The proposed changes require bloggers who recommend products or services to disclose when they have a “material connection” to the provider—that is, that they were paid to write positive reviews or given freebies to encourage them to do so.  (The FTC, of course, is limited to activities in the U.S.)

You might think bloggers would be flattered to be put in the same category as celebrities, but no.  The response has been universal outrage, as noted by Santa Clara University Law Professor Eric Goldman in his detailed analysis of the proposed changes. (The complete FTC report is available here, but it is 81 pages of mostly mush.)

The principal objection is that the changes, which take effect December 1st, continue to exempt journalists in traditional media but not those in what the agency quaintly refers to as “new media”—that is, those whose content appears online, whether in blogs, social networking, email, or other electronic communications.  The theory, apparently, is that while professional journalists can be trusted to speak truthfully about products even when they are provided sample or review copies, bloggers cannot.

L. Gordon Crovitz’s column in today’s Wall Street Journal nicely dismantles the faulty reasoning in the Commission’s analysis.  Moreover, Eric Goldman’s post (cited above) argues persuasively that the one example the FTC gives of a violation of the policy as applied to bloggers is directly at odds with Section 230 of the Communications Act, which provides broad immunity to third parties for content posted by someone else through any Internet service.  So it may be that the proposed change is pre-empted by the broad and sensible provisions of Section 230, which creates a wide breathing space for interactive communications to develop. (The FTC makes no mention of Section 230 in its report.)

To me, in any event, this is a classic problem of the poor fit between traditional legal systems and rapidly-evolving new information technologies.  Legal change, as I write in The Laws of Disruption, relies heavily on the process of “reasoning by analogy.”  When confronted with new situations, lawmakers, regulators and judges will look for analogous situations elsewhere in the law and apply the rules that most closely match the new circumstances.

In times of radical transformation at the hands of disruptive technologies, however, reasoning by analogy is a terrible way to develop a  body of law for new activities. Bloggers are not like journalists and they are not like celebrity endorsers.  They are like bloggers—a new form of communication, still very much in its early stages of development, that uses new technology to engage in a new kind of conversation.

No old rule, extended and mangled until it is unrecognizable, is likely to fit the new situation.  And rather than try to guess at a new rule, regulators should fight their natural tendencies and just wait.  For now, the Web has been developing a variety of self-correcting mechanisms and reputational metrics that may do an effective and efficient job of policing abuses of the trust between bloggers and their readers.  Sorry folks, but we may not need the FTC and its cumbersome enforcement mechanisms to save the day this time.

What’s more, the risk of applying ill-considered old-world regulations to new situations is that regulations (even if lightly or not at all enforced) will retard, skew, or otherwise chill the development of new ways of interacting at the heart of digital life.

That doesn’t seem to worry the FTC.   “[C]ommenters who expressed concerns about the future of these new media if the Guides were applied to them,” they say, “did not submit any evidence supporting their concerns.”

Let’s turn that objection around to the right direction.

The FTC did not submit any evidence of a problem that needs to be solved, or of their ability to solve it.

The Nobel Prize in Disruption

Though most of the coverage of this year’s Nobel Prize in Economics focused on the work of Elinor Ostrom, I’m more interested in the award to Oliver Williamson, Prof. Emeritus at the Haas School of Business at UC-Berkeley.

Williamson is a leading scholar in the field of “Institutional Economics,” which studies the relative economic behaviors of organizations in the market. The field traces its origins to the pioneering work of a previous Nobel winner, Ronald Coase, who first observed that the prevalence of large, complex corporations pointed to inefficiencies in the market, inefficiencies that firms, as an alternative to market transactions, could overcome or at least reduce. Rather than negotiating with every person involved in the production of every car, for example, GM could internalize labor, production, sourcing and so on and achieve economies of scale.

I’ve written about Coase and the importance of his work in all of my books. I’ve had the distinct pleasure of knowing him since my days as a law student at the University of Chicago, where Coase was on the faculty of the law school. My view, still controversial in some quarters, is that information technology has been reducing transaction costs in the market faster than it does in organizations, resulting in a shift in the balance of power between the two.
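The intuition behind that shift can be put in make-or-buy terms. Here is a toy sketch, with numbers invented purely for illustration, of how the same decision tips from “make” (inside the firm) toward “buy” (from the market) when technology cuts market transaction costs faster than internal coordination costs.

```python
# Toy make-or-buy comparison illustrating the Coasean logic described above.
# All figures are invented for illustration only.

def cheaper_to_buy(unit_price, transaction_cost, internal_cost, coordination_cost):
    """Return True if sourcing from the market beats producing in-house."""
    market_total = unit_price + transaction_cost        # search, negotiation, enforcement
    internal_total = internal_cost + coordination_cost  # management and overhead
    return market_total < internal_total

# High market transaction costs favor the integrated firm ("make"):
print(cheaper_to_buy(unit_price=100, transaction_cost=40,
                     internal_cost=110, coordination_cost=20))   # False

# Technology cuts market transaction costs faster than internal ones ("buy"):
print(cheaper_to_buy(unit_price=100, transaction_cost=10,
                     internal_cost=110, coordination_cost=18))   # True
```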

In his seminal book, “Markets and Hierarchies,” Williamson enumerated different kinds of transaction costs and their impact on economic behavior. In particular, he described how differences in the information available to the parties to a transaction (a buyer and a seller, for example) affect the structure and conduct of the transaction in important ways.

Coase, Williamson and others grew frustrated by the unwillingness of their economist colleagues to study the nature and causes of transaction costs, and in the mid-1990s organized their own field under the banner of the International Society for New Institutional Economics. Both Williamson and Ostrom are members of ISNIE.

Congratulations to both Prof. Ostrom and Williamson. Perhaps the award of the Nobel Prize will encourage more study of the nature and causes of transaction costs, and the ways in which disruptive technologies affect both.

The Incredible Information Valuation!

When is a comic book worth $3,600,000,000 more than its face value? Answer: When Walt Disney wants to buy it.

Today, Disney announced it had agreed to acquire Marvel Entertainment, Inc. for $4 billion. Marvel, which only a few years ago was mired in bankruptcy, owns the rights to a range of intellectual property including Spider-Man, The Fantastic Four, Iron Man, the X-Men and the Hulk. It is surely a valuable company. But $4 billion? According to its 2008 financial statements, Marvel ended the year with only $400 million in assets.

Why would Disney pay ten times that amount?

The answer is that financial accounting standards fail to assign any value to intangible information assets like copyrights and trademarks, which in this case represent the vast majority of the company’s true worth. You will look in vain on Marvel’s balance sheet to find any value assigned to the ongoing worth of the more than 5,000 trademarked characters the company holds in its vaults, many of which have in recent years made largely successful leaps into motion pictures. As Marvel CEO Ike Perlmutter said today, Marvel is “the most profitable print publishing business in the world.”

Not exactly. The profits appear to be assigned to the print business, but only because the accounting industry steadfastly refuses to figure out how to price the information assets and assign value to them.

Why not? The answer, apparently, is that doing so is too hard. To add insult to information injury, once the acquisition is complete Disney will be able to report the $3.6 billion excess as an asset, “excess paid for Marvel,” which is to say, all the IP. And even though that property is appreciating and will continue to appreciate in value, for tax purposes Disney will be allowed to depreciate it!
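For concreteness, here is the back-of-the-envelope arithmetic behind those figures. The deal value and book assets are the round numbers quoted above; the 15-year write-off horizon is purely an illustrative assumption about how such an excess might be amortized, not a description of Disney’s actual tax treatment.

```python
# Back-of-the-envelope sketch of the acquisition math described above.
# The deal value and book assets are the round numbers from the post;
# the amortization period is an illustrative assumption only.

purchase_price = 4_000_000_000   # reported price for Marvel
book_assets = 400_000_000        # Marvel's 2008 year-end balance-sheet assets

excess = purchase_price - book_assets   # the "excess paid for Marvel"
multiple = purchase_price / book_assets

print(f"Excess over book assets: ${excess:,.0f}")              # $3,600,000,000
print(f"Price as a multiple of book assets: {multiple:.0f}x")  # 10x

# Hypothetical straight-line write-off of that excess over 15 years:
years = 15
print(f"Hypothetical annual write-off: ${excess / years:,.0f}")  # $240,000,000
```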

But plenty of physical assets are hard to value, too, and that hasn’t stopped the industry from coming up with formulas for doing so. The real reason seems to be that accountants just don’t want to – or, more to the point, don’t want to bring their profession into the 21st century. As the economy continues to move toward one in which most value is derived from information assets, that failing is making financial statements increasingly useless to investors as a source of information.

We know that information assets can be valued. Someone at Disney figured out a way, for one thing.