Category Archives: Antitrust

Big Bang Launch of "Big Bang Disruption"–and a Note on Regulatory Implications

In the upcoming issue of Harvard Business Review, my colleague Paul Nunes at Accenture's Institute for High Performance and I are publishing the first of many articles from an ongoing research project on what we are calling "Big Bang Disruption."

The project is looking at the emerging ecosystem for innovation based on disruptive technologies, following up on work we have done separately and now together over the last fifteen years.

Our chief finding is that the nature of innovation has changed dramatically, calling into question much of the conventional wisdom on business strategy and competition in information-intensive industries–which is to say, these days, every industry.

The drivers of this new ecosystem are ever-cheaper, faster, and smaller computing devices, cloud-based virtualization, crowdsourced financing, collaborative development and marketing, and the proliferation of mobile everything (including, increasingly, not just people but things).

The result is that new innovations now enter the market cheaper, better, and more customizable than the products and services they challenge.  (For example, smartphone-based navigation apps versus standalone GPS devices.)  In the strategy literature, such innovation would be characterized as thoroughly "undisciplined."  It shouldn't succeed.  But it does.

So when the disruptor arrives and takes off with a bang, often after a series of low-cost, failed experiments, incumbents have no time for a competitive response.  The old rules for dealing with disruptive technologies, most famously from the work of Harvard’s Clayton Christensen, have become counter-productive.   If incumbents haven’t learned to read the new tea leaves ahead of time, it’s game over.

The HBR article doesn’t go into much depth on the policy implications of this new innovation model, but the book we are now writing will.  The answer should be obvious.

This radical new model for product and service introduction underscores the robustness of market behaviors that quickly and efficiently correct many transient examples of dominance, especially in high-tech markets.

As a general rule (though obviously not one without exceptions), the big bang phenomenon further weakens the case for regulatory intervention.  Market dominance is sustainable for ever-shorter periods of time, with little opportunity for incumbents to exploit it.

A predictable next wave of technology will likely put a quick and definitive end to any "information empires" formed by the last generation of technologies.

Or, at the very least, do so more quickly and more cost-effectively than alternative solutions from regulation.  The law, to paraphrase Mark Twain, will still be putting on its shoes while the big bang disruptor spreads halfway around the world.

Unfortunately, much of the contemporary literature on competition policy from legal academics is woefully ignorant of even the conventional wisdom on strategy, not to mention the engineering realities of disruptive technologies already in the market.  Looking at markets solely through the lens of legal theory is, truly, an academic exercise, one with increasingly limited real-world applications.

Indeed, we can think of many examples where legacy regulation actually makes it harder for the incumbents to adapt as quickly as necessary in order to survive the explosive arrival of a big bang disruptor.  But that is a story for another day.

Much more to come.

Related links:

"Why Best Buy is Going out of Business…Gradually," Forbes.com.

"What Makes an Idea a Meme?", Forbes.com.

"The Five Most Disruptive Technologies at CES 2013," Forbes.com.


Updates to the Media Page

We’ve added over a dozen new posts to the Media page, covering some of the highlights in articles and press coverage for April and May, 2012.

Topics include privacy, security, copyright, net neutrality, spectrum policy, the continued fall of Best Buy, and antitrust.

The new posts include links to Larry’s inaugural writing for several publications, including Techdirt, Fierce Mobile IT, and Engine Advocacy.

There are also several new video clips, including Larry’s interview of Andrew Keen, author of the provocative new book, “Digital Vertigo,” which took place at the Privacy Identity and Innovation conference in Seattle.

June was just as busy as the rest of the year, and we hope to catch up with the links soon.

On LightSquared and Dish Network's Use of Spectrum: What Would Ronald Coase Do?


On CNET today, I have a longish post on the FCC's continued machinations over LightSquared's and Dish Network's respective efforts to use existing satellite spectrum to build terrestrial mobile broadband networks. Both companies plan to build 4G LTE networks; LightSquared has already spent $4 billion building out its network, which it plans to offer wholesale.

After first granting and then, a year later, revoking LightSquared's waiver to repurpose its satellite spectrum, the agency has taken a more conservative (albeit slower) course with Dish. Yesterday, the agency initiated a Notice of Proposed Rulemaking that would, if adopted, assign flexible use rights to about 40 MHz of MSS spectrum licensed to Dish.

Current allocations of spectrum have little to do with the technical characteristics of different bands. That existing licenses limit Dish and LightSquared to satellite applications, for example, is simply an artifact of more or less random carve-outs in the absurdly complicated spectrum map managed by the agency since 1934. Advances in technology make it possible to use many different bands successfully for many different purposes.

But the legacy of the FCC's command-and-control model, which allocates spectrum to favor "new" services (new, that is, until they are made obsolete years or decades later) and shapes competition to the agency's changing whims, is a confusing and unnecessary pile-up of limitations and conditions that severely and artificially restrict the ways in which spectrum can be redeployed as technology and consumer demands change. Today, the FCC sits squarely in the middle of each of over 50,000 licenses, a huge bottleneck that is making the imminent spectrum crisis in mobile broadband even worse.

Even with the best of intentions, the agency can't possibly continue to micromanage the map. And, as the LightSquared and Dish stories demonstrate yet again, the risks of agency capture and political pressure often mean the agency doesn't do the right thing even when it does act.

Who would be the more efficient and neutral regulator? According to Nobel Prize-winning economist Ronald Coase's seminal 1959 article, "The Federal Communications Commission," the answer is the market. In his trademark straightforward, common-sense style, Coase elegantly dismantles the idea that scarce spectrum resources demand a non-market solution of government management.

For one thing, Coase demonstrates how screwed up the system already was over fifty years ago. There’s little doubt that the problems he describes have only gotten worse with time and increased demand on the airwaves by insatiable consumers.

Instead, Coase proposed to treat spectrum like any other industry input–as property. The FCC, he said, should auction spectrum rights to the highest bidder, without licenses, conditions, or limitations on use, and then stand back. (He acknowledged the risk of antitrust problems, but, as in any industry, such problems could be addressed by antitrust regulators and not the FCC.) Spectrum rights would efficiently change hands when new applications and devices created higher-value uses.

Potential interference problems–such as those raised by GPS device manufacturers in the case of LightSquared–would be resolved precisely as they are in other property contexts. Without an FCC to run to, the parties would be forced to negotiate against a backdrop of established liability rules and a safety net of potential litigation. Indeed, LightSquared and GPS offer a classic example of Coase's later work demonstrating that regardless of how property is initially allocated, liability rules ensure that the parties will bargain to the most socially efficient solution to interference.
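
To see that bargaining logic in action, here is a toy sketch in Python (my illustration, with made-up costs; nothing here comes from the FCC record): whichever side starts out holding the right to the contested band, negotiation converges on the cheaper of the two fixes.

```python
# A toy sketch of the Coase bargaining point above. The costs are invented
# for illustration only.

def efficient_fix(cost_carrier_filters: float, cost_gps_redesign: float) -> str:
    """Return the interference fix the parties would bargain to,
    regardless of which side initially holds the spectrum right."""
    if cost_carrier_filters < cost_gps_redesign:
        return "carrier filters its own signal"
    return "GPS makers redesign their receivers"

# If GPS makers hold the right, the carrier either filters or pays them to
# redesign, whichever is cheaper. If the carrier holds the right, GPS makers
# either redesign or pay the carrier (up to their redesign cost) to filter.
# Either way, the bargain lands on the lower-cost fix:
print(efficient_fix(cost_carrier_filters=2.0, cost_gps_redesign=5.0))
# -> carrier filters its own signal
```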

Of course we'll never know whether the socially optimal solution here is for LightSquared to keep its signal out of GPS receivers or for device manufacturers to change their designs to stay out of LightSquared's bands. The heavy hand of the regulator has foreclosed a market solution, or even an attempt at negotiations.

Instead, we have the disaster of the FCC's decision in January 2011 to grant a conditional waiver to LightSquared and then, last month, to revoke it indefinitely. Meanwhile, LightSquared spent $4 billion on infrastructure it may never use, and lost its CEO and key customers, including Sprint. No one is happy, and no one can reasonably argue this was an optimal outcome, or even close to one.

For Dish, the NPRM will ensure a more orderly process, but at the cost of months or perhaps longer delay before Dish can begin building its terrestrial network. And in the interim, all sorts of irrelevant issues may interfere with the orderly (and expensive) resolution.

When Coase proposed a property model for spectrum in 1959, the idea was considered too radical. Congress and the FCC have, slowly but surely, taken pieces of the proposal to heart, introducing auctions (but not property rights) in the 1990s. Yesterday's NPRM takes a small step toward more flexible use licenses, but this may be too little reform, too late. We have all the evidence we need that micromanagement of spectrum can't possibly keep up with the pace of innovation. Time to try a new, fifty-year-old approach.

What Makes an Idea a Meme?

Ceci c'est un meme. ("This is a meme.")

On Forbes today, I look at the phenomenon of memes in the legal and economic context, using my now notorious “Best Buy” post as an example. Along the way, I talk antitrust, copyright, trademark, network effects, Robert Metcalfe and Ronald Coase.

It’s now been a month and a half since I wrote that electronics retailer Best Buy was going out of business…gradually.  The post, a preview of an article and future book that I’ve been researching on-and-off for the last year, continues to have a life of its own.

Commentary about the post has appeared in online and offline publications, including The Financial Times, The Wall Street Journal, The New York Times, TechCrunch, Slashdot, MetaFilter, Reddit, The Huffington Post, The Motley Fool, and CNN. Some of these articles generated hundreds of user comments, in addition to those that appeared here at Forbes.

(I was also interviewed by a variety of news sources, including TechCrunch’s Andrew Keen.)

Today, the original post hit another milestone, passing 2.9 million page views.

Watching the article move through the Internet, I’ve gotten a first-hand lesson in how network effects can generate real value.

Network effects are an economic principle that suggests certain goods and services experience increasing returns to scale.  That means the more users a particular product or service has, the more valuable the product becomes and the more rapidly its overall value increases.  A barrel of oil, like many commodity goods, does not experience network effects – only one person can own it at a time, and once it’s been burned, it’s gone.

In sharp contrast, networked goods increase in value as they are consumed.  Indeed, the more they are used, the faster the increase–generating a kind of momentum or gravitational pull.  As Robert Metcalfe, founder of 3Com and co-inventor of Ethernet, explained it, the value of a network can be plotted as the square of the number of connected users or devices—a curve that approaches infinity until most everything that can be connected already is.  George Gilder called that formula "Metcalfe's Law."
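
To make the contrast concrete, here is a minimal sketch in Python (my illustration; the constants are arbitrary) comparing a commodity's roughly linear value to the quadratic growth Metcalfe describes:

```python
# A minimal sketch of the contrast described above. The constants are
# arbitrary; the point is linear versus quadratic growth.

def commodity_value(units: int, price_per_unit: float = 1.0) -> float:
    """A rivalrous good: each unit is worth the same, to one owner at a time."""
    return units * price_per_unit

def metcalfe_value(users: int, k: float = 1.0) -> float:
    """Metcalfe's Law: network value scales with the square of connected users."""
    return k * users ** 2

for n in (10, 100, 1000):
    print(n, commodity_value(n), metcalfe_value(n))
# 10 10.0 100.0
# 100 100.0 10000.0
# 1000 1000.0 1000000.0
```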

Since information can be used simultaneously by everyone and never gets used up, nearly all information products can be the beneficiaries of network effects.  Standards are the obvious example.  TCP/IP, the basic protocol that governs interactions between computers connected to the Internet, started out humbly as an information exchange standard for government and research university users.  But in part because it was non-proprietary and therefore free for anyone to use without permission or licensing fees, it spread from public to private sector users, slowly at first but over time at accelerating rates.

Gradually, then suddenly, TCP/IP became, in effect, a least common denominator standard by which otherwise incompatible systems could share information.  As momentum grew, TCP/IP and related protocols overtook and replaced better-marketed and more robust standards, including IBM’s SNA and DEC’s DECnet.  These proprietary standards, artificially limited to the devices of a particular manufacturer, couldn’t spread as quickly or as smoothly as TCP/IP.

From computing applications, Internet standards spread even faster, taking over switched telephone networks (Voice over IP), television (over-the-top services such as YouTube and Hulu), radio (Pandora, Spotify)—you name it.

Today the TCP/IP family of protocols, still free-of-charge, is the de facto global standard for information exchange, the lynchpin of the Internet revolution.  The standards continue to improve, thanks to the largely-voluntary efforts of The Internet Society and its virtual engineering task forces.  They’re the best example I know of network effects in action, and they’ve created both a platform and a blueprint for other networked goods that make use of the standards.

Beyond standards, network effects are natural features of other information products including software.  Since the marginal cost of a copy is low (essentially free in the post-media days of Web-based distribution and cloud services), establishing market share can happen at relatively low cost.  Once a piece of software—Microsoft Windows, AOL instant messenger in the old days, Facebook and Twitter more recently—starts ramping up the curve, it gains considerable momentum, which may be all it takes to beat out a rival or displace an older leader.  At saturation, a software product becomes, in essence, the standard.

From a legal standpoint, unfortunately, market saturation begins to resemble an illegal monopoly, especially when viewed through the lens of industrial-age ideas about markets and competition.  (That, of course, is the lens that even 21st century regulators still use.)  But what legal academics, notably Columbia's Tim Wu, misunderstand about this phenomenon is that such products have a relatively short life-cycle of dominance.  These "information empires," as Wu calls them, are short-lived, but not, as Wu argues, because regulators cut them down.

Even without government intervention, information products are replaced at accelerating speeds by new disruptors relying on new (or greatly improved) technologies, themselves the beneficiaries of network effects.  The actual need for legal intervention is rare.  Panicked interference with the natural cycle, on the other hand, results in unintended consequences that damage emerging markets rather than correcting them.  Distracted by lingering antitrust battles at home and abroad, Microsoft lost momentum in the last decade.  No consumer benefited from that “remedy.”

For more, see “What Makes an Idea a Meme?” on Forbes.


For Incentive Auctions, the FCC Reaps What It Sowed

After three years of politicking, it now looks like Congress may actually give the FCC authority to conduct incentive auctions for mobile spectrum, and soon.  That, at least, is what the FCC seems to think.

At CES last week, FCC Chairman Julius Genachowski largely repeated the speech he has now given three years in a row.  But there was a subtle twist this time, one echoed by comments from Wireless Bureau Chief Rick Kaplan at a separate panel.

Instead of simply warning of a spectrum crunch and touting the benefits of the incentive auction idea, the Chairman took aim at a House Republican bill that would authorize the auctions but limit the agency’s “flexibility” in designing and conducting them. “My message on incentive auctions today is simple,” he said, “we need to get it done now, and we need to get it done right.”

By “done right,” Genachowski means without meaningful limits on how the agency constructs or oversees the auctions.  The Chairman’s attitude now seems to be if the FCC can’t have complete freedom, it would rather not have incentive auctions at all.  That’s a strange stance given the energy the FCC has expended making the case that such auctions are critical for mobile broadband users.

What's the fight about?  The House bill would prohibit the agency from introducing bidder qualifications based on external factors, such as current spectrum holdings.  The FCC could not, in other words, directly or indirectly exclude carriers who already have significant spectrum licenses.  The agency would also be limited in its ability to attach special conditions to new licenses issued as part of particular auctions.  An amendment by Rep. Marsha Blackburn (R-Tenn.) that was approved last month would specifically forbid special net neutrality conditions.

This may sound like an inside-the-beltway spat, but the stakes are in fact quite high, going right to the core of what role the FCC should play in 21st century communications.  For the Chairman, these limits rise to the level of an existential crisis, casting doubt on the agency’s very nature as an expert regulator.  Congress should, he argued, authorize the auctions and let the agency’s staff of legal, economic and technical experts decide how best to organize them.  Tying the FCC’s hands by statute, he said, is “a mistake”:

because it preempts an expert agency process that’s fact-based, data-driven and informed by a broad range of economists, technologists and stakeholders on an open record. The proposals on the table to restrict the FCC’s flexibility in its area of technical expertise would be a significant departure from precedent.

Spectrum- and auction-related issues pose hard questions.  I believe they should be answered based on the evidence, on an open record, as close as possible to the time when they need to be made.

House leaders see it very differently.  They see an agency that badly bungled the recent 700 MHz auctions—the last major auctions the FCC has conducted.  As a pre-condition to bidding, for example, Google demanded "open access" conditions, which the FCC belatedly agreed to add.  Instead of answering "hard" questions based on "facts" and "data" in an open record, the agency simply gave in to pressure from a late and well-connected bidder.

There was no expertise applied here.  And the result, as I've noted elsewhere, was that bids for the C block (where the open access conditions were applied) were discounted to the tune of billions of dollars that would otherwise have gone to the Treasury.

Verizon won the auction, but now faces uncertain application of the conditions, which differ materially from the open Internet rules the agency passed last year in the net neutrality rulemaking.  Meanwhile, the mobile marketplace is a very different place than it was when Google first stepped in, dominated by device  and operating system providers and proprietary app stores that didn’t even exist in 2008.

Larger bidders, meanwhile, wary of the vaguely-defined new conditions, shifted to the A and B blocks, pushing out smaller carriers.  Precisely the opposite result to what the agency intended in designing the auctions in the first place.

Politically-driven choices on how the D block should be licensed for public safety turned out even worse.  That auction could not find a bidder willing to live with the FCC’s conditions.  The spectrum sits unused, even as public safety still has no interoperable network more than a decade after 9/11.

If that's what an "expert" agency does with its "flexibility," then it's no wonder House leaders are skeptical.  "Flexibility" should mean maximizing revenues and ensuring that limited and critical spectrum assets are licensed to those who can put them to the best and highest use.  Not trying to stack the deck in favor of some bidders–and still getting it wrong.

Nothing has changed.  The agency still seems determined to use its auction authority to shape mobile broadband competition in its own sclerotic image.  It wants to create a competitive market among carriers even as competition is increasingly driven by other players in the mobile ecosystem.  It wants a return to the failed practice of unbundling to create an abundance of phantom competitors who have no assets and no understanding of communications, created by financial engineers who recognize a good regulatory arbitrage when they see one.

Not so, says the Chairman.  Our view of the market is deeply analytical, the result of thorough technical and economic analysis conducted by the bureaus.  His evidence?  The agency's annual competition reports.  Or so he told CEA President Gary Shapiro following his speech, when asked for proof that the agency understands the markets with which it tinkers.

But the competition reports are hardly models of lucid analysis.  They are constrained by the bureaus' crabbed view of the market, a view dictated by the statutory requirements that generate the reports.  They continue to emphasize obsolete proxies for measuring competition, including HHIs and the spectrum screen, even as actual data on market conditions is relegated to the back of the report.  For the last two years, the mobile competition report pointedly refused to say whether the agency thought the market was competitive or not.
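
For readers who don't live in merger filings, the HHI is just the sum of squared market shares. A minimal sketch, using hypothetical shares rather than the FCC's figures, shows how blunt the proxy is:

```python
# The HHI is the sum of squared market shares (shares in percent), scored
# from 0 to 10,000. The shares below are hypothetical, not the FCC's figures.

def hhi(shares_percent):
    """Herfindahl-Hirschman Index for a list of market shares in percent."""
    return sum(s ** 2 for s in shares_percent)

# A hypothetical four-carrier market, before and after the two smallest
# players merge:
before = [32.0, 30.0, 20.0, 18.0]
after = [32.0, 30.0, 38.0]
print(hhi(before), hhi(after))  # 2648.0 3368.0
# Anything above 2,500 counts as "highly concentrated" under the 2010
# DOJ/FTC Horizontal Merger Guidelines, which is why the index flags
# almost any horizontal wireless merger on its face.
```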

Yet the agency deliberately forfeited even the limited value of the competition reports by rejecting out-of-hand the AT&T/T-Mobile USA deal.  Rather than focusing on declining prices for voice, text, and data over the last ten years, or the regulatory constraints that make mergers necessary to expand coverage and service (both amply documented in the reports), the staff report on the T-Mobile deal largely swallowed opponents' simplistic mantra that taking out one "national" carrier was per se anti-competitive.  The report's principal objection seemed to be that any horizontal merger of two companies would result in one fewer competitor.  True, but irrelevant.

There was no sign of an expert regulator at work here; nothing to suggest an analysis that was "fact-based, data-driven and informed by a broad range of economists, technologists and stakeholders."  The analysis started with a conclusion and worked backwards.  And when even the old formulas didn't come out right, at least in the case of the spectrum screen, the books were simply cooked until they did.

Well, that’s all water under the bridge in 2012.  “This is an incredibly fast-moving space,” the Chairman said of the need for flexibility, “and any policy that pre-judges or predicts the future runs a great risk of unintended and unfortunate consequences.”

That's a good point.  But it's also a perfect description of last year's Net Neutrality rulemaking.  During a year of proceedings, the FCC turned up next to no evidence of an actual problem, let alone a market failure.  Still, the agency stuck doggedly to its first principles, insisting after the fact that "prophylactic" rules limiting network management technologies of the future were essential to maintaining a "level playing field."  Never mind that the playing field showed no signs of imbalance, or that it continued to evolve dramatically (iPhone, iPad, Android and Verizon's LTE introduction, for starters) as deliberations dragged on in a regulatory vacuum.

One “unintended and unfortunate consequence” of that and similar missteps has already become clear—Congress doesn’t trust the Chairman to follow the law.

Which is, I suspect, the main reason incentive auction authority hasn’t yet passed, even though nearly everyone agrees it’s the best short-term solution to a spectrum crisis of the government’s own making.  And why, when it does come, there are likely to be plenty of strings attached.

Which is too bad.  Because, if the FCC really acted as the expert agency it is chartered to be, Genachowski would be right about the value of flexibility.

Updates to the Media Page

We’ve recently added another dozen posts to the Media page. These include several articles and interviews related to Larry’s efforts to help stop dangerous copyright legislation pending in Congress, known as SOPA and Protect IP. Larry also provides detailed analysis of a more sensible alternative proposal from Sen. Ron Wyden and Rep. Darrell Issa, the OPEN Act.

Larry appeared last month on the PBS Newshour to debate domain name seizures and the pending bills with a representative of the U.S. Chamber of Commerce. Video and a transcript are on the Media page.

Larry has also been busy with FCC stories, including the clumsy release of the staff report on the failed AT&T/T-Mobile merger and the subsequent revelations about manipulation of the agency’s important “spectrum screen,” a key metric in merger reviews that was grossly mishandled. (Congress is looking into how the agency fumbled so badly.) With the collapse of the deal, watch for louder cries about the spectrum crisis and the lack of any solution to it.

A New Year's Day post at Forbes on the dimming prospects of consumer electronics retailer Best Buy generated tremendous response, including over 2.3 million page views on the site and thousands of comments there and elsewhere, most by customers with horror stories to tell. Follow-up and fall-out to come.