Last week, I participated in a program co-sponsored by the Progressive Policy Institute, the Lisbon Council, and the Georgetown Center for Business and Public Policy on "Growing the Transatlantic Digital Economy." The complete program, including keynote remarks from EU VP Neelie Kroes and U.S. Under Secretary of State Catherine A. Novelli, is available below.

My remarks reviewed worrying signs of old-style interventionist trade practices creeping into the digital economy in new guises, and urged traditional governments to stay the course (or correct it) on leaving the Internet ecosystem largely to its own organic forms of regulation and market correctives:

Vice President Kroes’s comments underscore an important reality about innovation and regulation. Innovation, thanks to exponential technological trends including Moore’s Law and Metcalfe’s Law, gets faster and more disruptive all the time, a phenomenon my co-author and I call “Big Bang Disruption.”

Regulation, on the other hand, moves at the same pace as ever (at best). Even the most well-intentioned regulators, and I certainly include Vice President Kroes in that list, find in retrospect that interventions aimed at heading off possible competitive problems and potential consumer harms rarely achieve their objectives and, indeed, often generate more harmful unintended consequences.

This is not a failure of government. The clock speeds of innovation and regulation are simply different, and diverging faster all the time. The Internet economy has been governed from its inception by the engineering-driven multistakeholder process embodied in the task forces and standards groups that operate under the umbrella of the Internet Society. Innovation, for better or for worse, is regulated more by Moore’s Law than traditional law.

I happen to think the answer is “for better,” but I am not one of those who take that to the extreme in arguing that there is no place for traditional governments in the digital economy. Governments have played and continue to play an essential part in laying the legal foundations for the remarkable growth of that economy and in providing incentives if not funding for basic research that might not otherwise find investors. And when genuine market failures appear, traditional regulators can and should step in to correct them as efficiently and narrowly as they can.

Sometimes this has happened. Sometimes it has not. Where in particular I think regulatory intervention is least effective and most dangerous is in regulating ahead of problems—in enacting what the FCC calls “prophylactic rules.” The effort to create legally sound Open Internet regulations in the U.S. has faltered repeatedly, yet in the interim investment in both infrastructure and applications continues at a rapid pace—far outstripping the rest of the world.

The results speak for themselves. U.S. companies dominate the digital economy, and, as Prof. Christopher Yoo has definitively demonstrated, U.S. consumers overall enjoy the best wired and mobile infrastructure in the world at competitive prices. At the same time, those who continue to pursue interventionist regulation in this area often have hidden agendas. Let me give three examples:

1. As we saw earlier this month at the Internet Governance Forum, which I attended along with Vice President Kroes and 2,500 other delegates, representatives of the developing world were told by self-styled consumer advocates from the U.S. and the EU that they must reject so-called “zero rated” services, in which mobile network operators partner with service providers including Facebook, Twitter and Wikimedia to offer their popular services to new Internet users without the usage counting against data charges.

Zero rating is an extremely popular tool for helping the two-thirds of the world’s population not currently on the Internet get connected and, likely, graduate from these services to many others. But such services violate the “principle” of neutrality that has mutated from an engineering concept into a nearly religious conviction. And so zero rating must be sacrificed, along with users who are too poor to otherwise join the digital economy.

2. Closer to home, we see the wildly successful Netflix service making a play to hijack the Open Internet debate and turn it into one about back-end interconnection, peering, and transit: engineering features that work so well that, according to the OECD, 99% of the interconnection agreements between networks aren’t even written down.

3. And in Europe, there are other efforts to turn the neutrality principle on its head, using it as a hammer not to regulate ISPs but to slow the progress of leading content and service providers, including Apple, Amazon and Google, which have what the French Digital Council and others call non-neutral “platform monopolies” that must be broken up.

To me, these are in fact new faces on very old strategies—colonialism, rent-seeking, and protectionist trade warfare respectively. My hope is that Internet users—an increasingly powerful and independent source of regulatory discipline in the Internet economy—will see these efforts for what they truly are…and reject them resoundingly.

The more we trust (but also verify) the engineers, the faster the Internet economy will grow, both in the U.S. and Europe, and the greater our trade in digital goods and services will strengthen the ties between our traditional economies. It’s worked brilliantly for almost two decades.

The alternatives, not so much.


This week, Forbes Senior Online Editor Kashmir Hill and I launched a new video series, “VC/DC,” where we review the latest developments at the accident-prone intersection of technology and policy. The first two episodes are embedded below.

We’ve envisioned the series as a regular look at the policy issues technology companies and investors should be paying attention to but probably aren’t.

Kashmir and I each bring a unique perspective to technology and policy. A former D.C. resident, Kashmir relocated to the Bay Area a few years ago to cover privacy, security, digital currency and other cutting edge topics.

I, on the other hand, am a Silicon Valley veteran who now spends nearly half my time in Washington at the Georgetown Center for Business and Public Policy, working to bridge the gap between disruptive innovations and the regulators who sometimes love them to death.

The program will cover a wide range of topics, and won’t be limited just to developments inside the beltway. As our inaugural episodes make clear, we’re also looking closely at how technology businesses are affected by local and international laws, as well as developments in the courts and the legal system overall.

I hope you like the series and find it interesting enough to subscribe.  We'd be grateful for your feedback in any case, as well as suggestions for future episodes.


Episode 1:  "The Accident-Prone Intersection of Innovation and Policy"


Episode 2:  "Security Standards and the Patent Crisis"

On June 24, 2014, Larry will testify before the Senate Judiciary Committee at a hearing on the proposed merger of AT&T and DIRECTV and the on-going Big Bang Disruption of the media ecosystem generally.  The hearing takes place at 2:30 PM in 226 Dirksen, and is open to the public.  His written testimony will be posted to the Committee's website.

In the upcoming issue of Harvard Business Review, my colleague Paul Nunes at Accenture's Institute for High Performance and I are publishing the first of many articles from an on-going research project on what we are calling "Big Bang Disruption."

The project is looking at the emerging ecosystem for innovation based on disruptive technologies, following up on work we have done separately and now together over the last fifteen years.

Our chief finding is that the nature of innovation has changed dramatically, calling into question much of the conventional wisdom on business strategy and competition in information-intensive industries--which is to say, these days, every industry.

The drivers of this new ecosystem are ever-cheaper, faster, and smaller computing devices, cloud-based virtualization, crowdsourced financing, collaborative development and marketing, and the proliferation of mobile everything (including, increasingly, not just people but things).

The result is that new innovations now enter the market cheaper, better, and more customizable than the products and services they challenge.  (For example, smartphone-based navigation apps versus standalone GPS devices.)  In the strategy literature, such innovation would be characterized as thoroughly "undisciplined."  It shouldn't succeed.  But it does.

So when the disruptor arrives and takes off with a bang, often after a series of low-cost, failed experiments, incumbents have no time for a competitive response.  The old rules for dealing with disruptive technologies, most famously from the work of Harvard's Clayton Christensen, have become counter-productive.   If incumbents haven't learned to read the new tea leaves ahead of time, it's game over.

The HBR article doesn't go into much depth on the policy implications of this new innovation model, but the book we are now writing will.  The answer should be obvious.

This radical new model for product and service introduction underscores the robustness of market behaviors that quickly and efficiently correct many transient examples of dominance, especially in high-tech markets.

As a general rule (though obviously not one without exceptions), the big bang phenomenon further weakens the case for regulatory intervention.  Market dominance is sustainable for ever-shorter periods of time, with little opportunity for incumbents to exploit it.

A predictable next wave of technology will likely put a quick and definitive end to any "information empires" that have formed from the last generation of technologies.

Or, at the very least, do so more quickly and more cost-effectively than alternative solutions from regulation.  The law, to paraphrase Mark Twain, will still be putting its shoes on while the big bang disruptor has spread halfway around the world.

Unfortunately, much of the contemporary literature on competition policy from legal academics is woefully ignorant of even the conventional wisdom on strategy, not to mention the engineering realities of disruptive technologies already in the market.  Looking at markets solely through the lens of legal theory is, truly, an academic exercise, one with increasingly limited real-world applications.

Indeed, we can think of many examples where legacy regulation actually makes it harder for the incumbents to adapt as quickly as necessary in order to survive the explosive arrival of a big bang disruptor.  But that is a story for another day.

Much more to come.

Related links:

"Why Best Buy is Going out of Business...Gradually"

"What Makes an Idea a Meme?"

"The Five Most Disruptive Technologies at CES 2013"


We've added over a dozen new posts to the Media page, covering some of the highlights in articles and press coverage for April and May, 2012.

Topics include privacy, security, copyright, net neutrality, spectrum policy, the continued fall of Best Buy and antitrust.

The new posts include links to Larry's inaugural writing for several publications, including Techdirt, Fierce Mobile IT, and Engine Advocacy.

There are also several new video clips, including Larry's interview of Andrew Keen, author of the provocative new book, "Digital Vertigo," which took place at the Privacy Identity and Innovation conference in Seattle.

June was just as busy as the rest of the year, and we hope to catch up with the links soon.


On CNET today, I have a longish post on the FCC's continued machinations over LightSquared's and Dish Network's respective efforts to use existing satellite spectrum to build terrestrial mobile broadband networks. Both companies plan to build 4G LTE networks; LightSquared has already spent $4 billion on build-out for its network, which it plans to offer wholesale.

After first granting and then, a year later, revoking LightSquared's waiver to repurpose its satellite spectrum, the agency has taken a more conservative (albeit slower) course with Dish. Yesterday, the agency initiated a Notice of Proposed Rulemaking that would, if adopted, assign flexible use rights to about 40 MHz of MSS spectrum licensed to Dish.

Current allocations of spectrum have little to do with the technical characteristics of different bands. That existing licenses limit Dish and LightSquared to satellite applications, for example, is simply an artifact of more-or-less random carve-outs in the absurdly complicated spectrum map managed by the agency since 1934. Advances in technology make it possible to use many different bands successfully for many different purposes.

But the legacy of the FCC's command-and-control model of allocating spectrum to favor "new" services (new, that is, until they are made obsolete years or decades later) and shaping competition to its changing whims is a confusing and unnecessary pile-up of limitations and conditions that severely and artificially restrict the ways spectrum can be redeployed as technology and consumer demands change. Today, the FCC sits squarely in the middle of over 50,000 licenses, a huge bottleneck that is making the imminent spectrum crisis in mobile broadband even worse.

Even with the best of intentions, the agency can't possibly continue to micromanage the map. And, as the LightSquared and Dish stories demonstrate yet again, the risks of agency capture and political pressure often mean the agency doesn't do the right thing even when it does act.

Who would be the more efficient and neutral regulator? According to Nobel Prize-winning economist Ronald Coase's seminal 1959 article, "The Federal Communications Commission," the answer is the market. In his trademark straightforward, common-sense style, Coase elegantly dismantles the idea that scarce spectrum resources demand a non-market solution of government management.

For one thing, Coase demonstrates how screwed up the system already was over fifty years ago. There's little doubt that the problems he describes have only gotten worse with time and increased demand on the airwaves by insatiable consumers.

Instead, Coase proposed to treat spectrum like any other industry input--as property. The FCC, he said, should auction spectrum rights to the highest bidder, without licenses, conditions, or limitations on use, and then stand back. (He acknowledged the risk of antitrust problems, but, as in any industry, such problems could be addressed by antitrust regulators and not the FCC.) Spectrum rights would efficiently change hands when new applications and devices created higher-value uses.

Potential interference problems--such as those raised by GPS device manufacturers in the case of LightSquared--would be resolved precisely as they are in other property contexts. Without an FCC to run to, the parties would be forced to negotiate against a backdrop of established liability rules and a safety net of potential litigation. Indeed, LightSquared and GPS offer a classic example of Coase's later work demonstrating that regardless of how property is initially allocated, liability rules ensure that parties will bargain to the most socially efficient solution to interference.
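The logic of Coase's bargaining argument can be illustrated with a toy calculation. The figures below are invented purely for illustration (the real values at stake in the LightSquared dispute are unknown and contested); the point is only that the same remedy emerges no matter who starts with the legal right.

```python
# Toy illustration of Coasean bargaining over interference.
# All numbers are hypothetical, in arbitrary units of value.
NETWORK_VALUE = 400   # assumed value of operating the terrestrial network
FILTER_COST = 50      # assumed cost of retrofitting GPS receivers with filters
SHUTDOWN_COST = 400   # value destroyed if the network never operates

def efficient_outcome():
    """The socially efficient remedy is whichever one wastes less value."""
    return "retrofit GPS" if FILTER_COST < SHUTDOWN_COST else "shut down network"

def bargain(right_holder):
    """With low transaction costs, whoever lacks the right pays for the
    cheaper remedy rather than suffer the dearer one."""
    if right_holder == "GPS":
        # LightSquared would pay up to NETWORK_VALUE to operate; since the
        # retrofit costs far less, it funds the filters and operates.
        return "retrofit GPS" if NETWORK_VALUE > FILTER_COST else "shut down network"
    else:
        # GPS makers would have to pay at least NETWORK_VALUE to idle the
        # network; the retrofit is cheaper, so they simply retrofit.
        return "retrofit GPS" if FILTER_COST < NETWORK_VALUE else "shut down network"

# Either initial allocation of the right reaches the same efficient remedy;
# only who writes the check changes.
assert bargain("GPS") == bargain("LightSquared") == efficient_outcome()
```

Under these assumed numbers, both allocations converge on retrofitting the receivers; shift the numbers so filters cost more than the network is worth, and both allocations instead converge on the network yielding.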

Of course we'll never know if the socially-optimal solution here is for LightSquared to protect GPS devices from receiving its signal or for device manufacturers to change their designs to stay out of LightSquared's bands. The heavy hand of the regulator has foreclosed a market solution, or even an attempt at negotiations.

Instead, we have the disaster of the FCC's decision in January 2011 to grant a conditional waiver to LightSquared and then, last month, to revoke it indefinitely. Meanwhile, LightSquared spent $4 billion on infrastructure it may never use, and lost its CEO and key customers, including Sprint. No one is happy, and no one can reasonably argue this was an optimal outcome, or even close to one.

For Dish, the NPRM will ensure a more orderly process, but at the cost of months, or perhaps longer, of delay before Dish can begin building its terrestrial network. And in the interim, all sorts of irrelevant issues may interfere with the orderly (and expensive) resolution.

When Coase proposed a property model for spectrum in 1959, the idea was considered too radical. Congress and the FCC have, slowly but surely, taken pieces of the proposal to heart, introducing auctions (but not property rights) in the 1990s. Yesterday's NPRM takes a small step toward more flexible use licenses, but this may be too little reform too late. We have all the evidence we need that micromanagement of spectrum can't possibly keep up with the pace of innovation. Time to try a new, fifty-year-old approach.