Category Archives: Digital Life

What Google Fiber, Gig.U, and US Ignite Teach Us About the Painful Cost of Legacy Regulation

On Forbes today, I have a long article on the progress being made to build gigabit Internet testbeds in the U.S., particularly by Gig.U.

Gig.U is a consortium of research universities and their surrounding communities, created a year ago by Blair Levin, an Aspen Institute Fellow and, most recently, the principal architect of the FCC’s National Broadband Plan.  Its goal is to work with private companies to build ultra high-speed broadband networks with sustainable business models.

Gig.U, along with Google Fiber’s Kansas City project and the White House’s recently announced US Ignite project, springs from similar origins and shares similar goals.  The general belief is that by building ultra high-speed broadband in selected communities, consumers, developers, network operators, and investors will get a clear sense of the true value of Internet speeds that are 100 times as fast as those available today through high-speed cable-based networks, and will then go build a lot more of them.

Google Fiber, for example, announced last week that it would be offering fully symmetrical 1 Gbps connections in Kansas City, perhaps as soon as next year.  (By comparison, my home broadband service from Xfinity is 10 Mbps download and considerably slower going up.)
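To make the comparison concrete, here is the simple arithmetic behind the “100 times as fast” framing (my calculation, using the speeds quoted in this post):

$$ \frac{1\ \text{Gbps}}{10\ \text{Mbps}} = \frac{1{,}000\ \text{Mbps}}{10\ \text{Mbps}} = 100\times $$

A gigabit connection, in other words, is two full orders of magnitude faster than my current downstream service, and the gap on the upstream side is wider still.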

US Ignite is encouraging public-private partnerships to build demonstration applications that could take advantage of next generation networks and near-universal adoption.  It is also looking at the most obvious regulatory impediments at the federal level that make fiber deployments unnecessarily complicated, painfully slow, and unduly expensive.

I think these projects are encouraging signs of native entrepreneurship focused on solving a worrisome problem:  the U.S. is nearing a dangerous stalemate in its communications infrastructure.  We have the technology and scale necessary to replace much of our legacy wireline phone networks with native IP broadband.  Right now, ultra high-speed broadband is technically possible by running fiber to the home.  Indeed, Verizon’s FiOS network currently delivers 300 Mbps broadband and is available to some 15 million homes.

But the kinds of visionary applications in smart grid, classroom-free education, advanced telemedicine, high-definition video, mobile backhaul and true teleworking that would make full use of a fiber network don’t really exist yet.  Consumers (and many businesses) aren’t demanding these speeds, and Wall Street isn’t especially interested in building ahead of demand.  There’s already plenty of dark fiber deployed, the legacy of earlier speculation that so far hasn’t paid off.

So the hope is that by deploying fiber to showcase communities and encouraging the development of demonstration applications, entrepreneurs and investors will get inspired to build next generation networks.

Let’s hope they’re right.

What interests me personally about the projects, however, is what they expose about regulatory disincentives that unnecessarily, and perhaps fatally, retard private investment in next-generation infrastructure.  In the Forbes piece, I note almost a dozen examples from the Google Fiber development agreement where Kansas City voluntarily waived permits, fees, and plodding processes that would otherwise delay the project.  In several key areas, moreover, the city actually commits to cooperating and collaborating with Google Fiber to expedite and promote the project.

As Levin notes, Kansas City isn’t offering any funding or general tax breaks to Google Fiber.  But the regulatory concessions, which implicitly acknowledge the heavy burden imposed on those who want to deploy new privately funded infrastructure (many of those burdens the legacy of the early days of cable TV deployments), may still be enough to “change the math,” as Levin puts it, making otherwise unprofitable investments justifiable after all.

Just removing some of the regulatory debris, in other words, might itself be enough to break the stalemate that makes building next generation IP networks unprofitable today.

The regulatory cost puts a heavy thumb on the scale, discouraging investment.  Indeed, as fellow Forbes contributor Elise Ackerman pointed out last week, Google has explicitly said that part of what made Kansas City attractive was the lack of excessive infrastructure regulation, and the willingness and ability of the city to waive or otherwise expedite the requirements that were on the books.  (Despite the city’s promises to bend over backwards for the project, she notes, there have still been expensive regulatory delays that promoted no public values.)

Particularly painful to me was testimony by Google Vice President Milo Medin, who explained why none of the California-based proposals ever had a real chance.  “Many fine California city proposals for the Google Fiber project were ultimately passed over,” he told Congress, “in part because of the regulatory complexity here brought about by [the California Environmental Quality Act] and other rules. Other states have equivalent processes in place to protect the environment without causing such harm to business processes, and therefore create incentives for new services to be deployed there instead.”

Ouch.

This is a crucial insight.  Our next-generation communications infrastructure will surely come, when it does come, from private investment.  The National Broadband Plan estimated it would take $350 billion to get 100 Mbps Internet to 100 million Americans through a combination of fiber, cable, satellite and high-speed mobile networks.  Mindful of reality, however, the plan didn’t even bother to consider the possibility of full or even significant taxpayer funding to reach that goal.
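A quick back-of-the-envelope division (my arithmetic, not a figure from the plan itself) shows the scale of the challenge:

$$ \frac{\$350\ \text{billion}}{100\ \text{million people}} = \$3{,}500\ \text{per person reached} $$

At that price, it’s easy to see why the plan’s authors never seriously entertained full taxpayer funding.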

Unlike South Korea, we aren’t geographically small, with a largely urban population living in just a few cities.  We don’t have a largely nationalized and taxpayer-subsidized communications infrastructure.  On a per-person basis, deploying broadband in the U.S. is much harder, more complicated, and more expensive than it is in many competing nations in the global economy.

Of course, nationwide fiber and mobile deployments by network operators including Verizon and AT&T can’t rely on gimmicks like Google Fiber’s hugely successful competition, where 1,100 communities applied to become a test site.  Nor can they, like Gig.U, cherry-pick research university towns, which have the most attractive demographics and density to start with.  Nor can they simply call themselves start-ups and negotiate the kind of freedom from regulation that Google and Gig.U’s membership can.

Large-scale network operators need to build, if not everywhere, then to an awful lot of somewheres.  That’s a political reality of their size and operating model, as well as of the multi-layer regulatory environment in which they must operate.  And it’s a necessity of meeting the ambitious goal of near-universal high-speed broadband access, and of many of the applications that would use it.

Under the current regulatory and economic climate, large-scale fiber deployment has all but stopped for now.  Given the long lead-time for new construction, we need to find ways to restart it.

So everyone who agrees that gigabit Internet is a critical element in U.S. competitiveness in the next decade or so ought to look closely at the lessons, intended or otherwise, of the various testbed projects.  They are exposing in stark detail a dangerous and useless legacy of multi-level regulation that makes essential private infrastructure investment economically impossible.

Don’t get me wrong.  The demonstration projects and testbeds are great.  Google Fiber, Gig.U, and US Ignite are all valuable efforts.  But if we want to overcome our “strategic bandwidth deficit,” we’ll need something more fundamental than high-profile projects and demonstration applications.  To start with, we’ll need a serious housecleaning of legacy regulation at the federal, state, and local level.

Regulatory reform might not be as sexy as gigabit Internet demonstrations, but the latter ultimately won’t make much difference without the former.  Time to break out the heavy demolition equipment—for both.

Updates to the Media Page

We’ve added over a dozen new posts to the Media page, covering some of the highlights in articles and press coverage for April and May 2012.

Topics include privacy, security, copyright, net neutrality, spectrum policy, the continued fall of Best Buy, and antitrust.

The new posts include links to Larry’s inaugural writing for several publications, including Techdirt, Fierce Mobile IT, and Engine Advocacy.

There are also several new video clips, including Larry’s interview of Andrew Keen, author of the provocative new book, “Digital Vertigo,” which took place at the Privacy Identity and Innovation conference in Seattle.

June was just as busy as the rest of the year, and we hope to catch up with the links soon.

Everyone Out of the Internet!

I remember a bumper sticker from the 1970s that summed up the prevailing anti-colonial attitude of the late 1960s:  “U.S. Out of North America.”

That sentiment nicely captures my activities this week, which include three articles decrying efforts by regulators to oversee key aspects of the Internet economy.  Of course their intentions, at least publicly, are always good.  But even when the idea is right, the unintended negative consequences always overwhelm the benefits by a wide margin.

Governments are simply too slow to respond to the pace of innovation in information technology.  Nothing will fix that.  So better just to leave well enough alone and intercede only when genuine consumer harm is occurring.  And provable.

The articles cover the spectrum of state (California), federal (FCC), and international (ITU) regulators, and a wide range of truly bad ideas: from the desire of California’s Public Utilities Commission to “protect” consumers of VoIP services, to the FCC’s latest effort to elbow its way into regulating broadband Internet access at the middle mile, to a proposal from European telcos to have the U.N. implement a tariff system on Internet traffic originating in the U.S.

Here they are:

  1. “Government Control of the Net is Always a Bad Idea” (CNET) – http://news.cnet.com/8301-13578_3-57446383-38/government-control-of-net-is-always-a-bad-idea/?tag=mncol;cnetRiver
  2. “The FCC Noses Under the Broadband Internet Tent” (Forbes) – http://www.forbes.com/sites/larrydownes/2012/06/06/the-fcc-noses-under-the-broadband-internet-tent/
  3. “U.N. Could Tax U.S.-based Websites, Leaked Docs Show” (CNET) – http://news.cnet.com/8301-1009_3-57449375-83/u.n-could-tax-u.s.-based-web-sites-leaked-docs-show/?tag=mncol;topStories

That third one, by the way, was written with CNET’s Chief Political Correspondent Declan McCullagh.  It represents a genuine scoop, based on leaked documents posted by my Tech Liberation Front colleagues Jerry Brito and Eli Dourado on WCITLeaks.org!

Updates to the Media Page

2012 is off to a fast start, and we’re trying hard just to keep up. We’ve already added over thirty posts to the Media Page, including articles, radio and television interviews, and quotes in a wide range of online and offline publications. There are several video and audio clips for your enjoyment.

The year began with two big stories: the successful fight to halt quick passage of SOPA and PIPA, two bills that would have added dangerous new legal remedies for government and private parties to tinker with the underlying engineering of the Internet in a foolhardy effort to curb unlicensed copying by consumers. Larry was front and center, making several trips to Washington to urge Members of Congress to reconsider the bills, and reporting as well from the annual Consumer Electronics Show in Las Vegas, where the tide turned definitively against the bills.  Larry’s work, including a controversial article for Forbes on “Who Really Stopped SOPA, and Why,” was cited in publications as varied as The National Review, Aljazeera, The National Post, TechCrunch, Techdirt, and AdWeek, and in a radio interview on WHYY’s “Radio Times.”

The second big story was Larry’s barn-burning article for Forbes on the failure of electronics retailer Best Buy to adapt to changing market and technology dynamics.  The original article now has nearly 3,000,000 pageviews, and it set off a firestorm of response, both positive and negative. The article spawned several follow-up pieces on Forbes, as well as extensive coverage nearly everywhere else, including The Financial Times, The Wall Street Journal, The New York Times, TechCrunch, Slashdot, MetaFilter, Reddit, The Huffington Post, The Motley Fool, MSN Money, and CNN. Some of these articles generated thousands of user comments, in addition to the more than a thousand that appeared on Forbes.

With the SOPA and PIPA fights temporarily on hold, Larry pivoted back to other important innovation and policy matters, including reform of the FCC’s troubled Lifeline program, Internet privacy, and a fight over spectrum auctions vital to the future of mobile broadband.  Look for articles in CNET and Forbes, as well as interviews in U.S. News, This Week in Law, The Los Angeles Times, The Hill, WebProNews, and the Heartland Institute.

Stay tuned!  It’s going to be an exciting year.

What Makes an Idea a Meme?

Ceci est un meme. (This is a meme.)

On Forbes today, I look at the phenomenon of memes in the legal and economic context, using my now notorious “Best Buy” post as an example. Along the way, I talk antitrust, copyright, trademark, network effects, Robert Metcalfe and Ronald Coase.

It’s now been a month and a half since I wrote that electronics retailer Best Buy was going out of business…gradually.  The post, a preview of an article and future book that I’ve been researching on and off for the last year, continues to have a life of its own.

Commentary about the post has appeared in online and offline publications, including The Financial Times, The Wall Street Journal, The New York Times, TechCrunch, Slashdot, MetaFilter, Reddit, The Huffington Post, The Motley Fool, and CNN. Some of these articles generated hundreds of user comments, in addition to those that appeared here at Forbes.

(I was also interviewed by a variety of news sources, including TechCrunch’s Andrew Keen.)

Today, the original post hit another milestone, passing 2.9 million page views.

Watching the article move through the Internet, I’ve gotten a first-hand lesson in how network effects can generate real value.

“Network effects” is the economic principle that certain goods and services experience increasing returns to scale.  That means the more users a particular product or service has, the more valuable the product becomes and the more rapidly its overall value increases.  A barrel of oil, like many commodity goods, experiences no network effects: only one person can own it at a time, and once it’s been burned, it’s gone.

In sharp contrast, networked goods increase in value as they are consumed.  Indeed, the more they are used, the faster their value grows, generating a kind of momentum or gravitational pull.  As Robert Metcalfe, founder of 3Com and co-inventor of Ethernet, explained it, the value of a network grows as the square of the number of connected users or devices, a curve that climbs ever more steeply until most everything that can be connected already is.  George Gilder called that formula “Metcalfe’s Law.”
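A quick way to see why the curve bends upward is to count possible connections; this is my gloss on the idea, not Metcalfe’s original notation.  A network of $n$ users contains

$$ \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^2}{2} $$

possible pairwise links, so value modeled as proportional to connections grows roughly as the square of $n$.  Doubling a network from 10 to 20 users, for example, raises the number of possible links from 45 to 190, more than a fourfold jump.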

Since information can be used simultaneously by everyone and never gets used up, nearly all information products can be the beneficiaries of network effects.  Standards are the obvious example.  TCP/IP, the basic protocol that governs interactions between computers connected to the Internet, started out humbly as an information exchange standard for government and research university users.  But in part because it was non-proprietary and therefore free for anyone to use without permission or licensing fees, it spread from public to private sector users, slowly at first but over time at accelerating rates.

Gradually, then suddenly, TCP/IP became, in effect, a least common denominator standard by which otherwise incompatible systems could share information.  As momentum grew, TCP/IP and related protocols overtook and replaced better-marketed and more robust standards, including IBM’s SNA and DEC’s DECnet.  These proprietary standards, artificially limited to the devices of a particular manufacturer, couldn’t spread as quickly or as smoothly as TCP/IP.

From computing applications, Internet standards spread even faster, taking over switched telephone networks (Voice over IP), television (over-the-top services such as YouTube and Hulu), radio (Pandora, Spotify)—you name it.

Today the TCP/IP family of protocols, still free-of-charge, is the de facto global standard for information exchange, the lynchpin of the Internet revolution.  The standards continue to improve, thanks to the largely-voluntary efforts of The Internet Society and its virtual engineering task forces.  They’re the best example I know of network effects in action, and they’ve created both a platform and a blueprint for other networked goods that make use of the standards.

Beyond standards, network effects are natural features of other information products, including software.  Since the marginal cost of a copy is low (essentially free in the post-media days of Web-based distribution and cloud services), establishing market share can happen at relatively low cost.  Once a piece of software (Microsoft Windows, AOL Instant Messenger in the old days, Facebook and Twitter more recently) starts ramping up the curve, it gains considerable momentum, which may be all it takes to beat out a rival or displace an older leader.  At saturation, a software product becomes, in essence, the standard.

From a legal standpoint, unfortunately, market saturation begins to resemble an illegal monopoly, especially when viewed through the lens of industrial age ideas about markets and competition.  (That, of course, is the lens that even 21st-century regulators still use.)  But what legal academics, notably Columbia’s Tim Wu, misunderstand about this phenomenon is that such products dominate for only a relatively short time.  These “information empires,” as Wu calls them, are short-lived, but not, as Wu argues, because regulators cut them down.

Even without government intervention, information products are replaced at accelerating speeds by new disruptors relying on new (or greatly improved) technologies, themselves the beneficiaries of network effects.  The actual need for legal intervention is rare.  Panicked interference with the natural cycle, on the other hand, results in unintended consequences that damage emerging markets rather than correcting them.  Distracted by lingering antitrust battles at home and abroad, Microsoft lost momentum in the last decade.  No consumer benefited from that “remedy.”

For more, see “What Makes an Idea a Meme?” on Forbes.


How the SOPA Fight Was Won…For Now

On Forbes yesterday, I posted a detailed analysis of the successful (so far) fight to block quick passage of the Protect-IP Act (PIPA) and the Stop Online Piracy Act (SOPA). (See “Who Really Stopped SOPA, and Why?”) I’m delighted that the article, despite its length, has gotten such a positive response.

As regular readers know, I’ve been following these bills closely from the beginning, and made several trips to Capitol Hill to urge lawmakers to think more carefully about some of the more half-baked provisions.

But beyond traditional advocacy–of which there was a great deal–something remarkable happened in the last several months. A new, self-organizing protest movement emerged on the Internet, using social news and social networking tools including Reddit, Tumblr, Facebook and Twitter to stage virtual teach-ins, sit-ins, boycotts, and other protests.