
On NSA Surveillance, Why We're Doomed to Repeat History

***Cross-posted from Forbes.com***

It was, to paraphrase Yogi Berra, déjà vu all over again. Fielding calls last week from journalists about reports the NSA had been engaged in massive and secret data mining of phone records and Internet traffic, I couldn’t help but wonder why anyone was surprised by the so-called revelations.

Not only had the surveillance been going on for years, but the activity had been reported all along—at least outside the mainstream media. The programs involved have been the subject of longstanding concern and vocal criticism by advocacy groups on both the right and the left.

For those of us who had been following the story for a decade, this was no “bombshell.” No “leak” was required. There was no need for an “exposé” of what had long since been exposed.

As the Cato Institute’s Julian Sanchez and others reminded us, the NSA’s surveillance activities, and many of the details breathlessly reported last week, weren’t even secret. They come up regularly in Congress, for example during hearings on renewal of the USA Patriot Act and the Foreign Intelligence Surveillance Act, the principal laws that govern the activity.

In those hearings, civil libertarians (Republicans and Democrats) show up to complain about the scope of the law and its secret enforcement, and are shot down as being soft on terrorism. The laws are renewed and even extended, and the story goes back to sleep.

But for whatever reason, the mainstream media, like the corrupt Captain Renault in “Casablanca,” collectively found itself last week “shocked, shocked” to discover widespread, warrantless electronic surveillance by the U.S. government. Surveillance they’ve known about for years.

Let me be clear. As one of the long-standing critics of these programs, and especially their lack of oversight and transparency, I have no objection to renewed interest in the story, even if the drama with which it is being reported smells more than a little sensational with a healthy whiff of opportunism.

In a week in which the media did little to distinguish itself, for example, The Washington Post stood out, and not in a good way. As Ed Bott detailed in a withering post for ZDNet on Saturday, the Post substantially revised its most incendiary article, a Thursday piece that originally claimed nine major technology companies had provided direct access to their servers as part of the Prism program.

That “scoop” generated more froth than the original “revelation” that Verizon had been complying with government demands for customer call records.

Except that the Post’s sole source for its claims turned out to be a PowerPoint presentation of “dubious provenance.” A day later, the editors had removed the most thrilling but unsubstantiated revelations about Prism from the article. Yet in an unfortunate and baffling Orwellian twist, the paper made absolutely no mention of the “correction.” As Bott points out, that violated not only common journalistic practice but the paper’s own revision and correction policy.

All this and much more, however, would have been in the service of a good cause–if, that is, it led to the actual debate about electronic surveillance that we’ve needed for over a decade.

Unfortunately, it won’t. The mainstream media will move on to the next story soon enough, whether it’s a natural or a man-made disaster.

And outside the Fourth Estate, few people will care or even notice when the scandal dies. However they feel this week, most Americans simply aren’t informed or bothered enough about wholesale electronic surveillance to force any real accountability, let alone reform. Those who are up in arms today might ask themselves where they were for the last decade or so, and whether their righteous indignation now is anything more than just that.

As Politico’s James Hohmann noted on Saturday, “Government snooping gets civil libertarians from both parties exercised, but this week’s revelations are likely to elicit a collective yawn from voters if past polling is any sign.”

Why so pessimistic? I looked over what I’ve written on this topic in the past, and found the following essay, written in 2008, which appeared in slightly different form in my 2009 book, “The Laws of Disruption.” It puts the NSA’s programs in historical context, and tries to present both the costs and benefits of how they’ve been implemented. It points out why at least some aspects of these government activities are likely illegal, and what should be done to rein them in.

What I describe is just as scandalous as, if not more scandalous than, anything that came out last week.

Yet I present it below with the sad realization that if I were writing it today–five years later–I wouldn’t need to change a single word. Except maybe the last sentence. And then, just maybe.

Searching Bits, Seizing Information

U.S. citizens are protected from unreasonable search and seizure of their property by their government. In the Constitution, that right is enshrined in the Fourth Amendment, which was enacted in response to warrantless searches by British agents in the run-up to the Revolutionary War. Over the past century, the Supreme Court has increasingly seen the Fourth Amendment as a source of protection for personal space—the right to a “zone of privacy” that governments can invade only with probable cause that evidence of a crime will be revealed.

Under U.S. law, Americans have little in the way of protection of their privacy from businesses or from each other. The Fourth Amendment is an exception, albeit one that applies only to government.

But digital life has introduced new and thorny problems for Fourth Amendment law. Since the early part of the twentieth century, courts have struggled to extend the “zone of privacy” to intangible interests—a right to privacy, in other words, in one’s information. But “search” and “seize” imply real-world actions. People and places can be searched; property can be seized.

Information, on the other hand, need not take physical form, and can be reproduced infinitely without damaging the original. Since copies of data may exist, however temporarily, on thousands of random computers, in what sense do netizens have “property” rights to their information? Does intercepting data constitute a search or a seizure or neither?

The law of electronic surveillance avoids these abstract questions by focusing instead on a suspect’s expectations. Courts reviewing challenged investigations ask simply whether the suspect believed the information acquired by the government was private and whether his expectation of privacy was reasonable.

It is not the actual search and seizure that the Fourth Amendment forbids, after all, but unreasonable search and seizure. So the legal analysis asks what, under the circumstances, is reasonable. If you are holding a loud conversation in a public place, it isn’t reasonable for you to expect privacy, and the police can take advantage of whatever information they overhear. Most people assume, on the other hand, that data files stored on the hard drive of a home computer are private and cannot be copied without a warrant.

One problem with the “reasonable expectation” test is that as technology changes, so do user expectations. The faster the Law of Disruption accelerates, the more difficult it is for courts to keep pace. Once private telephones became common, for example, the Supreme Court required law enforcement agencies to follow special procedures for the search and seizure of conversations—that is, for wiretaps. Congress passed the first wiretap law, known as Title III, in 1968. As information technology has revolutionized communications and as user expectations have evolved, the courts and Congress have been forced to revise Title III repeatedly to keep it up to date.

In 1986, the Electronic Communications Privacy Act amended Title III to include new protection for electronic communications, including e-mail and communications over cellular and other wireless technologies. A model of reasonable lawmaking, the ECPA ensured these new forms of communication were generally protected while closing a loophole for criminals who were using them to evade the police. (By 2005, 92 percent of wiretaps targeted cell phones.)

As telephone service providers multiplied and networks moved from analog to digital, a 1994 revision required carriers to build in special access for investigators to get around new features such as call forwarding. Once a Title III warrant is issued, law enforcement agents can now simply log in to the suspect’s network provider and receive real-time streams of network traffic.

Since 1968, Title III has maintained an uneasy truce between the rights of citizens to keep their communications private and the ability of law enforcement to maintain technological parity with criminals. As the digital age progresses, this balance is harder to maintain. With each cycle of Moore’s Law, criminals discover new ways to use digital technology to improve the efficiency and secrecy of their operations, including encryption, anonymous e-mail remailers, and private telephone networks. During the 2008 terrorist attacks in Mumbai, for example, co-conspirators used television reports of police activity to keep the gunmen at various sites informed, using Internet telephones that were hard to trace.

As criminals adopt new technologies, law enforcement agencies predictably call for new surveillance powers. China alone employs more than 30,000 “Internet police” to monitor online traffic as part of what is sometimes known as the “Great Firewall of China.” The government apparently intercepts all Chinese-bound text messages and scans them for restricted words, including democracy, earthquake, and milk powder.

The words are removed from the messages, and a copy of the original, along with identifying information, is stored on the government’s system. When Canadian human rights activists recently hacked into Chinese government networks, they discovered a cluster of message-logging computers that had recorded more than a million censored messages.
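To make the mechanics concrete, here is a minimal sketch, in Python, of that kind of keyword filtering and logging. The restricted-word list, message fields, and in-memory log are hypothetical stand-ins chosen for illustration; they are not drawn from any actual system.

```python
# A toy sketch of keyword filtering and logging. The restricted-word list,
# message fields, and in-memory "log" are hypothetical stand-ins; this is
# illustrative only, not a description of any real system.
import re

RESTRICTED_WORDS = {"democracy", "earthquake", "milk powder"}

censored_log = []  # stands in for the message-logging computers described above


def filter_message(sender_id: str, recipient_id: str, text: str) -> str:
    """Strip restricted words from a message; log the original if any match."""
    matches = [w for w in RESTRICTED_WORDS if w in text.lower()]
    if not matches:
        return text  # delivered unchanged

    # Keep a copy of the original along with identifying information.
    censored_log.append({
        "sender": sender_id,
        "recipient": recipient_id,
        "original": text,
        "matched": matches,
    })

    # Remove the restricted words before delivery.
    cleaned = text
    for word in matches:
        cleaned = re.sub(re.escape(word), "", cleaned, flags=re.IGNORECASE)
    return cleaned


if __name__ == "__main__":
    delivered = filter_message("user-1", "user-2", "Any news about the earthquake?")
    print(delivered)          # the word has been silently removed
    print(len(censored_log))  # 1 original message logged with sender and recipient
```

The point of the sketch is simply how little machinery such a system needs: a word list, a substring match, and a log that quietly keeps the original.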

Netizens, increasingly fearful that the arms race between law enforcement and criminals will claim their privacy rights as unintended victims, are caught in the middle. Those fears became palpable after the September 11, 2001, terrorist attacks and those that followed in Indonesia, London, and Madrid. The world is now engaged in a war with no measurable objectives for winning, fought against an anonymous and technologically savvy enemy who recruits, trains, and plans assaults largely through international communication networks. Security and surveillance of all varieties are now global priorities, eroding privacy interests significantly.

The emphasis on security over privacy is likely to be felt for decades to come. Some of the loss has already been felt in the real world. To protect ourselves from future attacks, everyone can now expect more invasive surveillance of their activities, whether through massive networks of closed-circuit TV cameras in large cities or increased screening of people and luggage during air travel.

The erosion of privacy is even more severe online. Intelligence is seen as the most effective weapon in a war against terrorists. With or without authorization, law enforcement agencies around the world have been monitoring large quantities of the world’s Internet data traffic. Title III has been extended to private networks and Internet phone companies, who must now insert government access points into their networks. (The FCC has proposed adding other providers of phone service, including universities and large corporations.)

Because of difficulties in isolating electronic communications associated with a single IP address, investigators now demand the complete traffic of large segments of addresses, that is, of many users. Data mining technology is applied after the fact to search the intercepted information for the relevant evidence.

Passed soon after 9/11, the USA Patriot Act went much further. The Patriot Act abandoned many of the hard-fought controls on electronic surveillance built into Title III. New “enhanced surveillance procedures” allow any judge to authorize electronic surveillance and lower the standard for warrants to seize voice mails.

The FBI was given the power to conduct wiretaps without warrants and to issue so-called national security letters to gag network operators from revealing their forced cooperation. Under a 2006 extension, FBI officials were given the power to issue NSLs that silenced the recipient forever, backed up with a penalty of up to five years in prison.

Gone is even a hint of the Supreme Court’s long-standing admonitions that search and seizure of information should be the investigatory tool of last resort.

Despite the relaxed rules, or perhaps inspired by them, the FBI acknowledged in 2007 that it had violated Title III and the Patriot Act repeatedly, illegally searching the telephone, Internet, and financial records of an unknown number of Americans. A Justice Department investigation found that from 2002 to 2005 the bureau had issued nearly 150,000 NSLs, a number the bureau had grossly under-reported to Congress.

Many of these letters violated even the relaxed requirements of the Patriot Act. The FBI habitually requested not only a suspect’s data but also those of people with whom he maintained regular contact—his “community of interest,” as the agency called it. “How could this happen?” FBI director Robert Mueller asked himself at the 2007 Senate hearings on the report. Mueller didn’t offer an answer.

Ultimately, a federal judge declared the FBI’s use of NSLs unconstitutional on free-speech grounds, a decision that is still on appeal.

The National Security Agency, which gathers foreign intelligence, undertook an even more disturbing expansion of its electronic surveillance powers.

Since the Constitution applies only within the U.S., foreign intelligence agencies are not required to operate within the limits of Title III. Instead, their information-gathering practices are held to a much more relaxed standard specified in the Foreign Intelligence Surveillance Act. FISA allows warrantless wiretaps anytime that intercepted communications do not include a U.S. citizen and when the communications are not conducted through U.S. networks. (The latter restriction was removed in 2008.)

Even these minimal requirements proved too restrictive for the agency. Concerned that U.S. operatives were organizing terrorist attacks electronically with overseas collaborators, President Bush authorized the NSA to bypass FISA and conduct warrantless electronic surveillance at will as long as one of the parties to the information exchange was believed to be outside the United States.

Some of the president’s staunchest allies found the NSA’s plan, dubbed the Terrorist Surveillance Program, of dubious legality. Just before the program became public in 2005, senior officials in the Justice Department refused to reauthorize it.

In a bizarre real-world game of cloak-and-dagger, presidential aides, including future attorney general Alberto Gonzales, rushed to the hospital room of then-attorney general John Ashcroft, who was seriously ill, in hopes of getting him to overrule his staff. Justice Department officials got wind of the end run and managed to get to Ashcroft first. Ashcroft, who was barely able to speak from painkillers, sided with his staff.

Many top officials, including Ashcroft and FBI director Mueller, threatened to resign over the incident. President Bush agreed to stop bypassing the FISA procedure and seek a change in the law to allow the NSA more flexibility. Congress eventually granted his request.

The NSA’s machinations were both clumsy and dangerous. Still, I confess to having considerable sympathy for those trying to obtain actionable intelligence from online activity. Post-9/11 assessments revealed embarrassing holes in the technological capabilities of most intelligence agencies worldwide. (Admittedly, they also revealed repeated failures to act on intelligence that had already been collected.) Initially at least, the public demanded tougher measures to avoid future attacks.

Keeping pace with international terror organizations and still following national laws, however, is increasingly difficult. For one thing, communications of all kinds are quickly migrating to the cheaper and more open architecture of the Internet. An unintended consequence of this change is that the nationalities of those involved in intercepted communications are increasingly difficult to determine.

E-mail addresses and instant-message IDs don’t tell you the citizenship or even the location of the sender or receiver. Even telephone numbers don’t necessarily reveal a physical location. Internet telephone services such as Skype give their customers U.S. phone numbers regardless of their actual location. Without knowing the nationality of a suspect, it is hard to know what rights she is entitled to.

The architecture of the Internet raises even more obstacles against effective surveillance. Traditional telephone calls take place over a dedicated circuit connecting the caller and the person being called, making wiretaps relatively easy to establish. Only the cooperation of the suspect’s local exchange is required.

The Internet, however, operates as a single global exchange. E-mails, voice, video, and data files—whatever is being sent is broken into small packets of data. Each packet follows its own path between connected computers, largely determined by data traffic patterns present at the time of the communication.

Data may travel around the world even if its destination is local, crossing dozens of national borders along the way. It is only on the receiving end that the packets are reassembled.
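For readers who want to see the idea in miniature, here is an illustrative Python sketch of breaking a message into packets and reassembling it at the receiver. The packet size and header fields are arbitrary choices for this example, not any real protocol.

```python
# An illustrative sketch of the packetize-and-reassemble idea described above.
# The packet size and header fields are arbitrary choices for this example.
import random

PACKET_SIZE = 8  # bytes of payload per packet (illustrative)


def packetize(message: bytes) -> list[dict]:
    """Break a message into small, sequence-numbered packets."""
    chunks = [message[i:i + PACKET_SIZE] for i in range(0, len(message), PACKET_SIZE)]
    return [{"seq": n, "payload": chunk} for n, chunk in enumerate(chunks)]


def reassemble(packets: list[dict]) -> bytes:
    """Only the receiving end puts the packets back in order."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)


if __name__ == "__main__":
    msg = b"Each packet follows its own path and is reassembled on arrival."
    packets = packetize(msg)
    random.shuffle(packets)  # simulate packets arriving via different routes
    assert reassemble(packets) == msg
    print(f"{len(packets)} packets reassembled into the original message")
```

Because each packet can take a different route and only the endpoints see the whole message, there is no single exchange where an investigator can conveniently tap the full conversation.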

This design, the genius of the Internet, improves network efficiency. It also provides a significant advantage to anyone trying to hide his activities. On the other hand, NSLs and warrantless wiretapping on the scale apparently conducted by the NSA move us frighteningly close to the “general warrant” American colonists rejected in the Fourth Amendment. They were right to revolt over the unchecked power of an executive to do what it wants, whether in the name of orderly government, tax collection, or antiterrorism.

In trying to protect its citizens against future terror attacks, the secret operations of the U.S. government abandoned core principles of the Constitution. Even with the best intentions, governments that operate in secrecy and without judicial oversight quickly descend into totalitarianism. Only the intervention of corporate whistle-blowers, conscientious government officials, courts, and a free press brought the United States back from the brink of a different kind of terrorism.

Internet businesses may be entirely supportive of government efforts to improve the technology of policing. A society governed by laws is efficient, and efficiency is good for business. At the same time, no one is immune from the pressures of anxious customers who worry that the information they provide will be quietly delivered to whichever regulator asks for it. Secret surveillance raises the level of customer paranoia, leading rational businesses to avoid countries whose practices are not transparent.

Partly in response to the NSA program, companies and network operators are increasingly routing information flow around U.S. networks, fearing that even transient communications might be subject to large-scale collection and mining operations by law enforcement agencies. But aside from using private networks and storing data offshore, routing transmissions to avoid some locations is as hard to do as forcing them through a particular network or node.

The real guarantor of privacy in our digital lives may not be the rule of law. The Fourth Amendment and its counterparts work in the physical world, after all, because tangible property cannot be searched and seized in secret. Information, however, can be intercepted and copied without anyone knowing it. You may never know when or by whom your privacy has been invaded. That is what makes electronic surveillance more dangerous than traditional investigations, as the Supreme Court realized as early as 1967.

In the uneasy balance between the right to privacy and the needs of law enforcement, the scales are increasingly held by the Law of Disruption. More devices, more users, more computing power: the sheer volume of information and the rapid evolution of how it can be exchanged have created an ocean of data. Much of it can be captured, deciphered, and analyzed only with great (that is, expensive) effort. Moore’s Law lowers the costs to communicate, raising the costs for governments interested in the content of those communications.

The kind of electronic surveillance performed by the Chinese government is outrageous in its scope, but only the clumsiness of its technical implementation exposed it. Even if governments want to know everything that happens in our digital lives, and even if the law allows them or is currently powerless to stop them, there isn’t enough technology at their disposal to do it, or at least to do it secretly.

So far.

Big Bang Launch of "Big Bang Disruption"–and a Note on Regulatory Implications

In the upcoming issue of Harvard Business Review, my colleague Paul Nunes at Accenture’s Institute for High Performance and I are publishing the first of many articles from an on-going research project on what we are calling “Big Bang Disruption.”

The project is looking at the emerging ecosystem for innovation based on disruptive technologies, following up on work we have done separately and now together over the last fifteen years.

Our chief finding is that the nature of innovation has changed dramatically, calling into question much of the conventional wisdom on business strategy and competition in information-intensive industries–which is to say, these days, every industry.

The drivers of this new ecosystem are ever-cheaper, faster, and smaller computing devices, cloud-based virtualization, crowdsourced financing, collaborative development and marketing, and the proliferation of mobile everything (including, increasingly, not just people but things).

The result is that new innovations now enter the market cheaper, better, and more customizable than the products and services they challenge.  (For example, smartphone-based navigation apps versus standalone GPS devices.)  In the strategy literature, such innovation would be characterized as thoroughly “undisciplined.”  It shouldn’t succeed.  But it does.

So when the disruptor arrives and takes off with a bang, often after a series of low-cost, failed experiments, incumbents have no time for a competitive response.  The old rules for dealing with disruptive technologies, most famously from the work of Harvard’s Clayton Christensen, have become counter-productive.   If incumbents haven’t learned to read the new tea leaves ahead of time, it’s game over.

The HBR article doesn’t go into much depth on the policy implications of this new innovation model, but the book we are now writing will.  The answer should be obvious.

This radical new model for product and service introduction underscores the robustness of market behaviors that quickly and efficiently correct many transient examples of dominance, especially in high-tech markets.

As a general rule (though obviously not one without exceptions), the big bang phenomenon further weakens the case for regulatory intervention.  Market dominance is sustainable for ever-shorter periods of time, with little opportunity for incumbents to exploit it.

A predictable next wave of technology will likely put a quick and definitive end to any “information empires” that have formed from the last generation of technologies.

Or, at the very least, do so more quickly and more cost-effectively than alternative solutions from regulation.  The law, to paraphrase Mark Twain, will still be putting its shoes on while the big bang disruptor has spread halfway around the world.

Unfortunately, much of the contemporary literature on competition policy from legal academics is woefully ignorant of even the conventional wisdom on strategy, not to mention the engineering realities of disruptive technologies already in the market.  Looking at markets solely through the lens of legal theory is, truly, an academic exercise, one with increasingly limited real-world applications.

Indeed, we can think of many examples where legacy regulation actually makes it harder for the incumbents to adapt as quickly as necessary in order to survive the explosive arrival of a big bang disruptor.  But that is a story for another day.

Much more to come.

Related links:

“Why Best Buy is Going out of Business…Gradually,” Forbes.com.

“What Makes an Idea a Meme?”, Forbes.com.

“The Five Most Disruptive Technologies at CES 2013,” Forbes.com.

 

The Latest Leak Makes Even Clearer UN Plans to Take Over Internet Governance

On Friday evening, I posted on CNET a detailed analysis of the most recent proposal to surface from the secretive upcoming World Conference on International Telecommunications, WCIT 12.  The conference will discuss updates to a 1988 UN treaty administered by the International Telecommunication Union, and throughout the year there have been reports that both governmental and non-governmental members of the ITU have been trying to use the rewrite to put the ITU squarely in the Internet business.

The Russian Federation’s proposal, which was submitted to the ITU on Nov. 13, would explicitly bring “IP-based Networks” under the auspices of the ITU, and in particular would substantially, if not completely, change the role of ICANN in overseeing domain names and IP addresses.

According to the proposal, “Member States shall have the sovereign right to manage the Internet within their national territory, as well as to manage national Internet domain names.”  And a second revision, also aimed straight at the heart of today’s multi-stakeholder process, reads:  “Member States shall have equal rights in the international allocation of Internet addressing and identification resources.”

Of course, the Russian Federation, along with other repressive governments, uses every opportunity to gain control over the free flow of information, and sees the Internet as its most formidable enemy.  Earlier this year, Prime Minister Vladimir Putin told ITU Secretary-General Hamadoun Touré that Russia was keen on the idea of “establishing international control over the Internet using the monitoring and supervisory capability of the International Telecommunications Union.”

As I point out in the CNET piece, the ITU’s claims that WCIT has nothing to do with Internet governance and that the agency itself has no stake in expanding its jurisdiction ring more hollow all the time.  Days after receiving the Russian proposal, the ITU wrote in a post on its blog that, “There have not been any proposals calling for a change from the bottom-up multistakeholder model of Internet governance to an ITU-controlled model.”

This would appear to be an outright lie, and also a contradiction of an earlier acknowledgment by Dr. Touré.  In a September interview, Touré told Bloomberg BNA that “Internet Governance as we know it today” concerns only “Domain Names and addresses.  These are issues that we’re not talking about at all,” Touré said. “We’re not pushing that, we don’t need to.”

The BNA article continues:

Touré, expanding on his emailed remarks, told BNA that the proposals that appear to involve the ITU in internet numbering and addressing were preliminary and subject to change.

‘These are preliminary proposals,’ he said, ‘and I suspect that someone else will bring another counterproposal to this, we will analyze it and say yes, this is going beyond, and we’ll stop it.’

Another tidbit from the BNA Interview that now seems ironic:

Touré disagreed with the suggestion that numerous proposals to add a new section 3.5 to the ITRs might have the effect of expanding the treaty to internet governance.

‘That is telecommunication numbering,’ he said, something that preceded the internet. Some people, Touré added, will hijack a country code and open a phone line for pornography. ‘These are the types of things we are talking about, and they came before the internet.’

I haven’t seen all of the proposals, of course, which are technically secret.  But the Russian proposal’s most outrageous provisions are contained in a proposed new section 3A, titled “IP-based Networks.”

There’s more on the ITU’s subterfuge in Friday’s CNET piece, as well as these earlier posts:

1.  “Why is the UN Trying to Take Over the Internet?” Forbes.com, Aug. 9, 2012.

2.  “UN Agency Reassures:  We Just Want to Break the Internet, Not Take it Over,” Forbes.com, Oct. 1, 2012.

Praise for California passage of law protecting VoIP from local utility regulators

On Friday, California Governor Jerry Brown signed SB 1161, which prohibits the state’s Public Utilities Commission from any new regulation of Voice over Internet Protocol or other IP-based services without the legislature’s authorization.

California now joins over twenty states that have enacted similar legislation.

The bill, which is only a few pages long, was introduced by State Senator Alex Padilla (D) in February.  It passed both houses of the California legislature with wide bi-partisan majorities.

California lawmakers and the governor are to be praised for quickly enacting this sensible piece of legislation.

Whatever the cost-benefit of continued state regulation of traditional utilities such as water, power, and landline telephone services, it’s clear that the toolkit of state and local PUCs is a terrible fit for Internet services such as Skype, Google Voice or Apple’s FaceTime.

Historically, as I argued in a Forbes piece last month, the imposition of public utility status on a service provider has been an extreme response to an extreme situation—a monopoly provider, unlikely to have competition because of the high cost of building  and operating competing infrastructure (so-called “natural monopoly”), offering a service that is indispensable to everyday life.

Service providers meeting that definition are transformed by PUC oversight into entities that are much closer to government agencies than private companies.  The PUC sets and modifies the utility’s pricing in excruciating detail.  PUC approval is required for each and every change or improvement to the utility’s asset base, or to add new services or retire obsolete offerings.

In exchange for offering service to all residents, utilities in turn are granted eminent domain and rights of way to lay and maintain pipes, wires and other infrastructure.

VoIP services may resemble traditional switched telephone networks, but they have none of the features of a traditional public utility.  Most do not even charge for basic service, nor do they rely on their own dedicated infrastructure.  Indeed, the reason VoIP is so much cheaper to offer than traditional telephony is that it can take advantage of the existing and ever-improving Internet as its delivery mechanism.

Because entry is cheap, VoIP providers have no monopoly, natural or otherwise.  In California, according to the FCC, residents have their choice of over 125 providers—more than enough competition to ensure market discipline.

Nor would residents be in any way helped by interposing a regulator to review and pre-approve each and every change to a VoIP provider’s service offerings.  Rather, the lightning-fast evolution of Internet services provides perhaps the worst mismatch possible for the deliberate and public processes of a local PUC.

Software developers don’t need eminent domain.

But the most serious mismatch between PUCs and VoIP providers is that there is little inherently local about VoIP offerings.  Where a case can be made for local oversight of public utilities operating extensive–even pervasive–local infrastructure, it’s hard to see what expertise a local PUC brings to the table in supervising a national or even international VoIP service.

On the other hand, it’s not hard to imagine the chaos and uncertainty VoIP providers and their customers would face if they had to satisfy fifty different state PUCs, not to mention municipal regulators and regulators in other countries.

In most cases that would mean dealing with regulators on a daily basis, on every minor aspect of a service offering.  In the typical PUC relationship, the regulator becomes the true customer and the residents mere “rate-payers” or even just “meters.”

Public utilities are not known for their constant innovation, and for good reason.

Whatever oversight VoIP providers require, local PUCs are clearly the wrong choice.  It’s no surprise, then, that SB 1161 was endorsed by major Silicon Valley trade groups, including TechNet, TechAmerica, and the Silicon Valley Leadership Group.

The law is a win for California residents and California businesses—both high-tech and otherwise.

Links

  1. “Government Control of Net is Always a Bad Idea,” CNET News.com, June 4, 2012.
  2. “Memo to Jerry Brown:  Sign SB 1161 for all Internet users,” CNET News.com, August 30, 2012.
  3. “The Madness of Regulating VoIP as a Public Utility,” Forbes.com, Sept. 10, 2012.
  4. “Brown Endorses Hands off Stance on Internet Calls,” The San Francisco Chronicle, Sept. 28, 2012.

What Google Fiber, Gig.U and US Ignite Teach us About the Painful Cost of Legacy Regulation

On Forbes today, I have a long article on the progress being made to build gigabit Internet testbeds in the U.S., particularly by Gig.U.

Gig.U is a consortium of research universities and their surrounding communities created a year ago by Blair Levin, an Aspen Institute Fellow and, recently, the principal architect of the FCC’s National Broadband Plan.  Its goal is to work with private companies to build ultra high-speed broadband networks with sustainable business models.

Gig.U, Google Fiber’s Kansas City project, and the White House’s recently announced US Ignite project spring from similar origins and have similar goals.  Their general belief is that by building ultra high-speed broadband in selected communities, consumers, developers, network operators, and investors will get a clear sense of the true value of Internet speeds that are 100 times as fast as those available today through high-speed cable-based networks.  And then go build a lot more of them.

Google Fiber, for example, announced last week that it would be offering fully-symmetrical 1 Gbps connections in Kansas City, perhaps as soon as next year.  (By comparison, my home broadband service from Xfinity is 10 Mbps download and considerably slower going up.)

US Ignite is encouraging public-private partnerships to build demonstration applications that could take advantage of next generation networks and near-universal adoption.  It is also looking at the most obvious regulatory impediments at the federal level that make fiber deployments unnecessarily complicated, painfully slow, and unduly expensive.

I think these projects are encouraging signs of native entrepreneurship focused on solving a worrisome problem:  the U.S. is nearing a dangerous stalemate in its communications infrastructure.  We have the technology and scale necessary to replace much of our legacy wireline phone networks with native IP broadband.  Right now, ultra high-speed broadband is technically possible by running fiber to the home.  Indeed, Verizon’s FiOS network currently delivers 300 Mbps broadband and is available to some 15 million homes.

But the kinds of visionary applications in smart grid, classroom-free education, advanced telemedicine, high-definition video, mobile backhaul and true teleworking that would make full use of a fiber network don’t really exist yet.  Consumers (and many businesses) aren’t demanding these speeds, and Wall Street isn’t especially interested in building ahead of demand.  There’s already plenty of dark fiber deployed, the legacy of earlier speculation that so far hasn’t paid off.

So the hope is that by deploying fiber to showcase communities and encouraging the development of demonstration applications, entrepreneurs and investors will get inspired to build next generation networks.

Let’s hope they’re right.

What interests me personally about the projects, however, is what they expose about regulatory disincentives that unnecessarily and perhaps fatally retard private investment in next-generation infrastructure.  In the Forbes piece, I note almost a dozen examples from the Google Fiber development agreement where Kansas City voluntarily waived permits, fees, and plodding processes that would otherwise delay the project.  As well, in several key areas the city actually commits to cooperate and collaborate with Google Fiber to expedite and promote the project.

As Levin notes, Kansas City isn’t offering any funding or general tax breaks to Google Fiber.  But the regulatory concessions, which implicitly acknowledge the heavy burden imposed on those who want to deploy new privately-funded infrastructure (many of them the legacy of the early days of cable TV deployments), may still be enough to “change the math,” as Levin puts it, making otherwise unprofitable investments justifiable after all.

Just removing some of the regulatory debris, in other words, might itself be enough to break the stalemate that makes building next generation IP networks unprofitable today.

The regulatory cost puts a heavy thumb on the side of the scale that discourages investment.  Indeed, as fellow Forbes contributor Elise Ackerman pointed out last week, Google has explicitly said that part of what made Kansas City attractive was the lack of excessive infrastructure regulation, and the willingness and ability of the city to waive or otherwise expedite the requirements that were on the books.  (Despite the city’s promises to bend over backwards for the project, she notes, there have still been expensive regulatory delays that promoted no public values.)

Particularly painful to me was testimony by Google Vice President Milo Medin, who explained why none of the California-based proposals ever had a real chance.  “Many fine California city proposals for the Google Fiber project were ultimately passed over,” he told Congress, “in part because of the regulatory complexity here brought about by [the California Environmental Quality Act] and other rules. Other states have equivalent processes in place to protect the environment without causing such harm to business processes, and therefore create incentives for new services to be deployed there instead.”

Ouch.

This is a crucial insight.  Our next-generation communications infrastructure will surely come, when it does come, from private investment.  The National Broadband Plan estimated it would take $350 billion to get 100 Mbps Internet to 100 million Americans through a combination of fiber, cable, satellite and high-speed mobile networks.  Mindful of reality, however, the plan didn’t even bother to consider the possibility of full or even significant taxpayer funding to reach that goal.

Unlike South Korea, we aren’t geographically small, with a largely urban population living in just a few cities.  We don’t have a largely nationalized and taxpayer-subsidized communications infrastructure.  On a per-person basis, deploying broadband in the U.S. is much harder, more complicated, and more expensive than it is in many competing nations in the global economy.

Of course, nationwide fiber and mobile deployments by network operators including Verizon and AT&T can’t rely on gimmicks like Google Fiber’s hugely successful competition, where 1,100 communities applied to become a test site.  Nor can they, like Gig.U, cherry-pick research university towns, which have the most attractive demographics and density to start with.  Nor can they simply call themselves start-ups and negotiate the kind of freedom from regulation that Google and Gig.U’s membership can.

Large-scale network operators need to build, if not everywhere, then to an awful lot of somewheres.  That’s a political reality of their size and operating model, as well as the multi-layer regulatory environment in which they must operate.  And it’s a necessity of meeting the ambitious goal of near-universal high-speed broadband access, and of many of the applications that would use it.

Under the current regulatory and economic climate, large-scale fiber deployment has all but stopped for now.  Given the long lead-time for new construction, we need to find ways to restart it.

So everyone who agrees that gigabit Internet is a critical element in U.S. competitiveness in the next decade or so ought to look closely at the lessons, intended or otherwise, of the various testbed projects.  They are exposing in stark detail a dangerous and useless legacy of multi-level regulation that makes essential private infrastructure investment economically impossible.

Don’t get me wrong.  The demonstration projects and testbeds are great.  Google Fiber, Gig.U, and US Ignite are all valuable efforts.  But if we want to overcome our “strategic bandwidth deficit,” we’ll need something more fundamental than high-profile projects and demonstration applications.  To start with, we’ll need a serious housecleaning of legacy regulation at the federal, state, and local level.

Regulatory reform might not be as sexy as gigabit Internet demonstrations, but the latter ultimately won’t make much difference without the former.  Time to break out the heavy demolition equipment—for both.

Updates to the Media Page

We’ve added over a dozen new posts to the Media page, covering some of the highlights in articles and press coverage for April and May, 2012.

Topics include privacy, security, copyright, net neutrality, spectrum policy, the continued fall of Best Buy and antitrust.

The new posts include links to Larry’s inaugural writing for several publications, including Techdirt, Fierce Mobile IT, and Engine Advocacy.

There are also several new video clips, including Larry’s interview of Andrew Keen, author of the provocative new book, “Digital Vertigo,” which took place at the Privacy Identity and Innovation conference in Seattle.

June was just as busy as the rest of the year, and we hope to catch up with the links soon.