(Larry's April 16th article for The Washington Post last week has been reprinted frequently...including here!)

Few revolutions can be said to have lasted for half a century, or to have wrought disruptive change at a predictable pace.

But that’s exactly the case with the digital revolution, which has seen computing get dramatically faster, cheaper and smaller every few years since the 1950s.

The remarkable prophecy that anticipated that phenomenon is known as Moore’s Law, which turns 50 on April 19. In a four-page article for Electronics magazine, Gordon Moore, then head of R&D at Fairchild Semiconductor and later the long-time chief executive of Intel, made his famous prediction that, for the foreseeable future, the number of components on semiconductors or “chips” would double roughly every year (a pace he later revised to every two years), even as the cost per chip held constant.

Moore originally thought his prediction would hold for a decade, but half a century later it’s still going strong. Computing power — and related components of the digital revolution including memory, displays, sensors, digital cameras, software and communications bandwidth — continue to get faster, cheaper, and smaller roughly at the pace Moore anticipated.

Moore’s Law is driven, as Moore explained, largely by economies of scale in producing chips, improvements in design, and the relentless miniaturization of component parts.  The smaller the chip, the cheaper the raw materials. Transistors, the building blocks for chips, have fallen in price from $30 each 50 years ago to roughly a nanodollar today — about $0.000000001.

That low price encourages more uses, which raises production and lowers costs in a virtuous cycle. Miniaturization also means signals have a shorter distance to travel within the chip, so instructions execute faster. Smaller, denser chips are consequently not only cheaper to make, they use less power and perform better. Much better. With each cycle of Moore’s Law, computing power doubles, even as price holds constant.

It is the prime example of what Paul Nunes of Accenture and I call an “exponential technology.” It’s hard to get your head around the impact of a core commodity whose price and performance have improved by a factor of two every two years for half a century.  (Compare that to commodities such as oil or meat, which get worse and more expensive.) One example I use to help make Moore’s Law concrete is to compare the performance, cost and size of the Univac I, sold in the mid-1950s, with devices available now.
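The cumulative effect of that doubling is easy to underestimate. A back-of-the-envelope sketch, using only the two-year doubling period and 50-year span cited above (illustrative arithmetic, not a precise engineering claim):

```python
# Cumulative improvement under Moore's Law:
# doubling every two years, sustained for 50 years.
years = 50
doubling_period = 2               # years per doubling
doublings = years // doubling_period
improvement = 2 ** doublings      # total price/performance gain

print(doublings)    # 25 doublings over half a century
print(improvement)  # 33554432 -> a roughly 33-million-fold improvement
```

A 33-million-fold improvement at constant price is the kind of number no other commodity in economic history approaches.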

Today’s home video game consoles, for example, have roughly the same processing power as one billion Univac I’s. Even without adjusting for inflation, the cost of a billion Univacs in 1950s dollars would still exceed the entire money supply of the world today.  And had it been possible to buy that many computers in the 1950s, you would have needed an area about the size of Iceland just to store them. But the consoles cost about $400, and fit comfortably on a shelf. And they are marketed not to the world’s largest enterprises but to children — who probably have a much better idea how to use a billion Univacs anyway.

There are few if any examples of basic commodities improving on so many dimensions at once, and certainly not at such a rapid and predictable pace.  As Moore reflected recently in an interview with IEEE Spectrum, “The semiconductor technology has some unique characteristics that I don’t see duplicated many other places. By making things smaller, everything gets better. The performance of devices improves; the amount of power dissipated decreases; the reliability increases as we put more stuff on a single chip. It’s a marvelous deal.” A marvelous deal, that is, for consumers. Every few years, the capabilities of our growing number of electronic devices — smartphones, TVs, game consoles and other consumer electronics — double, while the price for the previous generation collapses.

Thanks to Moore’s Law, tomorrow’s digital products are certain to be better and cheaper. Your newest phone does far more than your last one.  It has a better display, more memory, longer-lasting battery and more sensors for tracking you and your environment. The price for 12 megapixel digital cameras has fallen from $24,000 in 1995 to a few hundred dollars today. Mobile broadband networks, built of electronic components, have advanced steadily from 2G to 3G to 4G and beyond, even as unit costs for data transmission plummet.

Last year, a writer for The Huffington Post found a 1991 newspaper ad for Radio Shack and calculated that the cost of the 15 devices listed would have been, back then, over $3,000.  Today, all 15 — including a camcorder, a CD player and a cellular phone — have been replaced by superior equivalents on a smartphone costing, unlocked, about $600.  And the smartphone does far more, in a single, smaller, integrated device.

Economists call this phenomenon “consumer surplus” — the excess value of a good beyond the actual price a consumer pays; what you would have been willing to pay, in other words, if you had to.  The difference between the price for the phone and $3,000 represents one estimate — and a conservative one — of the consumer surplus created by the deflationary effects of Moore’s Law.
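The arithmetic behind that conservative estimate is simple; a minimal sketch using only the figures above (the $3,000 1991 Radio Shack bundle and a $600 unlocked smartphone, ignoring inflation, which only makes the bound more conservative):

```python
# Lower-bound consumer-surplus estimate from the Radio Shack example.
price_1991_bundle = 3000   # dollars: 15 devices from the 1991 ad
price_smartphone = 600     # dollars: one unlocked smartphone today

surplus = price_1991_bundle - price_smartphone
print(surplus)  # 2400 -> at least $2,400 of surplus per buyer
```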

Our expectation of increasing consumer surplus, however, generates a tremendous disruptive force for computer-related businesses.  We’ve now been trained to anticipate computing’s relentless drive into the realm of the better, cheaper and smaller.  That puts profound pressure on the makers of most consumer electronics to deliver new and innovative products every year or two — or else.

And while both manufacturers and the consumer benefit from the falling cost of digital components, in most cases consumers are keeping the lion’s share of the savings.

As a result, businesses in the most digital industries, including communications, electronics, software and digital entertainment, long ago stopped worrying about what their competitors are going to do next and started worrying instead about what the technology is going to make possible. Put another way, they all share the same competitor — Moore’s Law.

Over the last few decades, that phenomenon has expanded.  With the advent of cloud computing and the global Internet, most information-intensive industries have also been subjected to radical and continuous transformation, or what we have called “Big Bang Disruption.”

As the cost of collecting, storing, processing and displaying information falls, the supply chains associated with these businesses are being rebuilt on digital platforms. Think of financial services, newspapers and magazines, music and film, health care, education and even government services, where both the threats and opportunities seem to multiply overnight.

As Moore’s Law enters its second half-century, the echoes of that disruptive tsunami have now reached the shores of even the most non-digital businesses. In the next phase of the digital revolution, manufacturing is about to be upended by 3D printing and nanotechnology.  Agriculture is deconstructing in response to better and cheaper sensors, while transportation is girding itself for revolutionary change from autonomous vehicles, smart roads and drones.

And every dumb item in commerce, from individual light bulbs to your refrigerator, is now getting low-cost computing intelligence and network connectivity, a phenomenon known as the Internet of Things.

Just last month, Amazon introduced the Dash, a system of free WiFi-connected buttons that users can attach to everyday items such as razor blades and laundry detergent.  When it’s time to replace the item, you simply press the button to order it.

Over time, of course, the computing and communications intelligence will be built in, along with the ability for the item to monitor and report on its own condition.  You’ll be able to track electricity and water usage, be alerted when something needs maintenance, and authorize the things in your life to automatically reorder themselves when needed.

As trillions of items communicate their status up the supply chain, meanwhile, every step from design to production, distribution, marketing and sales will be reinvented to become vastly more efficient, generating still more consumer surplus.

In every one of these examples, Moore’s Law is the life-blood of the innovators.  It’s the uber-disruptor.

So even as the better and cheaper revolution is a boon to consumers, it’s causing increased anxiety for businesses, especially those just now starting to feel its full effect.  Every industry must learn to keep pace with exponential improvements in core commodities, and to respond to increasingly demanding and influential consumers.

For executives trained to manage for incremental improvement — the guiding principle of the industrial revolution — exponential innovation presents both a profound opportunity and an existential threat.  Gordon Moore used to say that if the auto industry had been built on exponential technologies, today’s cars would get a million miles per gallon of fuel and travel several hundred thousand mph.  A new Rolls-Royce would cost less than parking it overnight.  (It would also be only 2 inches long.)

That kind of change is not for the timid.  In a world dominated by Moore’s Law, many businesses don’t respond in time, instead going down with the ship.  Entrepreneurs thrive while managers retire.

Consider some of the many goods and services — some digital, some physical — already displaced by your smartphone, including address books, transistor radios, remote controls, taxicab dispatchers and maps.

Not only have these staple items become completely obsolete, but so have the businesses that made, sold, advertised and serviced them.  Look down the main shopping street where you live, and you’ll see empty storefronts that used to house office supply stores, movie theaters, camera shops, bookstores, travel agents, currency exchanges and more.  Big box retailers and shopping malls have been marginalized; even electronics retailer Radio Shack finally succumbed to digital alternatives built from the components the stores sold.

And in the next generation of digital disruption, that list may be supplemented by post offices, ATMs, locksmiths, real estate agents and rent-a-car offices.

Some enterprises are flexible enough to make the transformation, and do so elegantly.  Philips Lighting, for example, anticipated the exponential power of LED lighting far enough in advance to get out of the incandescent business it both created and dominated for over a century, becoming a different company in the process.

Kodak, on the other hand, which held some of the best patents for digital photography, still couldn’t bring itself to commit to a future without film and chemicals, and wound up going from industry leader to bankruptcy in just a few years.

In that sense, Moore’s Law has acted as a kind of accelerant to economist Joseph Schumpeter’s often-quoted observation that capitalism proceeds in “perennial gales of creative destruction.” As every business becomes digital, the storms become that much more frequent and that much more intense.

This “new normal” for business won’t be ending any time soon.  Despite regular predictions of the end of Moore’s Law, the engineers just keep finding new ways to keep it going, using new materials, improved manufacturing techniques, and ever-greater economies of scale.

Gordon Moore, for one, is confident.  When asked about the rule he initially predicted would stay in force for a decade, Moore recently said, “I have never quite predicted the end of it. I’ve said I could never see more than the next couple of generations, and after that it looked like [we’d] hit some kind of wall. But those walls keep receding.”

As consumers, we’ll happily walk through each wall as it fades away, revealing the next great innovation.

But businesses will have to learn to jump.

Last week, I participated in a program co-sponsored by the Progressive Policy Institute, the Lisbon Council, and the Georgetown Center for Business and Public Policy on "Growing the Transatlantic Digital Economy." The complete program, including keynote remarks from EU VP Neelie Kroes and U.S. Under Secretary of State Catherine A. Novelli, is available below.

My remarks reviewed worrying signs of old-style interventionist trade practices creeping into the digital economy in new guises, and urged traditional governments to stay the course (or correct it) on leaving the Internet ecosystem largely to its own organic forms of regulation and market correctives:

Vice President Kroes’s comments underscore an important reality about innovation and regulation. Innovation, thanks to exponential technological trends including Moore’s Law and Metcalfe’s Law, gets faster and more disruptive all the time, a phenomenon my co-author and I call “Big Bang Disruption.”

Regulation, on the other hand, happens at the same pace it always has (at best). Even the most well-intentioned regulators, and I certainly include Vice President Kroes in that list, find in retrospect that interventions aimed at heading off possible competitive problems and potential consumer harms rarely achieve their objectives and, indeed, often generate harmful unintended consequences.

This is not a failure of government. The clock speeds of innovation and regulation are simply different, and diverging faster all the time. The Internet economy has been governed from its inception by the engineering-driven multistakeholder process embodied in the task forces and standards groups that operate under the umbrella of the Internet Society. Innovation, for better or for worse, is regulated more by Moore’s Law than traditional law.

I happen to think the answer is “for better,” but I am not one of those who take that to the extreme in arguing that there is no place for traditional governments in the digital economy. Governments have and continue to play an essential part in laying the legal foundations for the remarkable growth of that economy and in providing incentives if not funding for basic research that might not otherwise find investors. And when genuine market failures appear, traditional regulators can and should step in to correct them as efficiently and narrowly as they can.

Sometimes this has happened. Sometimes it has not. Where in particular I think regulatory intervention is least effective and most dangerous is in regulating ahead of problems—in enacting what the FCC calls “prophylactic rules.” The effort to create legally sound Open Internet regulations in the U.S. has faltered repeatedly, yet in the interim investment in both infrastructure and applications continues at a rapid pace—far outstripping the rest of the world.

The results speak for themselves. U.S. companies dominate the digital economy, and, as Prof. Christopher Yoo has definitively demonstrated, U.S. consumers overall enjoy the best wired and mobile infrastructure in the world at competitive prices. At the same time, those who continue to pursue interventionist regulation in this area often have hidden agendas. Let me give three examples:

1. As we saw earlier this month at the Internet Governance Forum, which I attended along with Vice President Kroes and 2,500 other delegates, representatives of the developing world were told by self-described consumer advocates from the U.S. and the EU that they must reject so-called “zero rated” services, in which mobile network operators partner with service providers including Facebook, Twitter and Wikimedia to provide their popular services to new Internet users without the usage counting against data plans.

Zero rating is an extremely popular tool for helping the two-thirds of the world’s population not currently on the Internet get connected and, likely, graduate from these services to many others. But such services violate the “principle” of neutrality that has mutated from an engineering concept into a nearly religious conviction. And so zero rating must be sacrificed, along with users who are too poor to otherwise join the digital economy.

2. Closer to home, we see the wildly successful Netflix service making a play to hijack the Open Internet debate and turn it into one about back-end interconnection, peering, and transit—engineering features that work so well that, according to the OECD, 99% of interconnection agreements between networks aren’t even written down.

3. And in Europe, there are other efforts to turn the neutrality principle on its head, using it as a hammer not to regulate ISPs but to slow the progress of leading content and service providers, including Apple, Amazon and Google, which have what the French Digital Council and others refer to as non-neutral “platform monopolies” that must be broken up.

To me, these are in fact new faces on very old strategies—colonialism, rent-seeking, and protectionist trade warfare respectively. My hope is that Internet users—an increasingly powerful and independent source of regulatory discipline in the Internet economy—will see these efforts for what they truly are…and reject them resoundingly.

The more we trust (but also verify) the engineers, the faster the Internet economy will grow, both in the U.S. and Europe, and the more our trade in digital goods and services will strengthen the ties between our traditional economies. It’s worked brilliantly for almost two decades.

The alternatives, not so much.

***Cross-posted from Forbes.com***

It was, to paraphrase Yogi Berra, déjà vu all over again. Fielding calls last week from journalists about reports the NSA had been engaged in massive and secret data mining of phone records and Internet traffic, I couldn’t help but wonder why anyone was surprised by the so-called revelations.

Not only had the surveillance been going on for years, the activity had been reported all along—at least outside the mainstream media. The programs involved have been the subject of longstanding concern and vocal criticism by advocacy groups on both the right and the left.

For those of us who had been following the story for a decade, this was no “bombshell.” No “leak” was required. There was no need for an “expose” of what had long since been exposed.

As the Cato Institute’s Julian Sanchez and others reminded us, the NSA’s surveillance activities, and many of the details breathlessly reported last week, weren’t even secret. They come up regularly in Congress, during hearings, for example, about renewal of the USA Patriot Act and the Foreign Intelligence Surveillance Act, the principal laws that govern the activity.

In those hearings, civil libertarians (Republicans and Democrats) show up to complain about the scope of the law and its secret enforcement, and are shot down as being soft on terrorism. The laws are renewed and even extended, and the story goes back to sleep.

But for whatever reason, the mainstream media, like the corrupt Captain Renault in “Casablanca,” collectively found itself last week “shocked, shocked” to discover widespread, warrantless electronic surveillance by the U.S. government. Surveillance they’ve known about for years.

Let me be clear. As one of the long-standing critics of these programs, and especially their lack of oversight and transparency, I have no objection to renewed interest in the story, even if the drama with which it is being reported smells more than a little sensational with a healthy whiff of opportunism.

In a week in which the media did little to distinguish itself, for example, The Washington Post stood out, and not in a good way. As Ed Bott detailed in a withering post for ZDNet on Saturday, the Post substantially revised its most incendiary article, a Thursday piece that originally claimed nine major technology companies had provided direct access to their servers as part of the Prism program.

That “scoop” generated more froth than the original “revelation” that Verizon had been complying with government demands for customer call records.

Except that the Post’s sole source for its claims turned out to be a PowerPoint presentation of “dubious provenance.” A day later, the editors had removed the most thrilling but unsubstantiated revelations about Prism from the article. Yet in an unfortunate and baffling Orwellian twist, the paper made absolutely no mention of the “correction.” As Bott points out, that violated not only common journalistic practice but the paper’s own revision and correction policy.

All this and much more, however, would have been in the service of a good cause--if, that is, it led to an actual debate about electronic surveillance we’ve needed for over a decade.

Unfortunately, it won’t. The mainstream media will move on to the next story soon enough, whether some natural or man-made disaster.

And outside the Fourth Estate, few people will care or even notice when the scandal dies. However they feel this week, most Americans simply aren’t informed or bothered enough about wholesale electronic surveillance to force any real accountability, let alone reform. Those who are up in arms today might ask themselves where they were for the last decade or so, and whether their righteous indignation now is anything more than just that.

As Politico’s James Hohmann noted on Saturday, “Government snooping gets civil libertarians from both parties exercised, but this week’s revelations are likely to elicit a collective yawn from voters if past polling is any sign.”

Why so pessimistic? I looked over what I’ve written on this topic in the past, and found the following essay, written in 2008, which appeared in slightly different form in my 2009 book, “The Laws of Disruption.” It puts the NSA’s programs in historical context, and tries to present both the costs and benefits of how they’ve been implemented. It points out why at least some aspects of these government activities are likely illegal, and what should be done to rein them in.

What I describe is just as scandalous as, if not more scandalous than, anything that came out last week.

Yet I present it below with the sad realization that if I were writing it today--five years later--I wouldn’t need to change a single word. Except maybe the last sentence. And then, just maybe.

Searching Bits, Seizing Information

U.S. citizens are protected from unreasonable search and seizure of their property by their government. In the Constitution, that right is enshrined in the Fourth Amendment, which was adopted in response to warrantless searches by British agents in the run-up to the Revolutionary War. Over the past century, the Supreme Court has increasingly seen the Fourth Amendment as a source of protection for personal space—the right to a “zone of privacy” that governments can invade only with probable cause that evidence of a crime will be revealed.

Under U.S. law, Americans have little in the way of protection of their privacy from businesses or from each other. The Fourth Amendment is an exception, albeit one that applies only to government.

But digital life has introduced new and thorny problems for Fourth Amendment law. Since the early part of the twentieth century, courts have struggled to extend the “zone of privacy” to intangible interests—a right to privacy, in other words, in one’s information. But to “search” and “seize” implies real world actions. People and places can be searched; property can be seized.

Information, on the other hand, need not take physical form, and can be reproduced infinitely without damaging the original. Since copies of data may exist, however temporarily, on thousands of random computers, in what sense do netizens have “property” rights to their information? Does intercepting data constitute a search or a seizure or neither?

The law of electronic surveillance avoids these abstract questions by focusing instead on a suspect’s expectations. Courts reviewing challenged investigations ask simply if the suspect believed the information acquired by the government was private data and whether his expectation of privacy was reasonable.

It is not the actual search and seizure that the Fourth Amendment forbids, after all, but unreasonable search and seizure. So the legal analysis asks what, under the circumstances, is reasonable. If you are holding a loud conversation in a public place, it isn’t reasonable for you to expect privacy, and the police can take advantage of whatever information they overhear. Most people assume, on the other hand, that data files stored on the hard drive of a home computer are private and cannot be copied without a warrant.

One problem with the “reasonable expectation” test is that as technology changes, so do user expectations. The faster the Law of Disruption accelerates, the more difficult it is for courts to keep pace. Once private telephones became common, for example, the Supreme Court required law enforcement agencies to follow special procedures for the search and seizure of conversations—that is, for wiretaps. Congress passed the first wiretap law, known as Title III, in 1968. As information technology has revolutionized communications and as user expectations have evolved, the courts and Congress have been forced to revise Title III repeatedly to keep it up to date.

In 1986, the Electronic Communications Privacy Act amended Title III to include new protection for electronic communications, including e-mail and communications over cellular and other wireless technologies. A model of reasonable lawmaking, the ECPA ensured these new forms of communication were generally protected while closing a loophole for criminals who were using them to evade the police. (By 2005, 92 percent of wiretaps targeted cell phones.)

As telephone service providers multiplied and networks moved from analog to digital, a 1994 revision, the Communications Assistance for Law Enforcement Act (CALEA), required carriers to build in special access for investigators to get around new features such as call forwarding. Once a Title III warrant is issued, law enforcement agents can now simply log in to the suspect’s network provider and receive real-time streams of network traffic.

Since 1968, Title III has maintained an uneasy truce between the rights of citizens to keep their communications private and the ability of law enforcement to maintain technological parity with criminals. As the digital age progresses, this balance is harder to maintain. With each cycle of Moore’s Law, criminals discover new ways to use digital technology to improve the efficiency and secrecy of their operations, including encryption, anonymous e-mail remailers, and private telephone networks. During the 2008 terrorist attacks in Mumbai, for example, co-conspirators used television reports of police activity to keep the gunmen at various sites informed, using Internet telephones that were hard to trace.

As criminals adopt new technologies, law enforcement agencies predictably call for new surveillance powers. China alone employs more than 30,000 “Internet police” to monitor online traffic, part of what is sometimes known as the “Great Firewall of China.” The government apparently intercepts all Chinese-bound text messages and scans them for restricted words including democracy, earthquake, and milk powder.

The words are removed from the messages, and a copy of the original along with identifying information is stored on the government’s system. When Canadian human rights activists recently hacked into Chinese government networks they discovered a cluster of message-logging computers that had recorded more than a million censored messages.

Netizens, increasingly fearful that the arms race between law enforcement and criminals will claim their privacy rights as unintended victims, are caught in the middle. Those fears became palpable after the September 11, 2001, terrorist attacks and those that followed in Indonesia, London, and Madrid. The world is now engaged in a war with no measurable objectives for winning, fought against an anonymous and technologically savvy enemy who recruits, trains, and plans assaults largely through international communication networks. Security and surveillance of all varieties are now global priorities, eroding privacy interests significantly.

The emphasis on security over privacy is likely to be felt for decades to come. Some of the loss has already been felt in the real world. To protect ourselves from future attacks, everyone can now expect more invasive surveillance of their activities, whether through massive networks of closed-circuit TV cameras in large cities or increased screening of people and luggage during air travel.

The erosion of privacy is even more severe online. Intelligence is seen as the most effective weapon in a war against terrorists. With or without authorization, law enforcement agencies around the world have been monitoring large quantities of the world’s Internet data traffic. Title III has been extended to private networks and Internet phone companies, who must now insert government access points into their networks. (The FCC has proposed adding other providers of phone service, including universities and large corporations.)

Because of difficulties in isolating electronic communications associated with a single IP address, investigators now demand the complete traffic of large segments of addresses, that is, of many users. Data mining technology is applied after the fact to search the intercepted information for the relevant evidence.

Passed soon after 9/11, the USA Patriot Act went much further. The Patriot Act abandoned many of the hard-fought controls on electronic surveillance built into Title III. New “enhanced surveillance procedures” allow any judge to authorize electronic surveillance and lower the standard for warrants to seize voice mails.

The FBI was given the power to conduct wiretaps without warrants and to issue so-called national security letters to gag network operators from revealing their forced cooperation. Under a 2006 extension, FBI officials were given the power to issue NSLs that silenced the recipient forever, backed up with a penalty of up to five years in prison.

Gone is even a hint of the Supreme Court’s long-standing admonitions that search and seizure of information should be the investigatory tool of last resort.

Despite the relaxed rules, or perhaps inspired by them, the FBI acknowledged in 2007 that it had violated Title III and the Patriot Act repeatedly, illegally searching the telephone, Internet, and financial records of an unknown number of Americans. A Justice Department investigation found that from 2002 to 2005 the bureau had issued nearly 150,000 NSLs, a number the bureau had grossly under-reported to Congress.

Many of these letters violated even the relaxed requirements of the Patriot Act. The FBI habitually requested not only a suspect’s data but also those of people with whom he maintained regular contact—his “community of interest,” as the agency called it. “How could this happen?” FBI director Robert Mueller asked himself at the 2007 Senate hearings on the report. Mueller didn’t offer an answer.

Ultimately, a federal judge declared the FBI’s use of NSLs unconstitutional on free-speech grounds, a decision that is still on appeal. The National Security Agency, which gathers foreign intelligence, undertook an even more disturbing expansion of its electronic surveillance powers.

Since the Constitution applies only within the U.S., foreign intelligence agencies are not required to operate within the limits of Title III. Instead, their information-gathering practices are held to a much more relaxed standard specified in the Foreign Intelligence Surveillance Act. FISA allows warrantless wiretaps anytime that intercepted communications do not include a U.S. citizen and when the communications are not conducted through U.S. networks. (The latter restriction was removed in 2008.)

Even these minimal requirements proved too restrictive for the agency. Concerned that operatives inside the United States were organizing terrorist attacks electronically with overseas collaborators, President Bush authorized the NSA to bypass FISA and conduct warrantless electronic surveillance at will as long as one of the parties to the information exchange was believed to be outside the United States.

Some of the president’s staunchest allies found the NSA’s plan, dubbed the Terrorist Surveillance Program, of dubious legality. Just before the program became public in 2005, senior officials in the Justice Department refused to reauthorize it.

In a bizarre real-world game of cloak-and-dagger, presidential aides, including future attorney general Alberto Gonzales, rushed to the hospital room of then-attorney general John Ashcroft, who was seriously ill, in hopes of getting him to overrule his staff. Justice Department officials got wind of the end run and managed to get to Ashcroft first. Ashcroft, who was barely able to speak from painkillers, sided with his staff.

Many top officials, including Ashcroft and FBI director Mueller, threatened to resign over the incident. President Bush agreed to stop bypassing the FISA procedure and seek a change in the law to allow the NSA more flexibility. Congress eventually granted his request.

The NSA’s machinations were both clumsy and dangerous. Still, I confess to having considerable sympathy for those trying to obtain actionable intelligence from online activity. Post-9/11 assessments revealed embarrassing holes in the technological capabilities of most intelligence agencies worldwide. (Admittedly, they also revealed repeated failures to act on intelligence that was already collected.) Initially at least, the public demanded tougher measures to avoid future attacks.

Keeping pace with international terror organizations while still following national laws, however, is increasingly difficult. For one thing, communications of all kinds are quickly migrating to the cheaper and more open architecture of the Internet. An unintended consequence of this change is that the nationalities of those involved in intercepted communications are increasingly difficult to determine.

E-mail addresses and instant-message IDs don’t tell you the citizenship or even the location of the sender or receiver. Even telephone numbers don’t necessarily reveal a physical location. Internet telephone services such as Skype give their customers U.S. phone numbers regardless of their actual location. Without knowing the nationality of a suspect, it is hard to know what rights she is entitled to.

The architecture of the Internet raises even more obstacles to effective surveillance. Traditional telephone calls take place over a dedicated circuit connecting the caller and the person being called, making wiretaps relatively easy to establish. Only the cooperation of the suspect’s local exchange is required.

The Internet, however, operates as a single global exchange. E-mails, voice, video, and data files—whatever is being sent is broken into small packets of data. Each packet follows its own path between connected computers, largely determined by data traffic patterns present at the time of the communication.

Data may travel around the world even if its destination is local, crossing dozens of national borders along the way. It is only on the receiving end that the packets are reassembled.
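The fragment-and-reassemble behavior described above can be sketched in a few lines of toy Python. This is purely an illustration of the idea, not real networking code; the `send` and `receive` helpers are invented for the example, and the shuffle stands in for packets arriving out of order after taking different routes:

```python
# Toy illustration of packet switching: a message is broken into small
# numbered packets, the packets arrive in unpredictable order (here,
# simulated with a shuffle), and the receiver reassembles the original
# message by sorting on sequence numbers.
import random

def send(message, packet_size=4):
    # Break the message into numbered (sequence, chunk) packets.
    packets = [(seq, message[i:i + packet_size])
               for seq, i in enumerate(range(0, len(message), packet_size))]
    # Each packet follows its own path, so arrival order is unpredictable.
    random.shuffle(packets)
    return packets

def receive(packets):
    # Reassembly: sort by sequence number and rejoin the chunks.
    return "".join(chunk for _, chunk in sorted(packets))

message = "Meet me at the usual place."
assert receive(send(message)) == message
```

The point of the sketch is the last line: no matter what order the packets arrive in, the receiver recovers the message intact, which is exactly what makes any single point along the route a poor place to listen in.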

This design, the genius of the Internet, improves network efficiency. It also provides a significant advantage to anyone trying to hide his activities. On the other hand, NSLs and warrantless wiretapping on the scale apparently conducted by the NSA move us frighteningly close to the “general warrant” American colonists rejected in the Fourth Amendment. They were right to revolt over the unchecked power of an executive to do what it wants, whether in the name of orderly government, tax collection, or antiterrorism.

In trying to protect its citizens against future terror attacks, the secret operations of the U.S. government abandoned core principles of the Constitution. Even with the best intentions, governments that operate in secrecy and without judicial oversight quickly descend into totalitarianism. Only the intervention of corporate whistle-blowers, conscientious government officials, courts, and a free press brought the United States back from the brink of a different kind of terrorism.

Internet businesses may be entirely supportive of government efforts to improve the technology of policing. A society governed by laws is efficient, and efficiency is good for business. At the same time, no one is immune from the pressures of anxious customers who worry that the information they provide will be quietly delivered to whichever regulator asks for it. Secret surveillance raises the level of customer paranoia, leading rational businesses to avoid countries whose practices are not transparent.

Partly in response to the NSA program, companies and network operators are increasingly routing information flow around U.S. networks, fearing that even transient communications might be subject to large-scale collection and mining operations by law enforcement agencies. But aside from using private networks and storing data offshore, routing transmissions to avoid some locations is as hard to do as forcing them through a particular network or node.

The real guarantor of privacy in our digital lives may not be the rule of law. The Fourth Amendment and its counterparts work in the physical world, after all, because tangible property cannot be searched and seized in secret. Information, however, can be intercepted and copied without anyone knowing it. You may never know when or by whom your privacy has been invaded. That is what makes electronic surveillance more dangerous than traditional investigations, as the Supreme Court realized as early as 1967.

In the uneasy balance between the right to privacy and the needs of law enforcement, the scales are increasingly held by the Law of Disruption. More devices, more users, more computing power: the sheer volume of information and the rapid evolution of how it can be exchanged have created an ocean of data. Much of it can be captured, deciphered, and analyzed only with great (that is, expensive) effort. Moore’s Law lowers the costs to communicate, raising the costs for governments interested in the content of those communications.

The kind of electronic surveillance performed by the Chinese government is outrageous in its scope, but only the clumsiness of its technical implementation exposed it. Even if governments want to know everything that happens in our digital lives, and even if the law allows them or is currently powerless to stop them, there isn’t enough technology at their disposal to do it, or at least to do it secretly.

So far.

In the upcoming issue of Harvard Business Review, my colleague Paul Nunes at Accenture's Institute for High Performance and I are publishing the first of many articles from an ongoing research project on what we are calling "Big Bang Disruption."

The project is looking at the emerging ecosystem for innovation based on disruptive technologies, following up on work we have done separately and now together over the last fifteen years.

Our chief finding is that the nature of innovation has changed dramatically, calling into question much of the conventional wisdom on business strategy and competition in information-intensive industries--which is to say, these days, every industry.

The drivers of this new ecosystem are ever-cheaper, faster, and smaller computing devices, cloud-based virtualization, crowdsourced financing, collaborative development and marketing, and the proliferation of mobile everything (including, increasingly, not just people but things).

The result is that new innovations now enter the market cheaper, better, and more customizable than the products and services they challenge.  (For example, smartphone-based navigation apps versus standalone GPS devices.)  In the strategy literature, such innovation would be characterized as thoroughly "undisciplined."  It shouldn't succeed.  But it does.

So when the disruptor arrives and takes off with a bang, often after a series of low-cost, failed experiments, incumbents have no time for a competitive response.  The old rules for dealing with disruptive technologies, most famously from the work of Harvard's Clayton Christensen, have become counter-productive.   If incumbents haven't learned to read the new tea leaves ahead of time, it's game over.

The HBR article doesn't go into much depth on the policy implications of this new innovation model, but the book we are now writing will.  The answer should be obvious.

This radical new model for product and service introduction underscores the robustness of market behaviors that quickly and efficiently correct many transient examples of dominance, especially in high-tech markets.

As a general rule (though obviously not one without exceptions), the big bang phenomenon further weakens the case for regulatory intervention.  Market dominance is sustainable for ever-shorter periods of time, with little opportunity for incumbents to exploit it.

A predictable next wave of technology will likely put a quick and definitive end to any "information empires" that have formed from the last generation of technologies.

Or, at the very least, do so more quickly and more cost-effectively than alternative solutions from regulation.  The law, to paraphrase Mark Twain, will still be putting its shoes on while the big bang disruptor has spread halfway around the world.

Unfortunately, much of the contemporary literature on competition policy from legal academics is woefully ignorant of even the conventional wisdom on strategy, not to mention the engineering realities of disruptive technologies already in the market.  Looking at markets solely through the lens of legal theory is, truly, an academic exercise, one with increasingly limited real-world applications.

Indeed, we can think of many examples where legacy regulation actually makes it harder for the incumbents to adapt as quickly as necessary in order to survive the explosive arrival of a big bang disruptor.  But that is a story for another day.

Much more to come.

Related links:

"Why Best Buy is Going out of Business...Gradually," Forbes.com.

"What Makes an Idea a Meme?", Forbes.com.

"The Five Most Disruptive Technologies at CES 2013," Forbes.com.

On Friday evening, I posted on CNET a detailed analysis of the most recent proposal to surface from the secretive upcoming World Conference on International Telecommunications, WCIT-12.  The conference will discuss updates to a 1988 UN treaty administered by the International Telecommunication Union, and throughout the year there have been reports that both governmental and non-governmental members of the ITU have been trying to use the rewrite to put the ITU squarely in the Internet business.

The Russian Federation’s proposal, which was submitted to the ITU on Nov. 13, would explicitly bring “IP-based Networks” under the auspices of the ITU, and would substantially, if not completely, change the role of ICANN in overseeing domain names and IP addresses.

According to the proposal, "Member States shall have the sovereign right to manage the Internet within their national territory, as well as to manage national Internet domain names."  And a second revision, also aimed straight at the heart of today's multi-stakeholder process, reads:  "Member States shall have equal rights in the international allocation of Internet addressing and identification resources."

Of course the Russian Federation, along with other repressive governments, uses every opportunity to gain control over the free flow of information, and sees the Internet as its most formidable enemy.  Earlier this year, Prime Minister Vladimir Putin told ITU Secretary-General Hamadoun Touré that Russia was keen on the idea of "establishing international control over the Internet using the monitoring and supervisory capability of the International Telecommunications Union."

As I point out in the CNET piece, the ITU’s claims that WCIT has nothing to do with Internet governance and that the agency itself has no stake in expanding its jurisdiction ring more hollow all the time.  Days after receiving the Russian proposal, the ITU wrote in a post on its blog that, "There have not been any proposals calling for a change from the bottom-up multistakeholder model of Internet governance to an ITU-controlled model.”

This would appear to be an outright lie, and also a contradiction of an earlier acknowledgment by Dr. Touré.  In a September interview, Touré told Bloomberg BNA that “Internet Governance as we know it today,” concerns only “Domain Names and addresses.  These are issues that we’re not talking about at all,” Touré said. “We’re not pushing that, we don’t need to.”

The BNA article continues:

Touré, expanding on his emailed remarks, told BNA that the proposals that appear to involve the ITU in internet numbering and addressing were preliminary and subject to change.

‘These are preliminary proposals,’ he said, ‘and I suspect that someone else will bring another counterproposal to this, we will analyze it and say yes, this is going beyond, and we'll stop it.’

Another tidbit from the BNA Interview that now seems ironic:

Touré disagreed with the suggestion that numerous proposals to add a new section 3.5 to the ITRs might have the effect of expanding the treaty to internet governance.

'That is telecommunication numbering,' he said, something that preceded the internet. Some people, Touré added, will hijack a country code and open a phone line for pornography. 'These are the types of things we are talking about, and they came before the internet.'

I haven't seen all of the proposals, of course, which are technically secret.  But the Russian proposal's most outrageous provisions are contained in a proposed new section 3A, titled "IP-based Networks."

There’s more on the ITU’s subterfuge in Friday’s CNET piece, as well as these earlier posts:

1.  "Why is the UN Trying to Take Over the Internet?" Forbes.com, Aug 9, 2012.

2.  "UN Agency Reassures:  We Just Want to Break the Internet, Not Take it Over," Forbes.com, Oct. 1, 2012.

On Friday, California Governor Jerry Brown signed SB 1161, which prohibits the state’s Public Utilities Commission from any new regulation of Voice over Internet Protocol or other IP-based services without the legislature’s authorization.

California now joins over twenty states that have enacted similar legislation.

The bill, which is only a few pages long, was introduced by State Senator Alex Padilla (D) in February.  It passed both houses of the California legislature with wide bipartisan majorities.

California lawmakers and the governor are to be praised for quickly enacting this sensible piece of legislation.

Whatever the cost-benefit of continued state regulation of traditional utilities such as water, power, and landline telephone services, it’s clear that the toolkit of state and local PUCs is a terrible fit for Internet services such as Skype, Google Voice or Apple’s FaceTime.

Historically, as I argued in a Forbes piece last month, the imposition of public utility status on a service provider has been an extreme response to an extreme situation—a monopoly provider, unlikely to have competition because of the high cost of building and operating competing infrastructure (so-called “natural monopoly”), offering a service that is indispensable to everyday life.

Service providers meeting that definition are transformed by PUC oversight into entities that are much closer to government agencies than private companies.  The PUC sets and modifies the utility’s pricing in excruciating detail.  PUC approval is required for each and every change or improvement to the utility’s asset base, or to add new services or retire obsolete offerings.

In exchange for offering service to all residents, utilities in turn are granted eminent domain and rights of way to lay and maintain pipes, wires and other infrastructure.

VoIP services may resemble traditional switched telephone networks, but they have none of the features of a traditional public utility.  Most do not even charge for basic service, nor do they rely on their own dedicated infrastructure.  Indeed, the reason VoIP is so much cheaper to offer than traditional telephony is that it can take advantage of the existing and ever-improving Internet as its delivery mechanism.

Because entry is cheap, VoIP providers have no monopoly, natural or otherwise.  In California, according to the FCC, residents have their choice of over 125 providers—more than enough competition to ensure market discipline.

Nor would residents be in any way helped by interposing a regulator to review and pre-approve each and every change to a VoIP provider’s service offerings.  Rather, the lightning-fast evolution of Internet services provides perhaps the worst mismatch possible for the deliberate and public processes of a local PUC.

Software developers don’t need eminent domain.

But the most serious mismatch between PUCs and VoIP providers is that there is little inherently local about VoIP offerings.  Where a case can be made for local oversight of public utilities operating extensive--even pervasive--local infrastructure, it’s hard to see what expertise a local PUC brings to the table in supervising a national or even international VoIP service.

On the other hand, it’s not hard to imagine the chaos and uncertainty VoIP providers and their customers would face if they had to satisfy fifty different state PUCs, not to mention municipal regulators and regulators in other countries.

In most cases that would mean dealing with regulators on a daily basis, on every minor aspect of a service offering.  In the typical PUC relationship, the regulator becomes the true customer and the residents mere “rate-payers” or even just “meters.”

Public utilities are not known for their constant innovation, and for good reason.

Whatever oversight VoIP providers require, local PUCs are clearly the wrong choice.  It’s no surprise, then, that SB 1161 was endorsed by major Silicon Valley trade groups, including TechNet, TechAmerica, and the Silicon Valley Leadership Group.

The law is a win for California residents and California businesses—both high-tech and otherwise.

Links

  1. “Government Control of Net is Always a Bad Idea,” CNET News.com, June 4, 2012.
  2. “Memo to Jerry Brown:  Sign SB 1161 for all Internet users,” CNET News.com, August 30, 2012.
  3. “The Madness of Regulating VoIP as a Public Utility,” Forbes.com, Sept. 10, 2012.
  4. “Brown Endorses Hands off Stance on Internet Calls,” The San Francisco Chronicle, Sept. 28, 2012.