Monthly Archives: October 2009

Net Neutrality Debate: The Mistake that Keeps on Giving


Again, a long post on Net Neutrality.  Again, my apologies.

The fallout continues from FCC Chairman Julius Genachowski’s call to initiate new rulemaking to implement Net Neutrality principles promised by candidate Obama during the campaign.

The bottom line:  what proponents wish with all their hearts was a simple matter of mom and apple pie (“play fair, work hard, and get ahead,” as Craigslist’s Craig Newmark explains it) is in fact a fight for leverage among powerful interests in the communications, software, and media industries.  Net neutrality, if nothing else, is turning out to be a complex technical problem—technical in both the engineering and regulatory sense.

As I write in Law Four of The Laws of Disruption, there’s nothing neutral about the rules under which Internet provisioning is regulated today, with broadband offered by phone companies subject to one set of rules (“common carrier”) and access offered by everyone else subject to, for the most part, no rules at all.  (Wireless Internet providers, who have far less bandwidth to offer, greatly restrict user behavior, but Genachowski indicated they too would be brought under the neutrality principles he outlined.)

There’s also nothing rational about the current rules.  That’s becoming abundantly clear as neutrality proponents start to back away from the firestorm they helped fund and as the messy details of current network management practices become clearer.  As Vishesh Kumar and Christopher Rhoads of The Wall Street Journal noted in late 2008, Microsoft, Yahoo and perhaps Amazon have quietly backed away from their initial enthusiasm for more FCC oversight of Internet access and traffic management.  Microsoft’s official position:  “Network neutrality is a policy avenue the company is no longer pursuing.”

Nor should they.  Even as the regulatory process grinds on at its naturally slow pace, Moore’s Law continues to change the technological landscape with breathtaking speed.  Which is a good thing.  Despite all the think-tank and lobbyist hand-wringing, every aspect of digital life has improved dramatically in the last decade—access options, connection speeds, applications, content, devices, you name it.  It’s possible that in the future all of this could come to a grinding halt because of uncompetitive and ultimately irrational behavior by a few market dominators.  But why legislate ahead of a problem in the area most certain to change dramatically regardless of regulation?

Here are just a few of the most recent developments:

  • Google complains to the FCC about Apple’s rejection of Google Voice from the iPhone (The full letter, originally redacted, is now available here)
  • AT&T complains to the FCC about Google Voice’s refusal to connect certain calls (a luxury that common carriers don’t have)
  • Seventy-Two Democrats urge the FCC to tread carefully into Net Neutrality, encouraging Genachowski to “avoid tentative conclusions which favor government regulation.”
  • Wireless network providers object to being included under the Chairman’s proposed six rules for Net Neutrality (“The principles I’ve been speaking about apply to the Internet however accessed, and I will ask my fellow Commissioners to join me in confirming this.”)

AT&T’s complaint about Google Voice is informative.  On October 8th, the FCC announced it was investigating Google Voice’s treatment of calls to certain rural areas, where under FCC rules common carriers are required to pay higher connection fees to complete calls from their subscribers.  These fees are intended to help offset the extra costs rural phone companies must otherwise absorb in order to serve a dispersed customer base.  Unfortunately, as everyone knows, some local companies have abused that rule by hosting a variety of non-local services, including free conference call services and sex chat lines, and then splitting the profits with the service providers.

(The technical implementation of Google Voice is largely confidential.  The application, among other features, provides its users a single phone number, routes incoming calls to any phone device they have, and places outbound calls managed by Google (through its wholesale partner Bandwidth.com) for free or a small charge.  Free, that is, in the sense of being supported by ads.)
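To make the moving parts concrete, here is a minimal sketch of single-number call routing in the style described above.  Since the actual implementation is confidential, every name and rule here is a hypothetical illustration, not Google’s code:

```python
# Toy sketch of single-number call routing, in the spirit of the
# (confidential) Google Voice design described above.  All names,
# numbers, and rules here are hypothetical illustrations.

REGISTERED_DEVICES = {
    "+15551234567": ["home_landline", "office_desk", "mobile"],
}

def route_inbound(user_number: str, caller_id: str) -> list[str]:
    """Fan an incoming call out to every device the user has registered."""
    devices = REGISTERED_DEVICES.get(user_number, [])
    print(f"Ringing {devices} for call from {caller_id}")
    return devices

def place_outbound(user_number: str, destination: str) -> str:
    """Hand the outbound leg to a wholesale carrier (e.g. Bandwidth.com)."""
    # The application originates the call; the wholesale partner terminates it.
    return f"carrier://wholesale/{user_number}->{destination}"
```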

The response from Google?  Google Voice, as the company acknowledges, “restricts certain outbound calls from our Web platform to these high-priced destinations.”  But Google Voice is a “Web-based software application,” not a “broadband carrier,” and so is not subject to common carrier rules or existing Net Neutrality principles.  “We agree with AT&T,” Google says, “that the current carrier compensation system is badly flawed, and that the single best answer is for the FCC to take the necessary steps to fix it.”

Not surprisingly, AT&T argues that Google is violating both common carrier and Net Neutrality principles.  AT&T reports that its tests of Google Voice indicate the service blocks calls to ALL numbers in the rural exchanges, not just those for sex chat lines and teleconferencing services.  (Note in the quote above that Google says only that it restricts calls to “high-priced destinations,” leaving it unclear whether by “destination” it means the over-priced services or the actual area codes.)
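The ambiguity matters.  Here is a hypothetical sketch of the two readings of “high-priced destinations”; the prefixes and numbers are invented for illustration:

```python
# Hypothetical illustration of the two readings of "high-priced
# destinations": blocking specific service numbers versus blocking
# every number in a rural exchange prefix.  All values are made up.

HIGH_FEE_EXCHANGES = {"712338", "605475"}    # entire rural exchanges (NPA-NXX)
KNOWN_SERVICE_NUMBERS = {"7123380001"}       # e.g., a free conference line

def blocked_narrow(number: str) -> bool:
    """Restrict only identified conference/chat service numbers."""
    return number in KNOWN_SERVICE_NUMBERS

def blocked_broad(number: str) -> bool:
    """Restrict every number in a high-fee exchange (AT&T's claim)."""
    return number[:6] in HIGH_FEE_EXCHANGES
```

The narrow reading blocks only identified service numbers; the broad reading, which matches what AT&T says its tests found, blocks every number sharing a high-fee exchange prefix.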

The argument between the two companies breaks down to two simple but largely irresolvable questions:  (1) do Internet phone applications that look like traditional phone services, but which rely on customer-leased connections to initiate and terminate calls, need to abide by common carrier rules? (2) does non-Neutral behavior by an application mimicking many of the core functions of a broadband provider violate Net Neutrality, or do the principles (and those proposed by Genachowski) apply only to providers of last mile service?

Regardless of the answers the FCC reaches, here’s the point:  common carrier rules cannot be untangled from the Net Neutrality debate.  Personally, I believe consumers would be better off without either, a position neither company has publicly taken.

In a seemingly unrelated story, Wired’s Ryan Singel reports that Google appears to pay nothing to broadband carriers for its Internet connections.  This despite the fact that Google, in significant part because of its ownership of YouTube, may now account for as much as 10% of all Internet traffic.  That’s because during the great telecom meltdown that followed the dot-com boom, the company wisely snapped up a great deal of unused new fiber optic capacity on the cheap.  Google is now trading (the technical term is “peering”) that capacity with broadband providers in exchange for the company’s own connection.

The story has some interesting quotes from Arbor Network’s chief scientist, Craig Labovitz.  “[T]he real money is in the ads and the services in the packets, not in moving bits from computer to computer,” he told Wired.  Then this:  “Who pays whom is changing.  All sorts of negotiations are happening behind closed doors.”  Most of the net’s architecture, as Singel notes, “remains a secret cloaked in nondisclosure agreements.”

Don’t get me wrong.  I think Google is a great company that has introduced a tremendous range of innovative products and services to consumers, nearly all of which are paid for by an advertising model (which increasingly raises the ire of privacy advocates, but that’s another story).  Consumers, as I said before, have benefited from the technical and business decisions of the companies now publicly airing their dirty laundry in the Net Neutrality fight.  We get more stuff all the time, we get it faster and, for the most part, the cost is either holding steady or declining.

But irony, as Bart Simpson once said, is delicious.  The peering arrangements almost certainly mean that Google traffic is getting priority.  Not necessarily transit priority—that is, special privilege through the network.  But Google does get what Internet Research Group’s Peter Christy calls “ingress priority,” that is, priority in how its traffic gets into the provider’s network.  As Christy says, “If you go through some kind of general exchange then it is sort of a free for all and if traffic is heavy there may well be congestion and packet loss at this point.  With specific private peering you can assure that your traffic will get into the network unimpeded.”

It may not be a “fast lane,” in other words.  But it is a dedicated on-ramp.
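For readers who want the intuition in code, here is a toy model of my own (not based on any provider’s actual architecture) contrasting ingress through a congested public exchange with ingress through a dedicated private peering port:

```python
# Toy model, purely illustrative, of "ingress priority": traffic
# entering through a congested public exchange suffers random packet
# loss, while a dedicated private peering port admits the same
# traffic unimpeded up to its capacity.
import random

def through_public_exchange(packets: int, port_capacity: int,
                            other_traffic: int) -> int:
    """Everyone shares one port; overflow is dropped at random."""
    offered = packets + other_traffic
    if offered <= port_capacity:
        return packets
    # Each packet survives with probability capacity/offered.
    return sum(random.random() < port_capacity / offered
               for _ in range(packets))

def through_private_peering(packets: int, port_capacity: int) -> int:
    """A dedicated on-ramp: only your traffic uses the port."""
    return min(packets, port_capacity)

print(through_public_exchange(1000, 1200, 900))  # lossy under load
print(through_private_peering(1000, 1200))       # all 1000 get in
```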

So, does ingress priority through peering arrangements violate Net Neutrality?

Consider this explanation for why Neutrality is imperative:  “Some major broadband service providers have threatened to act as gatekeepers, playing favorites with particular applications or content providers, demonstrating that this threat is all too real.”

Guess who?  That’s right—it’s from Google’s own policy blog from 2008. The post goes on: “It’s no stretch to say that such discriminatory practices could have prevented Google from getting off the ground — and they could prevent the next Google from ever coming to be.”

Well, I think that’s an awfully big stretch—now, and in 2008.  Nonetheless, if the company continues to beat the drum for completely open gates, it will find itself increasingly hard-pressed to justify peering arrangements, content restrictions on use of its applications, and other deals aimed at improving performance for Google applications.  “Such discriminatory practices” could just as easily prevent new services—competitors to Google—from “getting off the ground.”  AT&T’s complaint that Google is straddling both sides of the fence sounds increasingly accurate, regardless of AT&T’s motives for making it.

(At the end of 2008, recall, the company was similarly forced to beat a rhetorical retreat when it was revealed that it had been negotiating peering arrangements for edge-caching devices—that is, for co-locating Google servers with broadband provider equipment to ensure faster access to Google content when consumers called for it.  What seemed, again, a contradiction of Net Neutrality principles was weakly explained as a “non-exclusive” arrangement that any content provider could also negotiate.  Any content provider with money to spend on caching servers and unused fiber optic cables, that is.)

It’s just going to get worse.  The FCC is no more capable of navigating these murky waters than it is of deciding whether an errant nipple on a live broadcast violates its broadcast decency rules.  (An appellate court recently threw out the Janet Jackson fines.)  The Commission is quite simply the worst possible arbiter of these complex business and technical problems.

So here’s an open invitation to Google, AT&T, Apple, and everyone else in the Net Neutrality slugfest.  Let’s call the whole thing off, before someone—that is, consumers—really gets hurt.

The Nobel Prize in Disruption

Though most of the coverage of this year’s Nobel Prize in Economics focused on the work of Elinor Ostrom, I’m more interested in the award to Oliver Williamson, Prof. Emeritus at the Haas School of Business at UC-Berkeley.

Williamson is a leading scholar in the field of “Institutional Economics,” which studies the relative economic behaviors of organizations in the market. The field traces its origins to the pioneering work of a previous Nobel winner, Ronald Coase, who first observed that the existence of large, complex corporations suggested inefficiencies in the market that companies, as an alternative to market transactions, could overcome, or at least reduce. Rather than negotiating with every person involved in the production of every car, for example, GM could internalize labor, production, sourcing and so on and achieve economies of scale.
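Coase’s insight can be captured in back-of-the-envelope arithmetic. The sketch below uses invented numbers purely for illustration: a firm internalizes an activity when doing it in-house costs less than the market price plus the costs of searching, negotiating, and enforcing a contract.

```python
# Back-of-the-envelope illustration (all numbers invented) of Coase's
# point: a firm internalizes an activity when internal costs fall
# below the full cost of contracting for it in the market.

def cheaper_to_internalize(market_price: float,
                           transaction_cost: float,
                           internal_cost: float) -> bool:
    """Compare total market cost (price plus search/negotiation/
    enforcement costs) against the cost of doing the work in-house."""
    return internal_cost < market_price + transaction_cost

# Buying a part costs $100 plus $30 of negotiation and enforcement;
# making it in-house costs $115 -- so the firm grows by one activity.
print(cheaper_to_internalize(100.0, 30.0, 115.0))  # True
```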

I’ve written about Coase and the importance of his work in all of my books. I’ve had the distinct pleasure of knowing him since my days as a law student at the University of Chicago, where Coase was on the faculty of the law school. My view, still controversial in some quarters, is that information technology has been reducing transaction costs in the market faster than it does in organizations, resulting in a shift in the balance of power between the two.

In his seminal book, “Markets and Hierarchies,” Williamson enumerated different kinds of transaction costs and their impact on economic events. In particular, he described how differences in information sources between two or more parties to a transaction (a buyer and a seller, for example) affect the structure and behavior of the transaction in important ways.

Coase, Williamson and others grew frustrated by the unwillingness of their economist colleagues to study the nature and causes of transaction costs, and in the mid-1990s formed their own field, organized under the International Society for New Institutional Economics. Both Williamson and Ostrom are members of ISNIE.

Congratulations to both Profs. Ostrom and Williamson. Perhaps the award of the Nobel Prize will encourage more study of the nature and causes of transaction costs, and of the ways in which disruptive technologies affect both.

LoD Reviewed in the Wall Street Journal


Today’s Wall Street Journal has a long and thoughtful review of The Laws of Disruption by Jeremy Philips, Executive Vice President of News Corp.  Here is the link. Mr. Philips concludes, “Mr. Downes may well overstate the case when he says that our ‘industrial-age legal system’ will not survive, but there is no doubt that a lot more disruption lies ahead.”

The Year of Thinking Legally

The idea of “The Laws of Disruption” came to me when I noticed how news stories about information technology were increasingly stories about the interference of law and regulation with information technology.

A nice example from yesterday’s Wall Street Journal is Andrew LaVallee’s story, “For Tech Sector, It’s an Antitrust Year.”

Leading technology companies are faced with life-or-death decisions on products, services, operations and even their very existence based on the arcane rules of legal systems forged in the Industrial Revolution.

At the very least, doesn’t this suggest the need for better integration of the legal department with the rest of the executive team? Today, general counsel is the last great bastion of disconnection in most organizations.

Ten years ago, when the Information Revolution reached its tipping point, CIOs learned how to work directly on strategy and operations with their fellow executives. It was painful for everyone, but entirely necessary.

Now it’s time for the lawyers…

Not Again! IBM back in Antitrust Crosshairs


Sources cited by The New York Times indicate the U.S. Justice Department has once again opened an antitrust investigation against IBM.

Remember IBM?

The new investigation concerns allegations that the company has refused to license mainframe software products to third parties.  A refusal to license isn’t necessarily an illegal form of competition, but it may be when coupled with other anticompetitive practices.

IBM has come under the gun from competition regulators throughout its history.

Ironically, the case that it won did the most damage.  In 1982, the government dropped an investigation that had started in 1969.  But by then IBM had already made significant and possibly life-altering modifications to its operations.

By the late 1970s, for example, IBM had divided itself into three main divisions:  one for mainframes, one for minicomputers, and one for copiers and typewriters.  The groups were kept apart in significant ways.  Indeed, in the event of a breakup of the company (similar to what did occur at AT&T during the same period), the divisions prepared to compete against each other.

The minicomputer division, for example, fully developed a visionary design known as Future Systems that was considered too radical for the mainframe group.  The FS architecture, realized in the IBM System/38 (1981), included virtual memory, built-in relational database management, scalability, security attached at the object level and other features still not fully realized in many computing environments.  FS was intended to compete directly with IBM mainframes in the event of a break-up.

When the government dropped its case, the company was badly splintered.   It found itself unable to move quickly as rapid technological improvements signaled a new computing revolution moving from large businesses to consumers.  The mainframe division ensured that the System/38 remained artificially undersized, for example, depriving the company of the full potential value of FS.

Another unintended consequence was even more disastrous.  When the IBM PC was introduced in 1981, it was marketed by a group that was part of neither the mainframe nor the minicomputer division.  The PC initially had no operating system of IBM’s own design, but instead ran Microsoft’s DOS (and others).  The computing divisions had no expectation of integrating PCs into networks of IBM mainframe and minicomputer installations, and no design features were included to accommodate that integration.

So, as PCs became more powerful and users demanded more processing capability on the desktop, IBM floundered badly, barely surviving the shift from mainframe and dumb-terminal configurations to client-server computing.  IBM’s late entry into the PC operating system market, OS/2, failed miserably.  What should have been a tremendous technological advantage for the company was undone by its focus on litigation.  Even though it won its case, IBM nearly lost everything in trying to appease a government that couldn’t have understood the revolution in computing just over the horizon.

It is too soon to say what consequences, if any, will follow from the Justice Department’s new investigation.  But regulators and even IBM competitors would be wise to review the history in some detail.

The bottom line:  In conflicts between Moore’s Law and Antitrust Law, Moore’s Law always wins in the end.

But a lot of wasted effort and unnecessary carnage usually happens in the interim.

The PATRIOT Act: Last Refuge of Scoundrels

“Patriotism,” as Samuel Johnson famously said, “is the last refuge of a scoundrel.”  In that sense, perhaps the USA PATRIOT Act is appropriately named after all.

In the immediate aftermath of 9/11, most people (though not everyone) agreed that the government should be given additional investigative powers to reduce the risk of more terrorist attacks.  The fact that perfectly good intelligence was already available and ignored before 9/11 was considered water under the bridge.  The attacks signaled a new era in national defense.

Electronic communications bore the brunt of government complaints that the enemy had outpaced the government in an information arms race, and not surprisingly some of the most contentious features of the PATRIOT Act involved provisions to expand government powers of surveillance, information collection, and secrecy:

  • The use of wiretaps and other electronic collection methods was largely stripped of judicial oversight, especially with regard to foreign surveillance.
  • The range of information that could be collected without probable cause (including phone, financial and other records) was expanded.
  • The rampant misuse of National Security Letters ensured that the targets of information demands (including banks and communications and Internet providers) would be gagged from revealing just how extensively the government was using its new powers.

As I say in Law Three of The Laws of Disruption (“Social Contracts in Digital Life”), however, the PATRIOT Act’s expansion of surveillance powers didn’t just spring out of the national trauma of 9/11.  In fact, it was just a new act in a long-playing drama between investigators and civil rights activists.

Since at least the invention of the telephone, state and federal law enforcement agencies have complained about the unintended consequences of information technology’s accelerating pace of disruption.  Lawmakers and courts have struggled to balance the free flow of information, both an economic and a personal imperative, against the ability of government to protect its citizens from criminal activities.

The PATRIOT Act gave investigators their golden opportunity to leapfrog the competition.  Nearly every wish on the FBI’s Christmas List was granted, including powers that had been wisely refused for decades.  Both the First and Fourth Amendments have been severely battered, as some courts and even the FBI have acknowledged.  But no one wants to give up their presents.

In 2005, when some of the more dubious provisions came up for renewal, the Bush Administration lobbied hard for and won an unmodified PATRIOT Act.  As the Cato Institute’s Julian Sanchez points out, the Obama Administration and key members of Congress, including leading Democrats, are now the ones singing the PATRIOT Act’s praises.  Hopes of real reform in this latest renewal process are fading fast.

To me the most dangerous aspect of anti-terror laws passed in the last decade has been the secrecy with which governments can now operate.  Meaningful judicial review of search and seizure has been cut out, gag orders have been abused, and Congress regularly tells us that if only we knew what they knew from secret briefings we’d understand why all of this other secrecy is so important.

Governments working in secret are working against the Law of Disruption.  Information wants to get out, and sooner or later it does.  That’s when we see the wisdom of the Founding Fathers in building in checks and balances to reduce the risk of overreaching and, ultimately, tyranny.

Crippling those limits, as we have done over the last decade, may or may not have made us much safer.  It has certainly made us less free.

Whether the costs outweigh the benefits is hard to say when both sets of data are being suppressed.