On CNET today, I have a longish post on the FCC's continued machinations over LightSquared's and Dish Network's respective efforts to use existing satellite spectrum to build terrestrial mobile broadband networks. Both companies plan to build 4G LTE networks; LightSquared has already spent $4 billion building out its network, which it plans to offer wholesale.

After first granting and then, a year later, revoking LightSquared's waiver to repurpose its satellite spectrum, the agency has taken a more conservative (albeit slower) course with Dish. Yesterday, the agency initiated a Notice of Proposed Rulemaking that would, if adopted, assign flexible use rights to about 40 MHz of MSS spectrum licensed to Dish.

Current allocations of spectrum have little to do with the technical characteristics of different bands. That existing licenses limit Dish and LightSquared to satellite applications, for example, is simply an artifact of more-or-less random carve-outs to the absurdly complicated spectrum map managed by the agency since 1934. Advances in technology make it possible to use many different bands successfully for many different purposes.

But the legacy of the FCC's command-and-control model for allocations to favor "new" services (new, that is, until they are made obsolete in later years or decades) and shape competition to its changing whims is a confusing and unnecessary pile-up of limitations and conditions that severely and artificially limit the ways in which spectrum can be redeployed as technology and consumer demands change. Today, the FCC sits squarely in the middle of each of over 50,000 licenses, a huge bottleneck that is making the imminent spectrum crisis in mobile broadband even worse.

Even with the best of intentions, the agency can't possibly continue to micromanage the map. And, as the LightSquared and Dish stories demonstrate yet again, the risks of agency capture and political pressure often mean the agency doesn't do the right thing even when it does act.

Who would be the more efficient and neutral regulator? According to Nobel Prize-winning economist Ronald Coase's seminal 1959 article, "The Federal Communications Commission," the answer is the market. In his trademark straightforward, common-sense style, Coase elegantly dismantles the idea that scarce spectrum resources demand a non-market solution of government management.

For one thing, Coase demonstrates how screwed up the system already was over fifty years ago. There's little doubt that the problems he describes have only gotten worse with time and increased demand on the airwaves by insatiable consumers.

Instead, Coase proposed to treat spectrum like any other industry input--as property. The FCC, he said, should auction spectrum rights to the highest bidder, without licenses, conditions, or limitations on use, and then stand back. (He acknowledged the risk of antitrust problems, but, as in any industry, such problems could be addressed by antitrust regulators and not the FCC.) Spectrum rights would efficiently change hands when new applications and devices created higher-value uses.

Potential interference problems--such as those raised by GPS device manufacturers in the case of LightSquared--would be resolved precisely as they are in other property contexts. Without an FCC to run to, the parties would be forced to negotiate against a backdrop of established liability rules and a safety net of potential litigation. Indeed, LightSquared and GPS offer a classic example of Coase's later work demonstrating that regardless of how property is initially allocated, liability rules ensure that parties will bargain to the most socially-efficient solution to interference.

Of course we'll never know if the socially-optimal solution here is for LightSquared to protect GPS devices from receiving its signal or for device manufacturers to change their designs to stay out of LightSquared's bands. The heavy hand of the regulator has foreclosed a market solution, or even an attempt at negotiations.

Instead, we have the disaster of the FCC's decision in January 2011 to grant a conditional waiver to LightSquared and then, last month, to revoke it indefinitely. Meanwhile, LightSquared spent $4 billion on infrastructure it may never use, and lost its CEO and key customers, including Sprint. No one is happy, and no one can reasonably argue this was an optimal outcome, or even close to one.

For Dish, the NPRM will ensure a more orderly process, but at the cost of a delay of months or perhaps longer before Dish can begin building its terrestrial network. And in the interim, all sorts of irrelevant issues may interfere with the orderly (and expensive) resolution.

When Coase proposed a property model for spectrum in 1959, the idea was considered too radical. Congress and the FCC have, slowly but surely, taken pieces of the proposal to heart, introducing auctions (but not property rights) in the 1990s. Yesterday's NPRM takes a small step toward more flexible use licenses, but this may be too little reform too late. We have all the evidence we need that micromanagement of spectrum can't possibly keep up with the pace of innovation. Time to try a new, fifty-year-old approach.

Ceci c'est un meme.

On Forbes today, I look at the phenomenon of memes in the legal and economic context, using my now notorious "Best Buy" post as an example. Along the way, I talk antitrust, copyright, trademark, network effects, Robert Metcalfe and Ronald Coase.

It’s now been a month and a half since I wrote that electronics retailer Best Buy was going out of business…gradually.  The post, a preview of an article and future book that I’ve been researching on-and-off for the last year, continues to have a life of its own.

Commentary about the post has appeared in online and offline publications, including The Financial Times, The Wall Street Journal, The New York Times, TechCrunch, Slashdot, MetaFilter, Reddit, The Huffington Post, The Motley Fool, and CNN. Some of these articles generated hundreds of user comments, in addition to those that appeared here at Forbes.

(I was also interviewed by a variety of news sources, including TechCrunch’s Andrew Keen.)

Today, the original post hit another milestone, passing 2.9 million page views.

Watching the article move through the Internet, I’ve gotten a first-hand lesson in how network effects can generate real value.

Network effects describe an economic principle under which certain goods and services experience increasing returns to scale.  That means the more users a particular product or service has, the more valuable the product becomes and the more rapidly its overall value increases.  A barrel of oil, like many commodity goods, does not experience network effects – only one person can own it at a time, and once it’s been burned, it’s gone.

In sharp contrast, networked goods increase in value as they are consumed.  Indeed, the more they are used, the faster the value grows--generating a kind of momentum or gravitational pull.  As Robert Metcalfe, founder of 3Com and co-inventor of Ethernet, explained it, the value of a network can be plotted as the square of the number of connected users or devices—a curve that rises ever more steeply until most everything that can be connected already is.  George Gilder called that formula “Metcalfe’s Law.”
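Metcalfe's formulation is easy to sketch in code.  Here is a minimal illustration in Python (the user counts are invented; only the n-squared relationship comes from the law itself):

```python
def metcalfe_value(users: int) -> int:
    """Metcalfe's Law: a network's value grows as the square of its
    connected users or devices (roughly, the number of possible
    pairwise connections)."""
    return users * users

# Doubling the user base quadruples the modeled value -- the
# "gravitational pull" of a network ramping up the curve.
assert metcalfe_value(200) == 4 * metcalfe_value(100)

for n in (10, 100, 1000):
    print(f"{n:>5} users -> value {metcalfe_value(n):>9,}")
```

A commodity good like the barrel of oil above would plot as a straight line (value proportional to units sold); the squared term is what gives networked goods their momentum.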

Since information can be used simultaneously by everyone and never gets used up, nearly all information products can be the beneficiaries of network effects.  Standards are the obvious example.  TCP/IP, the basic protocol that governs interactions between computers connected to the Internet, started out humbly as an information exchange standard for government and research university users.  But in part because it was non-proprietary and therefore free for anyone to use without permission or licensing fees, it spread from public to private sector users, slowly at first but over time at accelerating rates.

Gradually, then suddenly, TCP/IP became, in effect, a least common denominator standard by which otherwise incompatible systems could share information.  As momentum grew, TCP/IP and related protocols overtook and replaced better-marketed and more robust standards, including IBM’s SNA and DEC’s DECnet.  These proprietary standards, artificially limited to the devices of a particular manufacturer, couldn't spread as quickly or as smoothly as TCP/IP.

From computing applications, Internet standards spread even faster, taking over switched telephone networks (Voice over IP), television (over-the-top services such as YouTube and Hulu), radio (Pandora, Spotify)—you name it.

Today the TCP/IP family of protocols, still free-of-charge, is the de facto global standard for information exchange, the lynchpin of the Internet revolution.  The standards continue to improve, thanks to the largely-voluntary efforts of The Internet Society and its virtual engineering task forces.  They're the best example I know of network effects in action, and they've created both a platform and a blueprint for other networked goods that make use of the standards.

Beyond standards, network effects are natural features of other information products including software.  Since the marginal cost of a copy is low (essentially free in the post-media days of Web-based distribution and cloud services), establishing market share can happen at relatively low cost.  Once a piece of software—Microsoft Windows, AOL instant messenger in the old days, Facebook and Twitter more recently—starts ramping up the curve, it gains considerable momentum, which may be all it takes to beat out a rival or displace an older leader.  At saturation, a software product becomes, in essence, the standard.

From a legal standpoint, unfortunately, market saturation begins to resemble an illegal monopoly, especially when viewed through the lens of industrial age ideas about markets and competition.  (That, of course, is the lens that even 21st century regulators still use.)  But what legal academics, notably Columbia’s Tim Wu, misunderstand about this phenomenon is that such products enjoy a relatively short cycle of dominance.  These "information empires," as Wu calls them, are short-lived, but not, as Wu argues, because regulators cut them down.

Even without government intervention, information products are replaced at accelerating speeds by new disruptors relying on new (or greatly improved) technologies, themselves the beneficiaries of network effects.  The actual need for legal intervention is rare.  Panicked interference with the natural cycle, on the other hand, results in unintended consequences that damage emerging markets rather than correcting them.  Distracted by lingering antitrust battles at home and abroad, Microsoft lost momentum in the last decade.  No consumer benefited from that "remedy."

For more, see "What Makes an Idea a Meme?" on Forbes.


After three years of politicking, it now looks like Congress may actually give the FCC authority to conduct incentive auctions for mobile spectrum, and soon.  That, at least, is what the FCC seems to think.

At CES last week, FCC Chairman Julius Genachowski largely repeated the speech he has now given three years in a row.  But there was a subtle twist this time, one echoed by comments from Wireless Bureau Chief Rick Kaplan at a separate panel.

Instead of simply warning of a spectrum crunch and touting the benefits of the incentive auction idea, the Chairman took aim at a House Republican bill that would authorize the auctions but limit the agency’s “flexibility” in designing and conducting them. “My message on incentive auctions today is simple,” he said, “we need to get it done now, and we need to get it done right.”

By "done right," Genachowski means without meaningful limits on how the agency constructs or oversees the auctions.  The Chairman's attitude now seems to be if the FCC can't have complete freedom, it would rather not have incentive auctions at all.  That's a strange stance given the energy the FCC has expended making the case that such auctions are critical for mobile broadband users.

What's the fight about?  The House bill would prohibit the agency from introducing bidder qualifications based on external factors, such as current spectrum holdings.  The FCC could not, in other words, directly or indirectly exclude carriers who already have significant spectrum licenses.  The agency would also be limited in its ability to attach special conditions to new licenses issued as part of particular auctions.  An amendment by Rep. Marsha Blackburn (R-Tenn.) that was approved last month would specifically forbid special net neutrality conditions.

This may sound like an inside-the-beltway spat, but the stakes are in fact quite high, going right to the core of what role the FCC should play in 21st century communications.  For the Chairman, these limits rise to the level of an existential crisis, casting doubt on the agency’s very nature as an expert regulator.  Congress should, he argued, authorize the auctions and let the agency’s staff of legal, economic and technical experts decide how best to organize them.  Tying the FCC’s hands by statute, he said, is “a mistake”:

because it preempts an expert agency process that’s fact-based, data-driven and informed by a broad range of economists, technologists and stakeholders on an open record. The proposals on the table to restrict the FCC’s flexibility in its area of technical expertise would be a significant departure from precedent.

Spectrum- and auction-related issues pose hard questions.  I believe they should be answered based on the evidence, on an open record, as close as possible to the time when they need to be made.

House leaders see it very differently.  They see an agency that badly bungled the recent 700 MHz auction—the last major auction the FCC has conducted.  As a pre-condition to bidding, for example, Google demanded “open access” conditions, which the FCC belatedly agreed to add.  Instead of answering “hard” questions based on “facts” and “data” in an open record, the agency simply gave in to pressure from a late and well-connected bidder.

There was no expertise applied here.  And the result, as I’ve noted elsewhere, was that bids for the C block (where the open access conditions were applied) were discounted to the tune of billions of dollars that would otherwise have gone to the Treasury.

Verizon won the auction, but now faces uncertain application of the conditions, which differ materially from the open Internet rules the agency passed last year in the net neutrality rulemaking.  Meanwhile, the mobile marketplace is a very different place than it was when Google first stepped in, dominated by device and operating system providers and proprietary app stores that didn't even exist in 2008.

Larger bidders, meanwhile, wary of the vaguely-defined new conditions, shifted to the A and B blocks, pushing out smaller carriers.  That was precisely the opposite of the result the agency intended when it designed the auctions in the first place.

Politically-driven choices on how the D block should be licensed for public safety turned out even worse.  That auction could not find a bidder willing to live with the FCC’s conditions.  The spectrum sits unused, even as public safety still has no interoperable network more than a decade after 9/11.

If that’s what an “expert” agency does with its “flexibility,” then it’s no wonder House leaders are skeptical.  “Flexibility” should mean maximizing revenues and ensuring that limited and critical spectrum assets are licensed to those who can put them to the best and highest use.  Not trying to stack the deck in favor of some bidders--and still getting it wrong.

Nothing has changed.  The agency still seems determined to use its auction authority to shape mobile broadband competition in its own sclerotic image.  It wants to create a competitive market among carriers even as competition is increasingly driven by other players in the mobile ecosystem.  It wants a return to the failed practice of unbundling to create an abundance of phantom competitors who have no assets and no understanding of communications, created by financial engineers who recognize a good regulatory arbitrage when they see one.

Not so, says the Chairman.  Our view of the market is deeply analytical, the result of thorough technical and economic analysis conducted by the bureaus.  His evidence?  The agency’s annual competition reports.  Or so he told CEA President Gary Shapiro following his speech, when asked for proof that the agency understands the markets with which it tinkers.

But the competition reports are hardly models of lucid analysis.  They are constrained by the bureaus’ crabbed view of the market, a view required by the statutory requirements that generate the reports.  They continue to emphasize obsolete proxies for measuring competition, including HHIs and the spectrum screen, even as actual data on market conditions is relegated to the back of the report.  For the last two years, the mobile competition report pointedly refused to say whether the agency thought the market was competitive or not.
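The HHI the reports lean on is simple arithmetic: the sum of the squared market shares of every firm in the market.  A minimal sketch in Python (the shares below are invented for illustration; the 2,500-point threshold is the one used in the DOJ/FTC Horizontal Merger Guidelines):

```python
def hhi(shares: list[float]) -> float:
    """Herfindahl-Hirschman Index: the sum of squared market shares,
    with shares expressed in percentage points.  Ranges from near 0
    (an atomistic market) to 10,000 (a pure monopoly)."""
    assert abs(sum(shares) - 100.0) < 1e-6, "shares should total 100%"
    return sum(s * s for s in shares)

# Four equal carriers: 4 * 25^2 = 2,500, the level above which the
# merger guidelines deem a market "highly concentrated."
print(hhi([25.0, 25.0, 25.0, 25.0]))  # 2500.0
```

The critique here is not that the arithmetic is hard; it's that a single concentration number is a poor proxy for how competitive a market actually is.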

Yet the agency deliberately forfeited even the limited value of the competition reports by rejecting out-of-hand the AT&T/T-Mobile USA deal.  Rather than focusing on declining prices for voice, text, and data over the last ten years, or the regulatory constraints that make mergers necessary to expand coverage and service (both amply documented in the reports), the staff report on the T-Mobile deal largely swallowed the simplistic mantra of opponents of the deal that taking out one “national” carrier was per se anti-competitive.  The report’s principal objection seemed to be that any horizontal merger of two companies would result in one fewer competitor.  True, but irrelevant.

There was no sign of expert regulator at work here; nothing to suggest an analysis that was “fact-based, data-driven and informed by a broad range of economists, technologists and stakeholders.”  The analysis started with a conclusion and worked backwards.  And when even the old formulas didn’t come out right, at least in the case of the spectrum screen, the books were simply cooked until they did.

Well, that's all water under the bridge in 2012.  “This is an incredibly fast-moving space,” the Chairman said of the need for flexibility, “and any policy that pre-judges or predicts the future runs a great risk of unintended and unfortunate consequences.”

That's a good point.  But it's also a perfect description of last year’s Net Neutrality rulemaking.  During a year of proceedings, the FCC turned up next to no evidence of an actual problem, let alone a market failure.  Still, the agency stuck doggedly to its first principles, insisting after-the-fact that “prophylactic” rules limiting network management technologies of the future were essential to maintaining a “level playing field.”  Never mind that the playing field showed no signs of imbalance, or that it continued to evolve dramatically (iPhone, iPad, Android and Verizon’s LTE introduction, for starters) as deliberations dragged on in a regulatory vacuum.

One “unintended and unfortunate consequence” of that and similar missteps has already become clear—Congress doesn’t trust the Chairman to follow the law.

Which is, I suspect, the main reason incentive auction authority hasn’t yet passed, even though nearly everyone agrees it’s the best short-term solution to a spectrum crisis of the government’s own making.  And why, when it does come, there are likely to be plenty of strings attached.

Which is too bad.  Because, if the FCC really acted as the expert agency it is chartered to be, Genachowski would be right about the value of flexibility.

We've recently added another dozen posts to the Media page. These include several articles and interviews related to Larry's efforts to help stop dangerous copyright legislation pending in Congress, known as SOPA and Protect IP. Larry also provides detailed analysis of a more sensible alternative proposal from Sen. Ron Wyden and Rep. Darrell Issa, the OPEN Act.

Larry appeared last month on the PBS Newshour to debate domain name seizures and the pending bills with a representative of the U.S. Chamber of Commerce. Video and a transcript are on the Media page.

Larry has also been busy with FCC stories, including the clumsy release of the staff report on the failed AT&T/T-Mobile merger and the subsequent revelations about manipulation of the agency's important "spectrum screen," a key metric in merger reviews that was grossly mishandled. (Congress is looking into how the agency fumbled so badly.) With the collapse of the deal, watch for louder cries about the spectrum crisis and the lack of any solution to it.

A New Year's Day post at Forbes on the dimming prospect of consumer electronics retailer Best Buy generated tremendous response, including over 2.3 million page views on the site and thousands of comments there and elsewhere, most by customers with horror stories to tell. Follow-up and fall-out to come.

I’ve written several articles in the last few weeks critical of the dangerously unprincipled turn at the Federal Communications Commission toward a quixotic, political agenda.  But as I reflect more broadly on the agency’s behavior over the last few years, I find something deeper and even more disturbing is at work.  The agency’s unreconstructed view of communications, embedded deep in the Communications Act and codified in every one of hundreds of color changes on the spectrum map, has become dangerously anachronistic.

The FCC is required by law to see separate communications technologies delivering specific kinds of content over incompatible channels requiring distinct bands of protected spectrum.  But that world ceased to exist, and it's not coming back.  It is as if regulators from the Victorian Age were deciding the future of communications in the 21st century.  The FCC is moving from rogue to steampunk.

With the unprecedented release of the staff’s draft report on the AT&T/T-Mobile merger, a turning point seems to have been reached.  I wrote on CNET  (see “FCC:  Ready for Reform Yet?”) that the clumsy decision to release the draft report without the Commissioners having reviewed or voted on it, for a deal that had been withdrawn, was at the very least ill-timed, coming in the midst of Congressional debate on reforming the agency.  Pending bills in the House and Senate, for example, are especially critical of how the agency has recently handled its reports, records, and merger reviews.  And each new draft of a spectrum auction bill expresses increased concern about giving the agency “flexibility” to define conditions and terms for the auctions.

The release of the draft report, which edges the independent agency that much closer to doing the unconstitutional bidding not of Congress but the White House, won’t help the agency convince anyone that it can be trusted with any new powers.   Let alone the novel authority to hold voluntary incentive auctions to free up underutilized broadcast spectrum.

What is the Spectrum Screen Really Screening, Anyway?

One particularly disturbing feature of the report was what appears to be a calculated jury-rigging of the spectrum screen, as I wrote in an op-ed for The Hill.  (See “FCC Plays Fast and Loose with the Law…Again”)  For the first time since introducing the test as a way to simplify merger review, the draft report lowers the amount of spectrum it believes available for mobile use, even as technology continues to make more spectrum usable.  The lower total added 82 markets in which the screen would have been triggered, though the staff report in any case never actually performs the analysis of any local market.

The rationale for the adjustment is hidden in a non-public draft of an order on the transfer of Qualcomm’s FLO-TV licenses to AT&T, an order that is only just now circulating among the Commissioners.  Indeed, the Qualcomm order was circulated only a day before the T-Mobile report was released to the public and (in unredacted form) to the DoJ.

(Keeping draft documents private is the normal course of business at the agency—the T-Mobile report being the rare and disturbing exception of releasing a report before even the Commissioners have reviewed or voted on it, here in obvious hopes of influencing the Justice Department’s antitrust litigation).

In the draft Qualcomm order, according to a footnote in the draft T-Mobile report, agency staff propose a first-time-ever reduction in the total amount of usable spectrum that forms the basis of the screen.  (Under the test, if the total spectrum of the combined entity in a market is less than a third of the usable spectrum, the market is presumed competitive and no analysis is required.)

For purposes of the T-Mobile analysis, the unexplained reduction is assumed to be acceptable to the Commission and applied to calculations of spectrum concentration in each of the local Cellular Market Areas.  (The calculation also assumes AT&T has the pending Qualcomm spectrum.)  Notably, without the reduction the number of local markets in which the screen would be triggered goes down by a third.
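The mechanics of the screen reduce to a single inequality.  A toy sketch in Python (the MHz figures are invented; only the one-third threshold comes from the test itself):

```python
def screen_triggered(combined_holdings_mhz: float,
                     usable_total_mhz: float) -> bool:
    """The FCC spectrum screen: a market warrants further competitive
    analysis only if the merged carrier would hold more than one
    third of the spectrum deemed 'usable' in that market."""
    return combined_holdings_mhz > usable_total_mhz / 3

# Shrinking the denominator -- the staff's unexplained reduction in
# total usable spectrum -- flips markets from 'presumed competitive'
# to 'triggered' without any change in a carrier's actual holdings.
assert not screen_triggered(95, 300)  # 95 < 100: presumed competitive
assert screen_triggered(95, 270)      # 95 > 90: triggered
```

Which is why a quiet change to the usable-spectrum total can add dozens of "triggered" markets at a stroke, as the draft report's own numbers show.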

Asked in a press conference today about the curious manipulation, FCC Chairman Genachowski refused to comment.

The spectrum screen, by the way, never made much sense.  Its gross oversimplification of total usable spectrum, for one thing, hides a ridiculous assumption that all bands of usable spectrum are equally usable, defying the most basic physics of mobile communications.  With a wink to the apples-and-oranges nature of different bands, since 2004 the agency has decided more or less arbitrarily to increase the total amount of “usable” spectrum by including some new bands of usable spectrum and not others, with little rhyme or reason.

The manipulation of the spectrum screen’s coefficients, in fact, has no rationale other than to fast-track some preferred mergers and create regulatory headaches for others.  In truth, a screen that counted all spectrum actually being used for mobile communications, and counted it equally, would suggest that Sprint, in combination with its subsidiary Clearwire, is the only dangerously monopolistic holder of spectrum assets.  As Chart 38 of the FCC’s 15th Annual Mobile Competition Report suggests, Sprint and Clearwire hold more “spectrum” than any other carrier—enough to trigger the screen in most if not all CMAs.  That is, if it were all counted.


That isn’t necessarily the right outcome either.  Much of Clearwire’s spectrum is in the bands above 1 GHz, and, at least for now, those bands are usable but not as attractive for mobile communications as other, lower bands.

As the Mobile Competition Report notes, “these different technical characteristics provide relative advantages for the deployment of spectrum in different frequency bands under certain circumstances. For instance, there is general consensus that the more favorable propagation characteristics of lower frequency spectrum allow for better coverage across larger geographic areas and inside buildings, while higher frequency spectrum may be well suited for adding capacity.”

So not all spectrum is equal after all.  What, then, is the point or usefulness of the screen?  And what of this unmentioned judo move in the staff report, which suddenly changed the point of the screen from one that simplified merger review to a conclusive presumption against a finding of “public interest”?  The original point of the screen was to quickly eliminate competitive markets that don’t require detailed analysis.  In the AT&T/T-Mobile staff report, for the first time, it’s used to reject a proposed transaction if too many markets (how many is not indicated) are flagged as requiring that analysis.

But why continue to compare apples and oranges for any purpose, when the real data on CMA competition is readily available?  The only answer can be that the analysis wouldn’t yield the result that the agency had in mind when it started its review.  For in painstaking detail, the 15th Mobile Competition report also demonstrates that adoption is up, usage is off the charts, prices for voice, data, and text continue to plummet, investments in infrastructure continue at a dramatic pace despite the economy, and new sources of competitive discipline are proliferating, in the form of device manufacturers, mobile O/S providers, app developers, and inter-modal competitors.  For starters.

To conclude that AT&T’s interest in T-Mobile’s spectrum and physical infrastructure—an effort to overcome the failure of the FCC and local regulators to provide alternative spectrum or to allow infrastructure investments to proceed at an even faster pace—isn’t in the public interest requires the staff to ignore every piece of data the same staff, in another part of the space-time continuum, collected and published.  But so long as HHIs and spectrum concentration are manipulated and relied on to foreclose real analysis, it all makes sense.


A Rogue Agency Slips into Steampunk

That is largely the point of Geoff Manne’s detailed critique of the substance of the report posted here at TLF, and of my own ridiculously long post on Forbes.  (See “A Strategic Plan for the FCC.”)

The Forbes piece tries to put the staff report into the context of on-going calls for agency reform that were working their way through Congress even before the release.  In it, I conclude that the real problem for the agency is that even with the significant changes of the Telecommunications Act of 1996, the agency is still operating in a stovepipe model, where different communications technologies (cable, cellular, wire, satellite, “local”) are still regulated separately, with different bureaus and in many cases different regulations.

The model assumes that audio and video programming are different from data communications, offered by different industries using incompatible, single-purpose technologies.  A television is not a phone or a radio or a computer.  Broadcast is only for programming, cellular only for voice, satellites only for industrial use.  Cable is an inconveniently novel form of pay television, and data communications are only for large corporations with mainframe computers.

Those siloed regulations are further fragmented by attaching special regulatory conditions to individual license transfers and individual bands of spectrum as part of auctions.  Dozens of unrelated and seemingly random requirements were added to the Comcast-NBC Universal deal, for example.  At the last minute the agency added an eccentric version of the net neutrality rules to the 2008 auction for 700 MHz spectrum, but only for the C block.

The agency continues to operate under an anachronistic view that distinct technologies support distinct forms of communications (radio, TV, cable, data).  But the world has shifted dramatically under its feet since 1996.  The convergence of nearly all networks to the Internet’s single, non-proprietary standard of packet-switched, digital networks operating under TCP/IP protocols has been nothing short of a revolution in communications.  But it’s a revolution the agency sat out.  It has no idea what role it ought to play in the post-apocalyptic world; nor has Congress given it one.

As different kinds of communications technologies have all (or nearly all) converged on IP, communications applications have blurred beyond the ability to distinguish them.  Voice communications are now offered over data networks, data is flowing over the wires, TV is everywhere, and mobile devices that were unimaginable in 1996 now do everything.

Quite simply, the mismatch between the agency’s structure and the reality of a single digital, virtual network treating all content as bits regardless of the technology or the source that transports it has left the agency unable to cope or to regulate rationally.  Consider some of the paradoxes the agency has been forced to wrestle with in recent years:

  • Is Voice over IP to be regulated as a traditional voice service, with barnacled requirements for Universal Service contributions and 911 service, and if so, how?
  • Is TV on the Internet, delivered using any and every possible technology including wireless, fiber, copper, and cable, subject to the same Victorian standards of decency as broadcast TV, itself now entirely digital?
  • Is the public interest served when mobile providers combine spectrum and infrastructure assets, largely to overcome the agency’s own paralysis in moving the deeply fractured spectrum map into even the 20th century and the incompetent and corrupt local zoning agencies that hold up applications for new towers and antennae until the proper tribute is rendered?

In the face of these paradoxes, the FCC has become ungrounded, a victim of its own governing statute, which in many respects requires it to remain anachronistic.  Left without clear guidance from Congress on how or whether to regulate applications (that’s really all we have now—applications, independent of technology), the agency increasingly improvises.

It’s like the wonderful genre of fiction and film known as “steampunk,” where modern technology is projected anachronistically into the past, exploring what life would have been like if the 19th century had robots, flight, information processing, and modern armaments, all powered by the steam engine.  (Steampunk has since become a popular design genre as well, including some functioning devices wrapped in steampunk elements, as in the photo below.)

A Steampunk Computer

It's cute on film, but applied to the real world it's simply dangerous.  The FCC is required by law to keep its head in the sand with respect both to the realities of digital technology and the economics of the modern communications ecosystem.  Yet its natural desire to regulate something leaves the Commission flailing wildly in the dark for a foothold for its ancient regulatory structure in a world it doesn’t inhabit.

The Open Internet Notice of Proposed Rulemaking, for example, asked helplessly in over 80 separate paragraphs for education and updates on the nature of the revolution spurred by the deployment of broadband Internet. ("We seek more detailed comment on the technological capabilities available today, as offered for sale and as actually deployed in providers’ networks.")  Of course it had to ask these questions – the agency never regulated broadband.  Under the 1996 Act, as the 2005 Brand X case emphasizes, it never could.

Consider just a few of the absurd counterfactuals that the agency’s steampunk policies have led it to in just the last few years (more examples greatly appreciated, by the way):

  • Broadband isn’t being deployed in a “reasonable and timely fashion” (2011 Section 706 Broadband Report)
  • The mobile communications market is not “effectively competitive” (14th and 15th Mobile Competition Reports)
  • High concentrations of customers and spectrum, calculated using rigged HHIs and spectrum screens, are sufficient to raise presumptive antitrust concerns regardless of actual effects on competition and consumer welfare (AT&T/T-Mobile draft memo)
  • Spectrum suitable for mobile use is decreasing (AT&T/Qualcomm memo)
  • Despite a lack of any examples, broadband providers “potentially face at least three types of incentives to reduce the current openness of the Internet” (Open Internet order)
  • Encouraging competition and protecting consumer choice “cannot be achieved by preventing only those practices that are demonstrably anticompetitive or harmful to consumers” (Open Internet order)
  • The agency “expect[s] the costs of compliance with our prophylactic rules to be small” (Open Internet order)
  • Absent a mandatory data roaming regime for mobile broadband, “there will be a significant risk that fewer consumers would have nationwide access to competitive mobile broadband services….” (Data Roaming order)

Not that there isn’t considerable expertise within the agency, and glimmers of understanding that manage to escape in whiffs from the steam pipes.  The 2010 National Broadband Plan, developed with a great deal of both internal and external agency expertise, does an admirable job of describing the current state of the broadband environment in the U.S.  More impressive, the later chapters predict with considerable vision the application areas that will drive the next decade of broadband deployment and use, including education, employment, health care and the smart grid.

The NBP, unfortunately, is the exception.  More and more of the agency’s reports, orders, and decisions instead bury the expertise, forcing ridiculous conclusions through an implausible lens of nostalgia and distortion.  The agency’s statutorily mandated hold on a never-realistic glorious communications past is increasingly threatening the health of the real communications ecosystem--an even more glorious (largely because unregulated) communications present.

 

I Love it When a Plan Comes Together

The FCC’s steampunk mentality is threatening to wreak havoc on the natural evolution of the Internet revolution.  It’s also turning the FCC from a respected, statutorily “independent” agency that answers to Congress and not the White House into a partisan monster, pursuing an agenda that’s light on facts and heavy on the politics of the administration and favored participants in the Internet ecosystem.  The agency relies on clichés and unexamined mantras rather than data—even its own data.  Mergers are bad, edge providers are good, and the agency doesn’t acknowledge that many of the genuine market failures that do exist are creatures of its own stovepipes.

As I note in the long Forbes piece, there was a simple, elegant way to avoid the steampunk phenomenon—an alternative that would have saved the FCC from increasing obsolescence and the rest of us from its increasingly bizarre and disruptive regulatory behavior.  And it came from within the walls of FCC headquarters.

In 1999, in the midst of the first great Web boom, then-chairman William Kennard (a Democratic appointee) had a vision for the future of communications that has proven to be entirely accurate.  Kennard created a short, straightforward “strategic plan” for the agency that emphasized breaking down the silos.  It also took a realistic view of the agency’s need and ability to regulate an IP world, encouraging future Chairmen to get out of the way of a revolution that would provide far more benefit to consumers if left to police itself than with an FCC trying to play constant catch-up.

Kennard also proposed dramatic reform of spectrum policy, recognizing, as is now obvious, that imprinting the agency’s stovepiped model for communications like a tattoo on the radio waves was unnecessarily limiting the uses and usefulness of mobile technology, creating artificial scarcity and, eventually, a crisis.

In just a few pages, the strategic plan lays out an alternative, including flexible allocations that wouldn’t require FCC permission to change uses, market-based mechanisms to ensure spectrum moved easily to better and higher uses (no lingering conditions), even the creation of a spectrum inventory (still waiting).  The plan also called for incentive systems for spectrum reallocation, an interoperable public safety network, and expanded use of unlicensed spectrum.  All reforms that we’re still violently agreeing need to be made.

We’ve arrived, unfortunately, at precisely the future Kennard hoped to avoid.  And we’re still moving, at accelerating speeds, in precisely the wrong direction.  Instead of working to ease spectrum restrictions and leave the “ecosystem” (the FCC’s own term) to otherwise police itself, recent NPRMs and NOIs suggest an agency determined to leverage its limited broadband authority into as many aspects of the converged world as possible.  As the Free State Foundation’s Seth Cooper recently wrote, today’s FCC has developed a “proclivity to import legacy regulations into today's IP world when doing so makes little or no sense.”

Fun’s fun.  I like my steampunk as well as anybody.  But I’d prefer to see it on a mobile broadband device, or over Netflix streamed through my IP-enabled television or game console.  Or anywhere else other than at the FCC.

 

We've recently added over two dozen new posts to the Media page. Most have to do with SOPA, the Stop Online Piracy Act, introduced a few weeks ago in Congress to cheers from the entertainment industry and jeers from Silicon Valley. The bill would make it easier--too easy--for copyright and trademark holders to cut off Web content they don't like.

Larry's early analysis of the bill for CNET, and his on-going work on the poor relations between Hollywood and Palo Alto, led to a great deal of press coverage and speaking engagements. His detailed review of the bill was praised across the political spectrum, including by TechDirt's Mike Masnick and the National Review's Reihan Salam.

Larry participated in a Capitol Hill debate on SOPA and other pending piracy legislation sponsored by the Congressional Internet Caucus, debating the bill against industry representatives. He also appeared on CNET's Reporters' Roundtable and This Week in Law, as well as podcasts by the Heartland Institute.

Net neutrality also stayed in the news, as did the AT&T/T-Mobile merger, privacy, spectrum reform and online human rights. Larry was quoted in a wide range of publications on these topics, including Politico, Reason, NPR's Marketplace, the Wall Street Journal and the Daily Caller.  Conference footage from this year's Compass Summit panels on privacy and tech policy is also available.