I don’t have a great deal to add to coverage of last week’s big patent story, which concerned the filing of a complaint by Microsoft co-founder Paul Allen against major technology companies including Apple, Google, Facebook and Yahoo. Dionne Searcey of The Wall Street Journal, Tom Krazit at CNET News.com, and Mike Masnick on Techdirt pretty much lay out as much as is known so far.

But given the notoriety of the case and the scope of its claims (the Journal, or at least its headline writer, has declared an all-out “patent war”), it seems like a good opportunity to dispel some common myths about the patent system and its discontents.

And then I want to offer one completely unfounded theory about what is really going on that no one yet has suggested. Which is: Paul Allen is out to become the greatest champion that patent reform will ever know.



Emotions ran high at this week’s Privacy Identity and Innovation conference in Seattle.  They usually do when the topic of privacy and technology is raised, and to me that was the real take-away from the event.

As expected, the organizers did an excellent job providing attendees with provocative panels, presentations and keynote talks—in particular a standout presentation from my former UC Berkeley colleague Marc Davis, who has just joined Microsoft.

There were smart ideas from several entrepreneurs working on privacy-related startups, and deep thinking from academics, lawyers and policy analysts.

There were deep dives into new products from Intel, European history and the metaphysics of identity.

But what interested me most was just how emotional everyone gets at the mere mention of private information, or what is known in the legal trade as “personally-identifiable” information.  People get worked up just thinking about how it is being generated, collected, distributed and monetized as part of the evolution of digital life.  And pointing out that someone is having an emotional reaction often generates one that is even more primal.

Privacy, like the related problems of copyright, security, and net neutrality, is often seen as a binary issue.  Either you believe governments and corporations are evil entities determined to strip citizens and consumers of all human dignity or you think, as leading tech CEOs have the unfortunate habit of repeating, that privacy is long gone, get over it.

But many of the individual problems that come up are much more subtle than that.  Think of Google Street View, which has generated investigations and litigation around the world, particularly in Germany, where, as Jeff Jarvis pointed out, the same citizens who object to photos of their houses think nothing of naked co-ed saunas.

Or how about targeted or personalized or, depending on your conclusion about it, “behavioral” advertising?  Without it, whether on broadcast TV or the web, we don’t get great free content.  And besides, the more targeted advertising is, the less we have to look at ads for stuff we aren’t the least bit interested in and the more likely that an ad isn’t just an annoyance but is actually helpful.

On the other hand, ads that suggest products and services I might specifically be interested in are “creepy.”  (I find them creepy, but I expect I’ll get used to it, especially when they work.)

And what about governments?  Governments shouldn’t be spying on their citizens, but at the same time we’re furious when bad guys aren’t immediately caught using every ounce of surveillance technology in the arsenal.

Search engines, mobile phone carriers and others are berated for retaining data (most of it not even linked to individuals, or at least not directly) and at the same time are required to retain it for law enforcement purposes.  The only difference is the proposed use of the information (spying vs. public safety), which can only be known after data collection.

As comments from Jeff Jarvis and Andrew Keen in particular got the audience riled up, I found myself having an increasingly familiar but strange response.  The more contentious and emotional the discussion became, the more I found myself agreeing with everything everyone was saying, including those who appeared to be violently disagreeing.

We should divulge absolutely everything about ourselves!  No one should have any information about us without our permission, which governments should oversee because we’re too stupid to know when not to give it!  We need regulators to protect us from corporations; we need civil rights to protect us from regulators.

Logical Systems and Non-Rational Responses

I can think of at least two important explanations for this paradox.  The first is a mismatch of thought systems.  Conferences, panel discussions, essays and regulation are all premised on rational thinking, logic, and reason.  But the more the subject of these conversations turns to information that describes our behavior, our thoughts, and our preferences, the more the natural response is not rational but emotional.

Try having a logical conversation with an infant—or a dog, or a significant other who is upset—about its immediate needs.  Try convincing someone that their religion is wrong.  Try reasoning your way out of or into a sexual preference.  It just doesn’t work.

Which raises at least one interesting problem.  Privacy is not only an emotional subject, it’s also increasingly a profitable one.  According to a recent Wall Street Journal article, venture capitalists are now pouring millions into privacy-related startups.  Intel just offered $8 billion for security service provider McAfee.  Every time Facebook blinks, the blogosphere lights up.

So the mismatch of thought systems will lead to more, not fewer, collisions over time.

Given that, how does a company develop a strategic plan in the face of unpredictable and emotional responses from potential users, the media, and regulators?  Strategic planning, to the extent anyone really does it seriously, is based on cold, hard facts—as far from emotion as its practitioners can possibly get.  The patron saint of management science, after all, is Frederick Winslow Taylor who, among other things, pioneered time-and-motion studies to achieve maximum efficiency of human “machines.”

But the rational vehicle of planning simply crumples against the brick wall of emotion.

As I wrote in an early chapter of “The Laws of Disruption,” for example, companies experimenting with early prototypes of radio frequency ID tags (still not ready for mass deployment ten years later) could never have predicted the violent protests that accompanied tests of the tags in warehouses and factories.

Much of that protest was led by a woman who believes that RFID tags are literally the technology prophesied by the Book of Revelation as the sign of the Antichrist.  Assuming one is not an agent of the devil, or in any case isn’t aware that one is, how do you plan for that response?

The more that intimacy becomes a feature of products and services, including products and services aimed at managing intimate information, the more the logical religion of management science will need to incorporate non-rational approaches to management, scenario planning and economics.

It won’t be easy—the science of management science isn’t very scientific in the first place and, as I just said, changing someone’s religion doesn’t happen through rational arguments—the kind I’m making right now.

The Bankruptcy of the Property Metaphor for Information

The second problem that kept hitting me over the head during PII 2010 was one of linguistics.  Which is:  the language everyone uses to talk about (or around) privacy.  We speak of ownership, stealing, tracking, hijacking, and controlling.  This is the language of personal property, and it’s an even worse fit for the privacy conversation than is the mental discipline of logic.

In discussions about information of any kind, including creative works as well as privacy and security, the prevailing metaphor is to talk about information as a kind of possession.  What kind?  That’s part of the problem.  Given the youth of digital life and the early evolution of our information economy, most of us really only understand one kind of property, and that is where our minds inevitably and often unintentionally go.

We think of property as the moveable, tangible variety—cattle, collectibles, commodities—that in legal terminology goes by the name “chattels.”

Only now has that metaphor become a serious obstacle.  While there has been a market for information for centuries, the revolutionary feature of digital life is that it has, for the first time in human history, separated information from the physical containers in which it has traditionally been encapsulated, packaged, transported, retailed, and consumed.

A book is not the ideas in the book, but a book can be bought, sold, controlled, and destroyed.  A computer tape containing credit card transactions is not the decision-making process of the buyers and sellers of those transactions, but a tape can be lost, stolen, or sold.

When information could only be used by first reducing it to physical artifacts, the property metaphor more-or-less worked.  Control the means of production, and you controlled the flow of information.  When Gutenberg perfected movable type, the first thing he printed was the Bible, still in Latin.  Hand-made manuscripts and a dead language had given the medieval Catholic Church a monopoly on the mystical.  Turn the means of production over to the people, as Luther’s German Bible soon did, and you have the Protestant Reformation and the beginning of censorship--a legal control on information.

The digital revolution makes the liberation of information all the more potent.  Yet in all conversations about information value, most of us move seamlessly and dangerously between the medium—the artifact—and the message—the information.

But now that information can be used in a variety of productive and destructive ways without ever taking a tangible form, the property metaphor has become bankrupt.  Information is not property the way a barrel of oil is property.  The barrel of oil can be possessed by only one person at a time.  It can be converted, but only once, into lubricants or gasoline, or left in its crude form.  Once the oil is burned, the property is gone.  In the meantime, the barrel of oil can be stolen, tracked, and moved from one jurisdiction to another.

Digital information isn’t like that.  Everyone can use it at the same time.  It exists everywhere and nowhere.  Once it’s used, it’s still there, and often more valuable for having been used.  It can be remixed, modified, and adapted in ways that create new uses, even as the original information remains intact and usable in the original form.

Tangible property obeys the law of supply and demand, as does information forced into tangible containers.  But information set free from the mortal coil obeys only the law of networks, where value is a function of use and not of scarcity.
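The contrast between the two kinds of goods can be made concrete with a toy sketch.  Everything here is an illustrative assumption added for this example: the function names are invented, and the n-squared growth rule is Metcalfe's rough heuristic for network value, not a claim made in the text.

```python
# Toy contrast between a rival good (a barrel of oil) and a non-rival
# good (information). Purely illustrative; the numbers and the
# Metcalfe-style n**2 heuristic are assumptions, not an economic model.

def rival_good_value(units: int, uses: int) -> int:
    """A rival good is consumed: each use removes a unit from the stock,
    so the value that remains is whatever stock survives."""
    return max(units - uses, 0)

def network_value(users: int) -> int:
    """Metcalfe's rough heuristic: network value grows with roughly the
    square of the number of users, because everyone can use the same
    information simultaneously without depleting it."""
    return users * users

if __name__ == "__main__":
    # Ten barrels of oil, used ten times: nothing is left.
    print(rival_good_value(units=10, uses=10))    # 0
    # The same information shared with 10, then 100 users: value grows.
    print(network_value(10), network_value(100))  # 100 10000
```

The point of the sketch is only the shape of the two curves: the rival good's value falls monotonically to zero with use, while the networked good's value rises with each additional user.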

But once the privacy conversation (as well as the copyright conversation) enters the realm of the property metaphor, the cognitive dissonance of thinking everyone is right (or wrong) begins.  Are users of copyrighted content “pirates”?  Or are copyright holders “hoarders”?  Yes.

(“Intellectual property,” as I’ve come to accept, is an oxymoron.  That’s hard for an IP lawyer to admit!)

It’s true that there are other kinds of property that might better fit our emerging information markets.  Real estate (land) is tangible but immovable.   Use rights (e.g., a ticket to a movie theater, the right to drill under someone’s land or to block their view) are also long established.

But both the legal framework and the economic theory describing these kinds of property are underdeveloped at the very least.  Convincing everyone to shift their property paradigm would be hard when the new location is so barren.

Here are a few examples of the problem from the conference.  What term would make consumers most comfortable with a product that helps them protect their privacy, one speaker asked the audience.  Do we prefer “bank,” “vault,” “dossier,” “account,” etc.?

“Shouldn’t consumers own their own information?” an attendee asked, a double misuse of the word “own.”   Do you mean the media on which information may be stored or transferred, or do you mean the inherent value of the bits (which is nothing)?  In what sense is information that describes characteristics or behaviors of an individual that person’s “own” information?

And what does it mean to “own” that information?  Does ownership bring with it the related concepts of being bought, sold, transferred, shared, waived?  What about information that is created by combining information—whether we are talking about Wikipedia or targeted advertising?  Does everyone or no one own it?

And by ownership, do we mean the rights to derive all value from it, even when what makes information valuable is the combining, processing, analyzing and repurposing done by others?  Doesn’t that part of the value generation count for something in divvying up the monetization of the resulting information products and services?  Or perhaps everything?

Human beings need metaphors to discuss intangible concepts like immortality, depression, and information.  But increasingly I believe that the property metaphor applied to information is doing more harm than good.  It makes every conversation about privacy a conversation of generalizations, and generalizations encourage the visceral responses that make it impossible to make any progress.

Perhaps that’s why survey after survey reveals both that consumers care very much about the erosion of a zone of privacy in their increasingly digital lives and, at the same time, give up intimate information the moment a website asks them for it.  (I agree with everything and its opposite.)

There’s also a more insidious use of language and metaphor to steer the conversation toward one view of property or another—privacy as personal property or privacy as community property.  Consider, for example, how the question is asked, e.g.:

“My cell phone tracks where I go”

versus

“My cell phone can tell me where I am.”

A recent series of articles in The Wall Street Journal dealing with privacy (I won’t bother linking to it, because the Journal believes the information in those articles is private and property and won’t share it unless you pay for a subscription, but here is a “free” transcript of a conversation with the author of the articles on NPR’s “Fresh Air”) made many factual errors in describing current practices in on-line advertising.  But those aside, what made the articles sensational was not so much what they reported but the adjectives and pronouns that went with the facts.

Companies know a lot “about you,” for example, from your web surfing habits (in fact they know nothing about “you,” but rather about your computer, whoever may be using it), cookies are a kind of “surveillance technology” that “track” where “you” go and what “you do,” and often “spawn” themselves without “your” knowledge.

Assumptions about the meaning of loaded terms such as ownership, identity and what it means for information to be private poison the conversation.  But anyone raising that point is immediately accused of shilling for corporations or law enforcement agencies who don't want the conversation to happen at all.

A User- and Use-Based Model – Productive and Destructive Uses

So if the property metaphor is failing to advance an important conversation—both of a business and policy nature—what metaphor works better?

As I wrote in “The Laws of Disruption,” I think a better way to talk about information as an economic good is to focus on information users and information uses.  “Private” information, for starters, is private only depending on the potential user.  Whether that user is our spouse, our employer, an advertiser or a law-enforcement agent, in other words, can make all the difference in the world as to whether we consider some information private or not.  Context is nearly everything.

Example:  Is location tracking software on cell phones or embedded chips an invasion of privacy?  It is if a government agency is intercepting the signals, and using them to (fill in the blank).  But ask a parent who is trying to find a missing child, or an adult child trying to find a missing and demented parent.  It’s not the technology; it’s the user and the use.

Use, likewise, often empties much of the emotional baggage that goes with conversations about privacy in the abstract.  A website asks for my credit card number—is that an invasion of my privacy?  Well, not if I’m trying to pay for my new television set from Amazon with a credit card.  On the other hand, if I’m signing up for a free email newsletter, there’s certainly something suspicious about the question.

To simplify a long discussion, I prefer to talk about information of all varieties through a lens of “productive” (uses that add value to information, e.g., collaboration) and “destructive” (uses that reduce the value of information, e.g., “identity” “theft”).  Though it may not be a perfect metaphor (many uses can be both productive and destructive, and the metrics for weighing both are undeveloped at best), I find it works much better in conversations about the business and policy of information.

That is, assuming one isn’t simply in the mood to vent and rant, which can also be fun, if not productive.

The Progress and Freedom Foundation has just published a white paper I wrote for them titled "The Seven Deadly Sins of Title II Reclassification (NOI Remix)."  This is an expanded and revised version of an earlier blog post that looks deeply into the FCC's pending Notice of Inquiry regarding broadband Internet access. You can download a PDF here.

I point out that beyond the danger of subjecting broadband Internet to extensive new regulations under the so-called "Third Way" approach outlined by FCC Chairman Julius Genachowski, a number of other troubling features in the Notice indicate an even broader agenda for the agency with regard to the Internet.

These include:

  • Pride: As the FCC attempts to define what services would be subjected to reclassification, the agency runs the risk of both under- and over-inclusion, which could harm consumers, network operators, and content and applications providers.
  • Lust: The agency is reaching out for additional powers beyond its reclassification proposals — including an effort to wrest privacy enforcement powers from the Federal Trade Commission and to put itself in charge of cybersecurity for homeland security.
  • Anger: The "Third Way" may dramatically expand the scope of federal wiretapping laws, requiring law enforcement "back doors" for a wide range of products and services.
  • Gluttony: Reclassifying broadband opens the door to state and local government regulation, which would overwhelm Internet access with a deluge of conflicting, and innovation-killing, laws, rules and new consumer taxes.
  • Sloth: As the FCC looks for a legal basis to defend reclassification, basic activities — such as caching, searching, and browsing — may for the first time be included in the category of services subject to "common carrier" regulation.
  • Vanity: Though wireless networks face greater challenges from the broadband Internet than wireline networks, the FCC seems poised to impose more, not less, regulation on wireless broadband.
  • Greed: Reclassification of broadband services could vastly expand the contribution base for the Universal Service Fund, adding new consumer fees while supersizing this important, but exceedingly wasteful, program.

I'm grateful to PFF, especially Berin Szoka, Adam Marcus, Mike Wendy and Adam Thierer, for their interest and help in publishing the article.

Larry has an op-ed titled "Net Neutrality is a Technical, not a Political Problem" in today's San Francisco Chronicle.  He responds to an editorial that ran earlier this week in the paper calling for the FCC to pull the trigger on reclassifying broadband as a "common carrier" service.

I’ve just published a long analysis for CNET of the proposed legislative framework presented yesterday by Google and Verizon.

The proposal has generated howls of anguish from the usual suspects (see Cecilia Kang, “Silicon Valley criticizes Google-Verizon accord” in The Washington Post; Matthew Lasar’s “Google-Verizon NN Pact Riddled with Loopholes” on Ars Technica and Marguerite Reardon’s “Net neutrality crusaders slam Verizon, Google” at CNET for a sampling of the vitriol).

But after going through the framework and comparing it more-or-less line for line with what the FCC proposed back in October, I found there were very few significant differences.  Surprisingly, much of the outrage being unleashed against the framework relates to provisions and features that are identical to the FCC’s Notice of Proposed Rulemaking (NPRM), which of course many of those yelling the loudest ardently support.

At the outset, one obvious difference that many reporters and commentators keep missing (in some cases, intentionally), is that the Google-Verizon framework has absolutely no legal significance.  It’s not a treaty, accord, agreement, deal, pact, contract or business arrangement—all terms still being used to describe it.  It doesn’t bind anyone to do anything, including Google and Verizon.

All that was released yesterday was a legislative proposal they hope will be taken up by lawmakers who actually have the authority to write legislation.  But you’d think from some of the commentary that this was the Internet equivalent of the secret treaty between Germany and Russia at the start of World War II.  Some commentators sound genuinely disappointed that something more nefarious, as had been widely and wildly reported last week, didn’t emerge.

Summary – Compare and Contrast

Let’s start with the similarities, described in more detail in the CNET piece:

  • Both propose neutrality rules that are nearly identical, including no blocking of lawful content, no blocking of lawful devices, network management transparency, and nondiscrimination.  Of these, only the wording of the nondiscrimination rule is different (more on that below).
  • Both limit the application of the rules to principles of reasonable network management.
  • Both exclude from application of the rules certain IP-based services that may run on the same infrastructure but which are offered to business or consumer customers as paid services, such as digital cable or digital voice today and others perhaps tomorrow.  The NPRM calls these “managed or specialized services,” the framework refers to them as “differentiated services.”
  • Both propose that the FCC enforce the rules by adjudicating complaints on a “case-by-case” basis.
  • Both recognize that some classes of Internet content (e.g., voice and video) must receive priority treatment to maintain their integrity, and don’t consider such prioritization by class to be a violation of the rules.
  • Both encourage the resolution of network management and other neutrality related disputes through technical organizations, engineering task forces, and other kinds of self-regulation, much as the Internet protocols have always been developed and maintained.

Again, much of the ire raised at the framework relates to aspects for which there is no material difference with the NPRM.

Now let’s get to the differences:

  • The Google-Verizon framework would exclude wireless broadband Internet from application of the rules, at least for now.  Though the NPRM recognized there were significant limits to the existing wireless infrastructure (spectrum, speed, coverage, towers) that made it more difficult to allow customers to use whatever bandwidth-hogging applications they wanted, the NPRM came down on the side of applying the rules to wireless.  This was perhaps the most contentious feature of the NPRM, judging from the comments filed.

Google has notably changed its tune on wireless broadband.  In the joint filing with the FCC on the NPRM, the companies acknowledged this was an area where they held opposite views—Google believed the rules should apply to wireless broadband, Verizon did not.  Now both agree that applying the rules here would do more harm than good, if only until the market and technology evolve further.

  • The framework would deny the FCC the power to expand or enhance the rules through further rulemakings.  Though the framework is admittedly not at its clearest here, what Google and Verizon seem to have in mind is that Congress, not the FCC, would enact the neutrality rules into law and give the FCC the power to enforce them.

But the FCC would remain unable to make its own rules or otherwise regulate broadband Internet access, which is the current state of the law as most recently affirmed by the D.C. Circuit in the Comcast case.  The framework, in other words, joins the chorus arguing against the FCC’s effort to reclassify broadband under Title II and also assumes the NPRM would not be completed.

Reasonable Network Management

Let me just highlight one area of common wording that has received a great deal of negative feedback as applied to the framework and one area of difference.

Consider the definitions of “reasonable network management” that appear in both documents.

First, the NPRM:

Subject to reasonable network management, a provider of broadband Internet access service must treat lawful content, applications, and services in a nondiscriminatory manner.

We understand the term “nondiscriminatory” to mean that a broadband Internet access service provider may not charge a content, application, or service provider for enhanced or prioritized access to the subscribers of the broadband Internet access service provider, as illustrated in the diagram below. We propose that this rule would not prevent a broadband Internet access service provider from charging subscribers different prices for different services.

Reasonable network management consists of: (a) reasonable practices employed by a provider of broadband Internet access service to (i) reduce or mitigate the effects of congestion on its network or to address quality-of-service concerns; (ii) address traffic that is unwanted by users or harmful; (iii) prevent the transfer of unlawful content; or (iv) prevent the unlawful transfer of content; and (b) other reasonable network management practices.

Now, the Google-Verizon framework:

Broadband Internet access service providers are permitted to engage in reasonable network management. Reasonable network management includes any technically sound practice: to reduce or mitigate the effects of congestion on its network; to ensure network security or integrity; to address traffic that is unwanted by or harmful to users, the provider’s network, or the Internet; to ensure service quality to a subscriber; to provide services or capabilities consistent with a consumer’s choices; that is consistent with the technical requirements, standards, or best practices adopted by an independent, widely-recognized Internet community governance initiative or standard-setting organization; to prioritize general classes or types of Internet traffic, based on latency; or otherwise to manage the daily operation of its network.

Note here that the “unwanted by or harmful to users” language, for which the framework was skewered yesterday, appears in nearly identical form in the NPRM.


Here’s how the FCC’s “nondiscrimination” rule was proposed:

Subject to reasonable network management, a provider of broadband Internet access service must treat lawful content, applications, and services in a nondiscriminatory manner.

And here it is from the framework:

In providing broadband Internet access service, a provider would be prohibited from engaging in undue discrimination against any lawful Internet content, application, or service in a manner that causes meaningful harm to competition or to users.  Prioritization of Internet traffic would be presumed inconsistent with the non-discrimination standard, but the presumption could be rebutted.

That certainly sounds different (with the addition of “undue” as a qualifier and the requirement of a showing of “meaningful harm”), but here’s the FCC’s explanation of what it means by nondiscrimination and the limits that would apply under the NPRM:

We understand the term “nondiscriminatory” to mean that a broadband Internet access service provider may not charge a content, application, or service provider for enhanced or prioritized access to the subscribers of the broadband Internet access service provider....We propose that this rule would not prevent a broadband Internet access service provider from charging subscribers different prices for different services.

We believe that the proposed nondiscrimination rule, subject to reasonable network management and understood in the context of our proposal for a separate category of “managed” or “specialized” services (described below), may offer an appropriately light and flexible policy to preserve the open Internet. Our intent is to provide industry and consumers with clearer expectations, while accommodating the changing needs of Internet-related technologies and business practices. Greater predictability in this area will enable broadband providers to better plan for the future, relying on clear guidelines for what practices are consistent with federal Internet policy. First, as explained in detail below in section IV.H, reasonable network management would provide broadband Internet access service providers substantial flexibility to take reasonable measures to manage their networks, including but not limited to measures to address and mitigate the effects of congestion on their networks or to address quality-of-service needs, and to provide a safe and secure Internet experience for their users. We also recognize that what is reasonable may be different for different providers depending on what technologies they use to provide broadband Internet access service (e.g., fiber optic networks differ in many important respects from 3G and 4G wireless broadband networks). We intend reasonable network management to be meaningful and flexible.

Second, as explained below in section IV.G, we recognize that some services, such as some services provided to enterprise customers, IP-enabled “cable television” delivery, facilities-based VoIP services, or a specialized telemedicine application, may be provided to end users over the same facilities as broadband Internet access service, but may not themselves be an Internet access service and instead may be classified as distinct managed or specialized services. These services may require enhanced quality of service to work well. As these may not be “broadband Internet access services,” none of the principles we propose would necessarily or automatically apply to these services.

In this context, with a flexible approach to reasonable network management, and understanding that managed or specialized services, to which the principles do not apply in part or full, may be offered over the same facilities as those used to provide broadband Internet access service, we believe that the proposed approach to nondiscrimination will promote the goals of an open Internet.

Though the FCC doesn’t use the words “undue” and “meaningful harm,” the qualifying comments seem to suggest something quite similar.  So are the differences actually meaningful in the end?  Meaningful enough to generate so much Sturm und Drang?  You make the call.

At ten this morning, CNET News.com asked if I could write an article unraveling the legal implications of a rumored deal between Google and Verizon on net neutrality.  I didn't see how I could analyze a deal whose terms (and indeed, whose existence) are unknown, but I thought it was a good opportunity to make note of several positive developments in the net neutrality war this summer.

Just as I was finishing the piece a few hours later, another shocker came when the FCC announced it was concluding talks it had been holding since June with the major net neutrality stakeholders.  It's possible the leaked story about Google and Verizon, and the feverish response to it whipped up by the straggling remnants of a coalition aimed at getting an extreme version of net neutrality into U.S. law by any means necessary, soured the agency on what appeared to be productive negotiations.  Or maybe they've just gone as far as they can for now.

So I started over, and added emphasis to the outside-the-beltway developments that, in the end, may offer the best hope for a resolution to what is, after all is said and done, a technical problem requiring a technical solution.

I'll let the piece speak for itself, in part out of necessity--I'm pooped.   (I now have renewed sympathy and appreciation for the work of real journalists, which I am not.)  But had I had more time and more column inches, I would have emphasized one point I hope comes across in the story.  And that is that the politicization of problems of network management has done nothing to solve them.  It has done the opposite.

What's become even clearer in the last 24 hours is how the extremists in this largely-choreographed fight are determined not to have it end.  They don't care about free enterprise, consumers, or respect for the rule of law--though these are the principles they make the most noise about.  But that's just what it is, noise.

Memo to Silicon Valley:  you're wise to avoid as much as possible the politics of technology.  But the best way to take issues away from politicians is to solve them with engineering.