Category Archives: Criminal

The Fallacy of “e-personation” Laws

I was interviewed yesterday for the local Fox affiliate on Cal. SB1411, which criminalizes online impersonations (or “e-personation”) under certain circumstances.

On paper, of course, this sounds like a fine idea.  As Palo Alto State Senator Joe Simitian, the bill’s sponsor, put it, “The Internet makes many things easier.  One of those, unfortunately, is pretending to be someone else.  When that happens with the intent of causing harm, folks need a law they can turn to.”

Or do they?

The Problem with New Laws for New Technology

SB1411 would make a great exam question or short paper assignment for an information law course.  It's short, loaded with good intentions, and at first blush looks perfectly reasonable—just extending existing harassment, intimidation and fraud laws to the modern context of online activity.  Unfortunately, a careful read reveals all sorts of potential problems and unintended consequences.

A number of states have passed new laws in the wake of highly-publicized cyberstalking and bullying incidents, including the tragic case involving a young girl’s suicide after being dumped by her online MySpace boyfriend, who turned out to be a made-up character created for the purpose of hurting her feelings.  (I’ve written about the case before, see “Lori Drew Verdict Finally Overturned.” )

Missouri passed a cyberbullying law when it turned out there was no federal law that covered the behavior in the MySpace case.  Texas and New York recently enacted laws similar to SB 1411, though the Texas law applies only to impersonation on social media sites.

The general problem with these laws is that their authors aren't clear about exactly which behaviors they are trying to criminalize.  And, mindful of the fact that digital life evolves much faster than any legislative body can hope to keep up with, these laws are often written to be both too specific (the technology changes) and too broad (the behavior is undefined).  As a result, they often don't wind up covering the behavior they intend to deter, and, left on the books, they can come back to life when prosecutors need something to hang a case on that otherwise doesn't look illegal.

Given the proximity to free speech issues, the vagueness of many of these laws makes them good candidates for First Amendment challenges, and many have been struck down on exactly those grounds.

California’s SB 1411 as a Case in Point

SB1411, which last week passed in the State Senate, suffers from all of these defects.  It punishes the impersonation of an “actual person through or on an Internet Web site or by other electronic means for purposes of harming, intimidating, threatening or defrauding another person.”  It requires the impersonator to knowingly commit the crime and do so without the consent of the person they are imitating.  It also requires that the impersonation be credible.  Punishment for violation can include a year in jail and a suit brought by the victim for punitive damages.

First let’s consider a few hypotheticals, starting with the one that inspired the law, the MySpace case noted above.  Since the boy whose profile lured the victim into an online romance that was then cruelly terminated was a made-up person (the perpetrators found a photo of a suitably shirtless teen and built a personality around it), SB 1411 would not apply had it been the law in Missouri.  The boy was not an “actual person,” and, except perhaps to a thirteen-year-old with existing mental health problems, may not have been credible either.  (The determination of “credibility” under SB 1411 would presumably be based on the “reasonable person” standard.)  Likewise, law enforcement agents creating fake Craigslist ads to smoke out drug buyers, child molesters, or customers of sex workers would not be violating the law.

Also probably excluded from SB 1411 are those who use Craigslist to get back at exes or other people they are angry at by placing ads promising sex to anyone who stops by, and then giving the address of the person they are trying to get even with.  In most cases, these ads are not credible impersonations of the victim; they are meant to offend the victim, not to convince a reasonable third person that they really speak for her.  Likewise, a fake Facebook page for a teacher, on which the impersonator makes cruel or otherwise harmful statements about her students, would not be a credible impersonation.

The Twitter profiles being created to issue fake press releases purportedly on behalf of BP would also not be illegal under SB 1411.  First, BP is not an “actual person.”  Second, Twitter profiles such as BPGlobalPR are clearly parodies—they issue the statements their authors believe BP would make if it were telling the truth about its actions in relation to the Gulf spill.  (“We’re on a seafood diet- When we see food, we eat it! Unless that food is seafood from the Gulf. Yuck to that.”)   Again, not a credible impersonation.

You also do not commit the crime by confusing people inadvertently.  There are several people I am aware of online named Larry Downes, including a New Jersey state natural resources regulator, a radio station executive and conservative commentator, a cage fighter and a veterinarian who lives in a nearby community.  (The latter is a distant cousin.)  Facebook alone has 11 profiles with my name.  Only one of them is actually me, but the others are not knowingly impersonating me just because they use the same name, even if some third person might be confused to my detriment.

Likewise, the statute doesn’t reach those who help the perpetrator, intentionally or otherwise.  The “Internet Web sites” or providers of other electronic means aren’t themselves subject to prosecution or to civil cases brought by the victims of the impersonation.  So Craigslist, MySpace, Facebook, and Twitter aren’t liable here, nor are the ISPs of the perpetrators, even if made aware of the activity of their users and/or customers.

For one thing, a federal law, Section 230 of the Communications Decency Act, immunizes providers against that kind of liability under most circumstances.  Last week, Craigslist lost its bid to use Section 230 to preclude a California lawsuit brought by the victim of fake posts soliciting sex and offering to give away his possessions.  The victim informed Craigslist of the problem, and the company promised to take action to stop future posts but did not succeed.  Note that it lost its immunity only by promising to help, which, of course, sites won’t do in the future!  (See Eric Goldman’s analysis of the case.)

So there are important limitations (some added through recent amendments) to SB 1411 that reduce the possibility of its being applied to speech that is otherwise protected or immunized by federal law.  (In the BP example, the company might have a trademark case to bring.)  Most of these limits, however, seem to take any teeth out of the statute, and seem to exclude most of the behavior Sen. Simitian says he is concerned about.

Unintended Consequences

What’s left?  Imagine a case where, angry at you, I create a fake Facebook profile that purports to represent you.  I post material there that is not so outrageous that the impersonation is no longer credible, but which still has the intent of harming, intimidating, threatening or defrauding you.  Perhaps I report, pretending to be you, about all of my extravagant purchases (but not so extravagant that I am not credible), leading your friends to believe you are spending beyond your means.  You find out, and find my actions intimidating or threatening.

Perhaps I announce that you have defaulted on your mortgage and are being foreclosed, leading your creditors to seek security on your other debts.  Perhaps I threaten to continue posting stories of your sexual exploits, forcing you to pay me blackmail to save you embarrassment.

Would these cases be covered under SB 1411?  Perhaps, unless of course the claims I am making as you turn out to be true.  In the U.S., truth is a defense to defamation, so even if my intent is to “harm” you by revealing these facts, if they are facts then there is no action for defamation.  That I state the facts while pretending to be you would, under SB 1411, appear to turn a protected activity into a crime, which is perhaps not what the drafters intended and perhaps not something that would stand up in court.  (The truth-as-defense rule in defamation cases rests on First Amendment principles—you can’t be prosecuted for telling the truth.)

Of course, much of the other behavior I described above is already a crime in California—in particular, various forms of intimidation, harassment and, by definition, fraud.  The authors of SB 1411 believe the new law is needed to extend those crimes to cover the use of “Internet Web sites” and “other electronic means,” but there’s no reason to believe that the technology used is any bar to prosecutions under existing law.  (Indeed, the use of electronic communications to commit the acts would extend the possible criminal laws that apply, since electronic communications are generally considered interstate commerce and thus subject to federal as well as state laws.)

For the most part, then, SB 1411 covers very little new behavior, and little of the behavior its drafters thought needed to be criminalized.  For an impersonation to be damaging would, in most cases, mean that it was also not credible.  Pretending to be me and telling the truth could be harmful, but is probably a form of protected speech.  Pretending to be me in order to defraud a third party is already a crime: identity theft.

Which is not to say, pun intended, that the proposed law is harmless.  For in addition to categories of behavior already covered by existing law, SB 1411 makes it a crime to impersonate someone with the purpose of “harming” “another person.”  There is, not surprisingly, no definition given for what it means to have the purpose of “harming,” nor is it clear whether “another person” refers only to the person whose identity has been usurped, or includes some third party (perhaps a family member or friend of that person, perhaps their employer).

Having a purpose of “harming” “another person” is incredibly vague, and can cover a wide range of behaviors that wouldn’t, in offline contexts, be subject to criminal prosecution.  The only difference would be that the intended harm here would be operationalized through online channels, and would take the form of a credible impersonation of some actual person.

It is hard to see why those differences ought to result in a year in jail.  Consequently, an attempt to use the law to prosecute merely “harmful” behavior would be met with a strong constitutional objection.

That’s my read of the bill, in any case.  Since I posed this as an exam question, I’m offering extra credit for anyone who can come up with examples—there are none given by the California State Senate—of situations where the law would actually apply, that would not already be illegal, and that would not be subject to plausible constitutional challenges.

There’s Something About ECPA

I write in “The Laws of Disruption” of the risk of unintended consequences that regulators run in legislating emerging technologies.  Because the pace of change for these technologies is so much faster than it is for law, the likelihood of defining a legal problem and crafting a solution that will address it is very slim.  I give several examples in the book of regulatory actions that quickly become not just obsolete but, worse, wind up having the opposite result to what regulators intended.

An unfortunate example of that problem in the news quite a bit lately is the Electronic Communications Privacy Act or ECPA.   (My first published legal scholarship, in 1994, was an article about a provision of ECPA that allowed law enforcement officers to use evidence they came across by accident in the course of an otherwise lawful wiretap, see “Electronic Communications and the Plain View Exception:  More ‘Bad Physics.’”)

Passed in 1986, ECPA at the time was a model of smart lawmaking in response to changing technologies.  It updated the federal wiretap statute, known as Title III, to take into account the rise of cellular technologies and electronic messages–which didn’t exist when the original law was passed in 1968.

In essence, ECPA brought these new forms of communications under the legal controls of the wiretap law, meaning for example that police could not intercept cell phone transmissions without a warrant, just as under Title III they needed a warrant to intercept wireline calls.  Private interception was also made illegal.

Lost in the Clouds

A lot has happened since 1986, and unfortunately for the most part ECPA hasn’t kept up.  Most significant has been the explosion of new data sources of all varieties, and in particular the now billions (trillions?) of messages sent and received each day by individuals communicating through the Internet.  The potential evidence those messages contain for a variety of investigations—criminal, civil, terror-related—has made them an irresistible target for law enforcement as well as civil litigants.

In addition to the sheer volume of new data sources, the other significant change undermining ECPA’s assumptions has been the movement to cloud-based services, particularly for email.  In the early days of email (say, 1995), ISPs kept messages on their servers only until the user, through a client email program such as Eudora, downloaded the message to his or her personal computer.  Once downloaded, the message was immediately or soon after deleted from the server, if for no other reason than to save storage space.

Storage, however, has gotten cheap, and the potential uses of stored data for a variety of purposes have made it attractive for ISPs and other services (e.g., Google’s Gmail) to retain copies of messages and other user data on a permanent basis.

The drafters of ECPA had great foresight, but they couldn’t have imagined these changes.

Here come the unintended consequences.  Under the law, law enforcement agents hoping to get access to your emails as part of an investigation are required to obtain a warrant, just as they would need a warrant to search your home and seize your computer.

But for data stored on a third-party computer—an ISP or other cloud provider—the warrant requirement applies only to “unopened” messages, and only for 180 days after receipt.  Once the message is opened or 180 days have passed, stored data can be obtained without a warrant, under the much lower standard of a subpoena.
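The statutory rule just described is essentially a short decision procedure. As a rough sketch of that logic—purely illustrative, not legal advice, and deliberately ignoring the statute's many wrinkles and the courts' conflicting readings of it—it might look like this:

```python
from datetime import timedelta

def process_required(opened: bool, age: timedelta, stored_with_provider: bool) -> str:
    """Toy model of the warrant/subpoena rule described above.

    A simplified illustration of ECPA's stored-communications logic as
    summarized in the text; the real statute and case law draw many more
    distinctions, and courts disagree on several of them.
    """
    if not stored_with_provider:
        # Data kept only on your own equipment: full warrant protection.
        return "warrant"
    if not opened and age <= timedelta(days=180):
        # Unopened messages held by a provider for 180 days or less
        # still require a warrant.
        return "warrant"
    # Opened, or older than 180 days: the lower subpoena standard applies.
    return "subpoena"
```

So, for example, a two-hundred-day-old message sitting on a provider's server comes back "subpoena", while the very same message kept only on your home computer comes back "warrant"—which is the cloud-computing anomaly discussed below.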

In some sense, this means that as users move to cloud computing they are inadvertently and unknowingly waiving protections against law enforcement uses of their data. Keep your data only locally on equipment in your home or office, and the police need a warrant to look at or take it.  Leave it in the cloud somewhere, and they can get at it without much fuss at all.

This turn of events, the result not of any secret conspiracy so much as the random confluence of technological inventions since 1986, is almost certainly not what the drafters of ECPA had in mind.  It is more likely to be just the opposite.  For ECPA, like the wiretap law it amended, was intended to give greater protection to communications than what the Fourth Amendment to the U.S. Constitution would otherwise have provided.

A Very Brief History of the Fourth Amendment in Cyberspace

The Fourth Amendment, recall, protects citizens from “unreasonable searches and seizures” by the government.  (We are, it bears emphasizing, talking ONLY about government access here—employers, parents, friends and companies are not subject to the Fourth Amendment.)

Which is to say, the Fourth Amendment is the absolute floor of citizen protections from government.  Title III and ECPA were intended to raise that floor for telephone and later data communications to something that gave citizens more, not less, privacy.

At some point, indeed, technology may push the law below the standards of the Fourth Amendment, making it unconstitutional.  That’s been a concern all along, from the beginning of the wiretap statute itself in 1968.  The passage of Title III followed landmark Supreme Court decisions in the Katz and Berger cases, in which the Court reversed the 1928 Olmstead case, which allowed the police to intercept phone calls of a suspect without a warrant.

The Olmstead decision, Justice Harlan wrote in his concurrence in Katz, was “bad physics as well as bad law, for reasonable expectations of privacy may be defeated by electronic as well as physical invasion,” 389 U.S. at 362 (1967).

Harlan’s phrasing has proven prophetic.  In order to avoid the metaphysical problem of explaining how electronic interception could constitute a “search” or a “seizure” when no physical property of the subject is involved, the Court focused instead on the “reasonable” part of the Fourth Amendment.

Search and seizure, the Court has held over the last fifty years, is really about privacy, and a “reasonable” expectation of privacy for any information law enforcement agents want to gather requires a warrant.  What part of a wiretap is a search and what part a seizure are questions neatly elided (though perhaps too neatly as we’ll see) by the “reasonable expectation of privacy” standard.

The privacy standard has proven at least somewhat resilient to changing technologies.  But with mainstream adoption of revolutionary information technologies comes changing expectations of what is reasonably expected to be “private” information.  Indeed, Olmstead can be seen as a perfectly understandable decision in light of the fact that in 1928 nearly all telephones were connected through party lines, where no caller had any expectation of privacy.

But that also means there is no absolute baseline for Fourth Amendment challenges (usually by a criminal defendant) to evidence collected by the government.  Again, Title III and ECPA can and did set a higher bar than was required as a constitutional minimum, but even as those intentions have been reversed by technology it does not automatically follow that ECPA is now below what the Fourth Amendment requires.

Absent special protections citizens may have had from ECPA, the question under Fourth Amendment jurisprudence becomes:  Do users who keep email and other data archived with ISPs and other cloud providers have an expectation of privacy?  Is that expectation reasonable?

The Ugly Details

Not surprisingly, courts are increasingly asked to weigh in on those questions, and the results are also not surprisingly inconclusive.  (David Couillard at Ars Technica reviewed some of the case law in a recent article, “The Cloud and the Future of the Fourth Amendment.”)

Earlier this month, the Department of Justice abandoned an attempt to avoid a search warrant even for mail messages less than 180 days old in a case that involved Yahoo mail.  (See Declan McCullagh, “DOJ Abandons Warrantless Attempts to Read Yahoo E-mail.”)

Google, which came to Yahoo’s defense, has begun disclosing just how many requests for information about its users it receives from various government agencies.  (See Jessica Vascellaro, “Google Discloses Requests on Users.”)

It’s also worth noting that sometimes technology goes the other way—making it harder for law enforcement officials to collect evidence and conduct investigations.  Encryption is a good example here—stronger encryption protocols make it easier for criminals to hide activity from the police.

Indeed, law enforcement and privacy advocates are in some sense always engaged in a complicated dance.  As technology constantly changes the delicate balance between the sanctity of private activity and the need for effective law enforcement, lawmakers are regularly asked by one side or the other (or both) to change the law to bring it back into something that satisfies both groups.

The Digital Due Process Coalition

The cloud computing problem has inspired the creation of an interesting coalition aimed at returning ECPA to where its drafters intended to set the scales.  The group, called Digital Due Process, was launched in March and is calling for specific reforms of ECPA to take into account the reality of digital life in 2010.  (For those who want the legal details, the site includes an excellent analysis by my one-time boss Becky Burr, see “The Electronic Communications Privacy Act of 1986: Principles for Reform.”)

The Digital Due Process group is a remarkable coalition of organizations and corporations who might not otherwise be thought to agree on too many issues of technology policy.  It includes advocacy groups normally thought to be on the right or the left, including the ACLU, the Center for Democracy and Technology, the Progress and Freedom Foundation, the Electronic Frontier Foundation and the American Library Association.  Corporate members include Google, AT&T, Microsoft, eBay, and Intel.

One might think that, with such specific recommendations and such a wide coalition of support from across the ideological spectrum, ECPA reform would be a slam dunk.  But of course that would ignore one very powerful lobby not represented by Digital Due Process—the lobby of law enforcement agencies.

These agencies almost certainly recognize that the move to cloud computing has given them unintended and unprecedented access to information otherwise protected by the law, but naturally they are loath to let go of any advantage in the fight against crime.

Though there have been some calls in Congress for enacting the reforms called for by the coalition, the success of Digital Due Process is far from certain.  And even if the group does succeed, there’s no telling how long it will be before the scales become unbalanced yet again, or in whose favor, by the next set of disruptive information technologies to become mainstream.

As the saying often attributed to Thomas Jefferson goes, “The price of freedom is eternal vigilance.”

The Other Side of Privacy

After attending last week’s Federal Trade Commission online privacy roundtable, I struggled for several days to make some sense out of my notes and my own response to calls for new legislation to protect consumer privacy. The result was a 5,000-word article—too long for nearly anyone to read. More on that later.

Even as the issue of privacy continues to confound much brighter people than me, however, the related problem of securing the Internet has also been getting a great deal of attention. This is in part due to the widely-reported announcement from Google that its servers and the Gmail accounts of Chinese dissidents had been hacked, leading the company to threaten to leave China altogether if its government continues to censor search results.

Both John Markoff of the New York Times and Declan McCullagh of CBS Interactive have also been back on the beat, publishing some important stories on the state of American preparedness for cyberattacks (not well prepared, they conclude) and on the continued tension between privacy and law enforcement. See in particular Markoff’s stories on Jan. 26 and on Feb. 4th and McCullagh’s post on Feb. 3.

Markoff reports a consensus view that the U.S. does not have adequate defensive and deterrent capabilities to protect government and critical infrastructure from cyberattacks. Even worse, after years of effort and studies, the author of the most recent effort to craft a national strategy told him “We didn’t even come close.”

Markoff reports that Google has now asked the National Security Agency to investigate the attacks that led to its China announcement and the subsequent exchange of hostile diplomacy between the U.S. and China. Dennis C. Blair, the director of national intelligence, told Congress earlier this week that “Sensitive information is stolen daily from both government and private-sector networks….”

That finding seems to be buttressed by findings in a new study sponsored by McAfee. As Elinor Mills of CNET reported, 90% of survey respondents from critical infrastructure providers in 14 countries acknowledged that their enterprises had been the victim of some kind of malware. Over 50% had experienced denial of service attacks.

These attacks and the lack of adequate defenses are leading companies and law enforcement agencies to work more closely, if only after the fact. But privacy advocates, including the Electronic Frontier Foundation and the Electronic Privacy Information Center, are concerned about increasingly cozy relations between major Internet service providers and law enforcement agencies including the NSA.

They are likely to become apoplectic, however, when they read McCullagh’s post. He reports that a federal task force is about to release survey results that suggest law enforcement agencies would like an easier interface to request customer data from cell phone carriers and rules that would require Internet companies to retain user data “for up to five years.” The interface would replace the time-consuming and expensive paper warrant processes now necessary for investigators to gain access to customer records.

Privacy advocates and law enforcement agencies are simply arguing past each other, with Internet companies trapped in the middle. Unmentioned at the FTC hearing—largely because law enforcement is out of the scope of the agency’s jurisdiction—is the legal whipsaw that Internet companies are currently facing. On the one hand, privacy and consumer regulators in the U.S., Europe and elsewhere are demanding that information collectors, including communications providers, search engines and social networking sites, purge personally-identifiable user data from their servers within 12 or even 6 months.

At the same time, law enforcement agencies of the very same governments are asking the same providers to retain the very same data in the interest of criminal investigations. Frank Kardasz, who conducted the law enforcement survey, wrote in 2009 that ISPs who do not keep records long enough “are the unwitting facilitators of Internet crimes against children.” Kardasz wants laws that “mandate data preservation and reporting,” perhaps for as long as five years.

ISPs and other Internet companies are caught between a rock and a hard place. If they retain user data they are accused of violating the privacy interests of their consumers. If they purge it, they are accused of facilitating the worst kinds of crime. This privacy/security schizophrenia has led leading Internet companies to the unusual position of asking for new regulations, if only to make clear what it is governments want them to do.

The conflict becomes clear just by considering one lurid example (the favorite variety of privacy advocates on both sides) that was raised repeatedly at the FTC hearing last week. As long as service providers retain data, the audience was told, there is the potential for the perpetrators of domestic violence to piece together bits and pieces of that information to locate and continue to terrorize their victims. Complete anonymization and deletion, therefore, must be mandated.

But turn the same example around and you reach the opposite conclusion. While the victim of the crime is best protected by purging, capturing and prosecuting the perpetrator is easiest when all the information about his or her activities has been preserved. Permanent retention, therefore, must be mandated.

This paradox would be easily resolved, of course, if we knew in advance who was the victim and who was the perpetrator. But what to do in the real world?

For the most part, these and other sticky privacy-related problems are avoided by compartmentalizing the conversation—that is, by talking only about victims or only about perpetrators. As Homer Simpson once said, it’s easy to criticize, and fun too.

Unfortunately it doesn’t solve any problem, nor does it advance the discussion.

The Real Privacy Paradox

Two stories in the news today about online privacy suggest a paradox about user attitudes. But not the one everyone always talks about, in increasingly urgent terms.

One story, from CNET’s Don Reisinger, reports on a study conducted by an Australian security firm. The company created two phony Facebook profiles and tried to “friend” 100 random Facebook users. Between 41% and 46% of the users “blindly accepted” (to quote the firm) the requests, giving the fake profiles access to those users’ birth dates, email addresses, and other personal information.

“This is worrying,” the company’s blog reported, “because these details make an excellent starting point for scammers and social engineers.”

The other story, reported by the New York Times’ Stephanie Clifford, involves the raucous start today of a Federal Trade Commission conference on privacy and technology. The conference began with a full day of anxious hand-wringing. Quotes from two academics caught my eye. Penn’s Joseph Turow told a panel, “Generally speaking, [consumers] know very very little about what goes on online, under the screen, under the hood. The kinds of things they don’t know would surprise many people around here.”

Then there were even more ominous words from Columbia’s Alan Westin. Speaking of the bargain in which users of free services from Yahoo, Google, Facebook, Twitter and other Internet giants grant access to their information (and therefore to targeted advertising) as a precondition of use, Westin reported “that bargain is now long gone, and people are not willing to trade privacy for the freebies on the Internet.”

As I write in Law Two of The Laws of Disruption (“Personal Information”), researchers, advocacy groups and their colleagues in the mainstream media have for years been describing what they call “the privacy paradox.” User surveys consistently find that consumers are concerned (even “very concerned”) about their privacy online, and yet do nothing to protect it. They don’t read privacy policies, they don’t protect their information even when given the tools to do so, and they merrily click on targeted advertisements and even buy things that online merchants deduce they might want to buy.

Oh, the humanity.

I see no paradox here. Much of the research conducted about consumer concerns over privacy is of extremely poor quality—surveys or experiments conducted by interested parties (security companies) or legal scholars with little to no appreciation for the science of polling. Of course consumers are concerned about privacy and are uncomfortable with concepts like “behavioral” or “targeted” advertising. No one ever asks if they understand what those terms really mean, or if they’d be willing to give up free services to avoid them. And consumers when they’re being surveyed are very likely to think differently about their “attitudes” than when they are busily transacting and navigating their information pathways.

What, for example, is the basis for Prof. Westin’s claim that people are no longer willing to make the trade of information for service? The 350,000,000 users now reported by Facebook, perhaps, or the zillion Tweets a day?

And where does the Australian security firm get the idea that scammers are sophisticated enough to use birthdates and other personal data to fashion personalized scams? The completely unspecific Nigerian variations seem to work just fine, thank you. How’s this for a series of non sequiturs, again from the Australian experimenters: “10 years ago, getting access to this sort of detail would probably have taken a con-artist or an identity thief several weeks, and have required the on-the-spot services of a private investigator.”

Huh? To get someone’s email address, birthday, and the name of the city they lived in? Most of that data is freely accessible in public records. Yes, even in the innocent by-gone days of ten years ago.

The real paradox—and a dangerous one at that—is between the imminent privacy apocalypse preached with increased hysteria by a coalition of legal scholars, security companies, journalists and a small fringe of paranoid privacy crazies (not necessarily separate groups, by the way) and the reality of a much more modest set of problems which for most users present little to no problem at all. Which is to say, as CNET's Matt Asay put it, "It's not that we don't value our privacy. It's just that in many contexts, we value other things as much or more. We weigh the risks versus the benefits, and often the benefits trump the privacy risks."

That is not to say there is no privacy problem. It is a brave new world, where new applications create startling new ways of interacting, not all of them pleasant or instantly comfortable. Consider some recent examples:

    – Photo applications can now use pattern matching algorithms to take “tagged” faces from one set of photos and find matches across their very large dataset.
    – Facebook is in the process of settling a series of lawsuits over its ill-fated Beacon service, which reported back to Facebook actions taken by Facebook users elsewhere in the Infoverse for posting on their Facebook pages.
    – A recent survey found that a significant number of companies have not made compliance with the Payment Card Industry’s Data Security Standard a priority.
    – Loopt, which makes use of GPS data to tell cell phone users where their friends are, introduced a new service, Pulse, to provide real-time information about businesses and services based on a user’s physical location.
    – The EU recently adopted stricter rules requiring affirmative opt-in for cookies.

What these and other examples suggest is that, as so often happens, the capacity for information technology to connect the dots in interesting and potentially valuable (and potentially embarrassing) ways regularly outpaces our ability to adjust to the possibility. It is only after the fact that we can decide if, how, and when we want to take advantage of these tools.

There are real privacy issues to be considered, but they are far more subtle and far more ambiguous than the frenzied attendees of the FTC's conference would have us—or themselves, more likely—believe.

It's not, in other words, like we need to militarize consumers to reflect their privacy "attitudes" in their doggedly contrary online behavior. Rather, we need to study the behavior, as only a few researchers (notably UC Riverside marketing professors Donna Hoffman and Tom Novak) actually bother to do. It is, after all, much easier to design self-congratulatory surveys and pontificate about abstract privacy theory than it is to study consumer behavior at large scale. (More fun, too.)

Until we can begin to talk sanely and sensibly about the costs and benefits of information generation, collection, and use, regulators are well-advised to do very little by way of remedies for the wrong set of problems. (So far, the FTC and other U.S. agencies have, thankfully, done very little privacy legislating and rulemaking.) Businesses would be smart to adopt information security practices that should have been standard a generation ago, and to educate their customers about their commitment to doing so.

As for consumers—well, consumers will do what they always do: vote with their wallets.

And please, pay no attention to the frantic man behind the screen. Even if he insists on giving you his name, email address, and, heaven forbid, his birthday.

Identity Theft: Not Dead Yet

Julia Angwin's column in The Wall Street Journal argues that identity theft is nothing but a "fear campaign."

Not exactly.

I also have some strong words about the overuse and abuse of the term “identity theft” in The Laws of Disruption, and have written elsewhere in this blog on the subject. But I don’t think the problem is, as Angwin writes, merely a linguistic construct “designed to get us to buy expensive services that we don’t need.”

Let's start with where I agree. By and large, "identity theft" is a term that is being kept alive by organizations with a vested interest in making the problem sound as severe and dangerous as possible. Angwin mentions credit bureaus and companies such as LifeLock that sell insurance against the problem. I would add to that list traditional insurers who are also selling identity theft policies, software companies such as McAfee and Norton that sell anti-malware products and services, and the U.S. Federal Trade Commission, which reports every information theft as identity theft even when it is only credit card companies who are at risk. Each of these groups has its own reasons for keeping the problem at the forefront of consumer fears about Internet commerce.

Angwin is also right to point out that the true problem—which for the most part is unauthorized purchases—is not a problem for consumers. Credit card companies and banks, by law, bear nearly all of the actual losses. (Of course the losses—still some $48 billion in 2008—are ultimately paid by consumers in the form of higher interest rates and other card and merchant fees.) Most consumers pay nothing when their card is stolen and used by thieves. Even when new accounts are opened in your name (the truer example of identity theft), the average loss to consumers is less than $600. The scale of identity theft in both frequency and cost has been steadily declining since the FTC began keeping records in 1999.

But I wouldn’t go as far as Angwin in saying there’s no problem here. Because while it’s true that consumers have no legal responsibility to pay for unauthorized charges on credit cards and bank withdrawals, many victims of Internet-related fraud do pay a significant price.

Once consumers stop the unauthorized charges and close the fraudulent accounts, many encounter a demonic maze of obstacles trying to clear the criminal activity from their credit reports, scores, and credit card accounts. And ignoring the errors is not an option. Keeping an accurate profile is essential for everything from applying for a mortgage to getting a job or apartment–basic life activities, in other words. Yet these financial records are in the hands of shadowy third parties–who charge, when they can, just to divulge what inaccurate information they have on file. Either by design or ineptitude, these organizations make correcting their own errors nearly impossible for consumers.

The victims of identity theft are victimized not so much by the information criminals as by the information managers.

Federal and state regulations are supposed to protect consumers from this kind of abuse, too, but enforcement is poor.

Accurate financial data is critical in the development of the information economy. We need more transparency in the operation of credit bureaus, agencies, credit card companies and others who have appointed themselves the guardians of consumer financial records. We need a Federal Trade Commission that is interested more in protecting consumers than in protecting the markets for consumer protection products that are in part unnecessary and in part insurance against the incompetence of the industry the FTC supposedly regulates.

As Mark Twain once said, “A lie can travel halfway around the world while the truth is putting on its shoes.”

We still need tools to give the truth a fighting chance.

FTC to Bloggers: Drop that Sample!

The Federal Trade Commission has announced plans to regulate the behavior of bloggers.  Unfortunately, not their terrible grammar, short attention spans or inexplicably short fuses.

Instead, the FTC announced updates to its 1980 policy regarding endorsements and testimonials, first developed to rein in the use of celebrity endorsers with no real connection to or experience with the products they claimed to use and adore.

The proposed changes require bloggers who recommend products or services to disclose when they have a “material connection” to the provider—that is, that they were paid to write positive reviews or given freebies to encourage them to do so.  (The FTC, of course, is limited to activities in the U.S.)

You might think bloggers would be flattered to be put in the same category as celebrities, but no.  The response has been universal outrage, as noted by Santa Clara University Law Professor Eric Goldman in his detailed analysis of the proposed changes. (The complete FTC report is available here, but it is 81 pages of mostly mush.)

The principal objection is that the changes, which take effect December 1st, continue to exempt journalists in traditional media but not those in what the agency quaintly refers to as "new media"—that is, those whose content appears online, whether in blogs, social networking, email, or other electronic communications.  While professional journalists can be trusted to speak truthfully about products even when they are provided sample or review copies, bloggers cannot.

L. Gordon Crovitz's column in today's Wall Street Journal nicely dismantles the faulty reasoning in the Commission's analysis.  Moreover, Eric Goldman's post (cited above) argues persuasively that the one example the FTC gives of a violation of the policy as applied to bloggers is directly at odds with Section 230 of the Communications Decency Act, which provides broad immunity to third parties for content posted by someone else through any Internet service.  So it may be that the proposed change is pre-empted by the broad and sensible provisions of Section 230, which creates a wide breathing space for interactive communications to develop. (The FTC makes no mention of Section 230 in its report.)

To me, in any event, this is a classic problem of the poor fit between traditional legal systems and rapidly-evolving new information technologies.  Legal change, as I write in The Laws of Disruption, relies heavily on the process of “reasoning by analogy.”  When confronted with new situations, lawmakers, regulators and judges will look for analogous situations elsewhere in the law and apply the rules that most closely match the new circumstances.

In times of radical transformation at the hands of disruptive technologies, however, reasoning by analogy is a terrible way to develop a body of law for new activities. Bloggers are not like journalists and they are not like celebrity endorsers.  They are like bloggers—a new form of communication, still very much in its early stages of development, that uses new technology to engage in a new kind of conversation.

No old rule, extended and mangled until it is unrecognizable, is likely to fit the new situation.  And rather than try to guess at a new rule, regulators should fight their natural tendencies and just wait.  For now, the Web has been developing a variety of self-correcting mechanisms and reputational metrics that may do an effective and efficient job of policing abuses of the trust between bloggers and their readers.  Sorry folks, but we may not need the FTC and its cumbersome enforcement mechanisms to save the day this time.

What’s more, the risk of applying ill-considered old-world regulations to new situations is that regulations (even if lightly or not at all enforced) will retard, skew, or otherwise chill the development of new ways of interacting at the heart of digital life.

That doesn’t seem to worry the FTC.   “[C]ommenters who expressed concerns about the future of these new media if the Guides were applied to them,” they say, “did not submit any evidence supporting their concerns.”

Let’s turn that objection around to the right direction.

The FTC did not submit any evidence of a problem that needs to be solved, or of their ability to solve it.