What to Do About Do Not Track
Defenders of Facebook and other social networking sites note that there is a good reason to wall off their users’ personal information: privacy. By granting me control over who sees what information I post, I am free to post things about myself that I don’t want everyone to know, especially identity thieves.
Several colleagues called me about that blog post, asking me to elaborate on this problem. In it, I was responding to a critique by Tim Berners-Lee that Facebook is not open enough. But some people still believe it is too open. When it comes to protecting its users’ privacy, Facebook has a checkered past. The current policy is the result of correcting several missteps, and it is still far from perfect. For example, I don’t own the data I post on Facebook, even though it’s all about me and my friends and family.
Facebook is just a public example of what every company faces with user data. It is especially tricky in light of several recent legal actions by governments around the globe trying to protect their citizens from cyber criminals. For example, the United States has several laws that constrain the way companies use and distribute their users’ personal information, including HIPAA, the Patriot Act, and the Gramm-Leach-Bliley Act. A set of thriving consulting practices has sprung up around helping companies comply with these regulations.
The problem is, as well-intentioned as these laws may be, they have unintended consequences. The biggest is how they constrain commerce on the web. If the cost of asking users to register for events or fill out contact information in a call-back form is greater than the potential benefit of attracting those users to a business, companies will be averse to doing business through digital channels. I don’t think we’re quite there yet. But as precedent grows around these regulations, the costs will begin to constrain digital commerce.
Do Not Track
The latest challenge is the FTC’s proposed “Do Not Track” legislation, which is similar to the Do Not Call legislation. The proposed legislation would give consumers the option to tell online advertisers not to drop a cookie when an ad loads on a site, just as they can tell telemarketers to put them on a no-call list.
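In browser terms, one proposed way to carry this opt-out signal is a “DNT: 1” request header that sites and ad servers could honor before dropping a tracking cookie. The sketch below is illustrative only: the function name and cookie values are my own assumptions, not part of any proposal, but it shows the basic shape of honoring such a signal server-side.

```python
# Sketch (assumed names): honor a Do Not Track opt-out signal before
# setting a tracking cookie. The "DNT: 1" request header is one proposed
# opt-out mechanism; the cookie names here are hypothetical.

def build_set_cookie_headers(request_headers):
    """Return Set-Cookie headers for a page load, skipping the
    third-party tracking cookie when the user has opted out."""
    headers = []
    # A functional session cookie the site needs is always set.
    headers.append("Set-Cookie: session=abc123; HttpOnly")
    # Only drop the tracking cookie if no opt-out signal is present.
    if request_headers.get("DNT") != "1":
        headers.append("Set-Cookie: tracker=uid-42; Domain=.ads.example")
    return headers
```

With the opt-out header present, only the functional session cookie is returned; without it, the tracking cookie is set as well.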
The proposal has some merit. Many online advertisers push the boundaries, dropping tracking cookies that capture more data than they need to better target ads. In themselves, tracking cookies are usually benign. But if a consumer’s machine is infected with spyware, this kind of data can easily feed malware and be used in nefarious ways. This is why programs such as Spyware Doctor remove tracking cookies and other adware from subscribers’ machines.
The problem is, like other legislation before it, the Do Not Track proposal will have lots of unintended consequences. For one thing, it will inhibit advertisers from targeting ads to consumers. As bad as malware is, being fed a barrage of irrelevant ads (or worse) can be more than just annoying. Many readers will recall the deluge of direct mail that came after Do Not Call was implemented.
Is there a way to avoid overloading the web with regulation? Pundits suggest that the web works best when it is self-regulated. Sure, you will have a certain black-hat element. But the white-hat vigilantes on the web usually take care of them. Dealing with those who merely push the boundaries with gray-hat tactics is a small price you pay for free and open information. Craig McDonald from Covario expresses this opinion very well in his post “FTC: 1 Cookie Monster: 0”:
My view: this type of regulation is dumb. It creates inefficiencies in the market. Advertisers that abuse their use of personal information will be punished by the market. Same way telemarketers are.
The other side of this argument is that the punishment doesn’t come until after a lot of people have been harmed. Anybody who has ever suffered a major spyware attack, like those allegedly perpetrated by the Russian mob, will say that we need baseline legislation that protects consumers from these black-hat tactics. As a victim of this kind of malware, I have some sympathy for their views.
I’m perfectly happy to let the market decide how to deal with gray-hat behavioral targeting. But I don’t see how giving me the choice to opt out of behavioral targeting on a case-by-case basis is a bad thing. When Facebook serves misleading or offensive ads to my profile page, I have the option of deleting them from the page and indicating why. This only helps Facebook better target ads to me, which helps it sell more ads. Rather than inhibiting ad sales, being able to opt out of some ads enables better, more targeted ads. Perhaps technology like what appears in Figure 1 can help ease the FTC’s concerns while promoting more relevant online advertising.
Figure 1 The simple survey Facebook serves users when they mark the ad for deletion from their profile pages.