Friday, May 15, 2009

Facebook "censoring" revisited, and some musings on configurability

This is a follow-up of sorts to my recent post "#fixyourhead The existing customer is often wrong," but I'm looking at it from a different angle. I was checking my analytics on Thursday evening and noticed that this Empoprise-BI business blog was getting some hits because of Google searches on phrases that included the words "Facebook" and "censor." You'll recall that I last addressed this issue in a March 16, 2009 post entitled "Facebook censoring mentions of Twitter? I doubt it." This is what I said:

On Sunday, Louis Gray shared a post from Ari Herzog that referenced a tweet by Craig Thomler.

Thomler and Herzog suspected that Facebook was censoring mentions of Twitter. In fact, Herzog conducted a test in which he issued a series of status updates, most of which mentioned Twitter.

When he was done, none of the status updates that mentioned Twitter showed up, so Herzog wrote his post and called it "Proof That Facebook is Censoring You."...

I made five rapid-fire status updates in Facebook, only three of which have made it to FriendFeed as I type this. One of those three did mention Twitter. My guess is that Facebook just gets overwhelmed by frequent status updates, regardless of content, and loses the middle updates.


But obviously there's some current (mid-May) interest in the whole Facebook censoring topic, so perhaps this is a good time to revisit it - not because of mentions of Twitter, but because of what David Sarno calls a "heavy-handed" way of protecting Facebook users from phishing attempts.

I had a ... reaction this morning when Facebook reached its fingers into my inbox and deleted two messages without asking me. Granted, they were both phishing messages -- malicious spam, essentially -- from today's attack. For many unsuspecting people, the mere presence of these messages would constitute a security threat, so Facebook's eradicate-first-ask-questions-last approach is understandable. Nuke the virus before it causes more damage. But still, those messages had already been in my mailbox for hours. I had opened and examined them. They were my mail.

More here in the L.A. Times blog section.

Sarno is correct in noting that different Facebook users may have different preferences regarding the way in which the service treats harmful material. Some figure that they're smart enough to know not to open the messages, but may want to retain them anyway (perhaps I need material to write a Nigerian scam parody). Some would rather that Facebook protect them from such items, or would prefer for other reasons not to be bothered with them.

In my day job in marketing I often run into conflicting customer requirements, in which one customer wants our software to do item A, while another customer wants our software to do the exact opposite. Using my fount of marketing wisdom, I turn to the appropriate engineers and say, "Make it configurable." (If marketing can say it in three words, isn't it really easy for engineering to implement? But I digress.)

You could apply that same principle to Facebook's treatment of spam e-mails, or to Twitter's display of @replies from people you don't know. (Incidentally, Jake Kuramoto has addressed the latter.) However, in the spam e-mail case, Facebook has decided not to make this configurable; they know what's best, so they'll take care of it for you.
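To make "configurable" a little more concrete, here is a rough sketch of what such a preference might look like if Facebook had gone the other way. This is purely illustrative - none of these names (PhishingPolicy, UserSettings, handle_suspected_phishing) correspond to anything Facebook actually exposes - but it shows how a single per-user setting could let one person keep the eradicate-first behavior while another keeps suspicious mail in the inbox with a warning.

# Illustrative sketch only - not Facebook's code; all names are hypothetical.
from dataclasses import dataclass
from enum import Enum


class PhishingPolicy(Enum):
    DELETE = "delete"          # remove the message without asking (today's behavior)
    QUARANTINE = "quarantine"  # move it somewhere the user can still inspect it
    FLAG_ONLY = "flag_only"    # leave it in the inbox, but mark it as suspicious


@dataclass
class UserSettings:
    phishing_policy: PhishingPolicy = PhishingPolicy.QUARANTINE


def handle_suspected_phishing(message: dict, settings: UserSettings,
                              inbox: list, quarantine: list) -> None:
    """Apply the user's chosen policy to a message flagged as phishing."""
    if settings.phishing_policy is PhishingPolicy.DELETE:
        inbox.remove(message)
    elif settings.phishing_policy is PhishingPolicy.QUARANTINE:
        inbox.remove(message)
        quarantine.append(message)
    else:  # FLAG_ONLY
        message["warning"] = "This message appears to be a phishing attempt."


# Example: a user who prefers to keep suspicious mail, with a warning attached.
inbox = [{"subject": "Verify your account", "body": "..."}]
quarantine = []
handle_suspected_phishing(inbox[0], UserSettings(PhishingPolicy.FLAG_ONLY), inbox, quarantine)

Even this toy version hints at the problem: three options means three behaviors to document, test, and explain to users.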

I hope that you figured out that when I previously said "isn't it really easy for engineering to implement," I wasn't being serious. To add configurability options to an application, you need to develop a user interface that gives the customer configurability without being completely overwhelming. Too much configurability can be a very bad thing.

So whether you're talking about Facebook, Twitter, Connect, or the application for which I am responsible, you are faced with a challenge - which items do you want to make configurable (i.e. under user control), and which items do you want to let the developer decide (i.e. no user control)? Even the hugest tech-weenies wouldn't want a completely configurable Twitter - for example, imagine what would happen if each user could set the maximum length of messages. Some would stick with 140 characters, some with 80, some with 6,000. The result would be a mess.

However, I do have one more comment about Twitter, a case in which something that was previously configurable is no longer configurable. While a user may not cry too much about missing functionality, a user will often scream about functionality that is taken away. Even if the user never used the functionality in question, the user could very well be incensed about losing something that he/she formerly had. Steven Hodson, in his quiet and reserved way, has expressed his thoughts on this and other #fixreplies issues in the Inquisitr.

So, while one can debate whether a particular type of functionality should be added, you have to have a REALLY good reason to DELETE existing functionality.

Do you have any war stories in this regard?

P.S. You may also want to see Louis Gray's thoughts on the matter. Here's a brief excerpt:

The [#fixreplies] response, which loudly came from all corners, mirrored that of previous blowups, which have also included Facebook and Digg as victims - the first around its terms of service and Beacon, and the second, around its blocking of illegal series of numbers that could unlock DVD region codes. Even Google Reader faced a backlash last year from users who expected a different interpretation of what friends were and who could see what....

Every single case dealt with a Web 2.0 service driven largely by user generated or selected content, where the mob was reacting to changes handed down unilaterally from a seeming all-knowing company, without first communicating potential changes, or accurately foreseeing downstream effects.


Read the rest of Gray's thoughts here.