In November 2009, I quit Facebook. I had had an account for about a year, but with my 30th birthday and a cross-country move both looming, I felt it would be a symbolic way to shed my youth, a gesture that signified “I’m grown up now.”
I didn’t miss it — I didn’t think much of it at all, in fact — until I took this job writing about cybersecurity.
After six months of reporting on Facebook scams and the privacy infringements its users face daily, I came to a conclusion:
You should quit Facebook. And you should do it now.
To begin with, Facebook’s business model guarantees that its 600 million users will stay on shaky privacy ground.
Let’s break down the numbers: Most of Facebook’s income — estimated at $2 billion for 2010 — comes from advertising.
Facebook netted $1.21 billion in U.S. ad revenue in 2010, according to a Jan. 17 article in AdAge. That’s 4.7 percent of the nearly $26 billion spent on online advertising in the U.S. overall.
And that’s just in the United States. Worldwide, Facebook's 2010 ad revenue shoots to $1.86 billion, and is expected to reach nearly double that by 2012.
As in any online business, more advertisers means more money. And the more member data Facebook is willing to hand over to advertisers, the more they will want to buy ads. It’s an advertiser’s dream, and a security nightmare.
“The very nature of Facebook is premised on the sale of individual information,” said Andrew Keen, author of "The Cult of the Amateur," in a Jan. 20 presentation at the Digital Privacy Forum in New York called “Digital Vertigo: An Anti-Social Manifesto.” “What we need to understand is that nothing is free.”
On Friday, Jan. 14, Facebook announced that its third-party app developers would be given access to users’ home addresses and mobile-phone numbers. The following Monday (Jan. 17), amid widespread condemnation from security and privacy advocates, the company backpedaled, announcing that it had decided to “temporarily” disable the feature.
“Over the weekend, we got some useful feedback that we could make people more clearly aware of when they are granting access to this data,” wrote Facebook Director of Developer Relations Douglas Purdy on the company's Developer Blog. “We agree, and we are making changes to help ensure you only share this information when you intend to do so. We’ll be working to launch these updates as soon as possible, and will be temporarily disabling this feature until those changes are ready. We look forward to re-enabling this improved feature in the next few weeks.”
Free-range rogue apps
That's a privacy issue. Even more alarming are Facebook's security problems: survey scams, bogus antivirus software, phishing attacks, shortened URLs leading to malicious websites and even flat-out malware like Trojans.
For example, a rogue app moonwalked through Facebook in November claiming that Michael Jackson had faked his own death. Clicking on the link sent users to a survey page that attempted to steal their personal information.
Dozens of similar scams have popped up in the past few months, luring in victims with "scandalous" news about Miley Cyrus, Justin Bieber, Harry Potter, Tupac Shakur and Suge Knight, or promises of free Jet Blue flights, free iPhones, a Christmas tree app, videos of a nearly naked girl and videos that put you to sleep.
This situation could be easily remedied, say security experts, if only Facebook did what Apple already does: screen its app developers and vet all apps before they're made available to millions of eager consumers.
Graham Cluley, senior technology consultant for the British security firm Sophos, covers Facebook’s foibles on the company’s Naked Security blog. He said that as long as Facebook wants to continue turning huge profits, screening for rogue apps will not happen.
“I’m not convinced it’s in their DNA and mindset to police such a thing,” Cluley told SecurityNewsDaily. “They’re very much a ‘make mistakes, ask for apology later’ kind of company.”
Even a volunteer program, in which app developers go through an approval process and users can choose apps that have an official seal of approval, would be an improvement, Cluley said.
The company's current approach to policing apps is reactive: Developers release their apps, and Facebook cleans up after them if and when users discover rogues. It treats apps as just another form of user-generated content, like photos or event postings.
Sophos’ 2011 Security Threat Report sums up the situation.
“Facebook founders and operators insist that keeping users safe from spam and scams is a top priority, and they use large teams of security experts to remove suspect applications as soon as they’re detected or pointed out by users," the report says. "Yet, the problem continues to grow as the site’s growing user base makes it an ever richer target for the bad guys.”
“The scale of malicious activity on Facebook appears to be out of control,” it adds.
In an e-mail to SecurityNewsDaily, Facebook's public-relations firm said that the company forces app developers to follow "a rigorous set of guidelines and enforcement processes."
So what exactly are those processes?
"Every developer has to verify his or her Facebook account to create new applications," the e-mail read. "This is the same process that users go through when they want to do things like upload large videos."
Facebook does have some commendable security features. On Jan. 26, it announced a new one that lets members access the site over an encrypted "https" connection, the same kind of protection banks use for online transactions. It recently added others, such as one-time passwords for use in public places and the ability to log out remotely.
Its "social authentication" feature, introduced last fall, asks you to name people who appear in your friends' photos and is a brilliant improvement on the "captcha" text-recognition system used by many websites to prevent intrusion by software "bots."
While these features improve security for people accessing Facebook over public Wi-Fi networks in cafes, libraries or airports, none addresses the problem of rogue apps.
It's Facebook's world, and we just live in it
But who are we, the users (former and current) and the security professionals, to make decisions on how Facebook should conduct itself? Facebook has grown so large in the social-networking sphere that it’s allowed to set its own rules — and force its users to live by them.
“Facebook’s incredibly rapid development and growth has much to do with this process, and these kinds of ‘adjustments’ are a sign of a company continually pushing its own business forward,” Kiss wrote in a Jan. 18 article. “The tension arises where that business overlaps with our sense of what is public and what is private — an area where Facebook is on the front line, redefining what privacy means to us.”
As I write about the daily scams and tricks contained in rogue Facebook apps — the girl falling into the fountain, the Miley Cyrus sex tape — and the fact that Facebook is the top source for malware infections (according to Panda Security), I often wonder why people don’t just quit Facebook and avoid this minefield, this treasure trove of information that can be illegally obtained.
It turns out it’s not that easy.
“I’ve encountered plenty of Facebook users who have had a bittersweet experience of rogue apps, stolen accounts, being spammed, but still regularly log in to the site,” Cluley told SecurityNewsDaily. “My guess is that they feel they have to be on Facebook to stay in touch with their friends — even if they don’t always feel comfortable with it. Facebook has users hooked.”
That’s exactly how Cheryl Penaskovic, a sales manager for a Boston-based nonprofit, feels. A Facebook user since 2004, Penaskovic has her privacy settings set “as high as I can,” yet in recent months she has received a slew of spam messages and rogue friend requests.
“I’ve had this account with no spam for ages and now I get about 90 spam e-mails per week,” she told SecurityNewsDaily. “Other than the spam I get in my personal e-mail, I have recently gotten friend requests from people I don’t know, who have some odd foreign name, no mutual friends and barely any friends in general.”
She added, “I believe my security settings are about as high as they can go at this point and the weird things haven’t made me quit Facebook yet. I guess I’m hooked.”
The only thing that would make her quit, she said, would be if Facebook posed a threat to her personal safety.
Such a scenario isn’t so far-fetched. For example, George Bronk of California faces six years in state prison for using personal information posted on women’s Facebook profiles to hijack their e-mail accounts, steal nude pictures of them and blackmail them.
I got away from Facebook before anything bad happened to me. I went through the process of deactivating my account, and then not logging in — from any portal, be it a computer, a smartphone or an embedded "Like" button on another website — for two weeks to make sure the account was truly off the grid.
At this point, there’s nothing that could persuade me to sign back up.