In the wake of the death of a 17-year-old girl, Facebook has committed to improving its safety protocols. Many wanted the social networking site to add a “panic button” to flag profiles of suspected pedophiles. Facebook is instead expanding its current reporting system.
Richard Allan, director of policy for Facebook Europe, reaffirmed the company’s focus on protecting its users, but said the idea of a panic button was unworkable. Commenting on the reporting system, Allan said, “The system effectively handles all manner of potential abuse we see on the site, ranging from common minor rule-breaking, such as embarrassing pictures, to the extremely rare serious matters that are quickly escalated to law enforcement.”
Facebook has managed to avoid looking out of touch here, but is this enough? Some groups are still pushing for more aggressive tools to protect users. Facebook has not completely ruled out a panic button, but says more consideration is needed.
The report (PDF) reveals that 95% of comments that appear on blogs, chat rooms and online forums fall into two broad categories: spam and malicious content. Cyber scoundrels now seem more focused than ever on targeting Web 2.0 websites with user-generated content. Many of the most frequented internet properties are sites that allow user-generated content. And 61% of the top 100 sites either host malicious content or link to it, according to the report.
Spam and malicious content seem to go hand in hand: Websense Security Labs found that 85.6% of spam emails in circulation during the first half of 2009 contained links to malicious sites.