In 2010, Twitter introduced a new default profile picture for users who had not uploaded a photo of their own to the social media platform. The avatar: an egg on a colored background, a reference to the site's familiar bird symbol. But over time, internet trolls and other online abusers began to adopt the anonymity of the default egg in order to harass other users.
Now, almost seven years later, Twitter has changed the default picture again. The new default avatar will be a minimalist, gray outline of a genderless human head and torso, which the site hopes will encourage new users to upload their own photo – and distance themselves from the egg's accumulated baggage.
The change comes at a time of increased pressure on social media sites to police online abuse and hate messages on their platforms. Many people now believe that popular sites like Twitter and Facebook have a responsibility to protect their users in a culture that is increasingly dominated by online interactions.
"Online harassers opt for an egg or other symbols as their icon because they can hide," says Janet Johnson, clinical assistant professor for the School of Arts and Humanities at the University of Texas at Dallas.
"It's much different if someone insults you to your face," she tells The Christian Science Monitor via email. "The person who insulted you might find a fist in their face. A screen protects the troll."
Because of this anonymity, Dr. Johnson says, conventional methods for dealing with online harassment can often fall short.
Last year, African-American actress and comedian Leslie Jones became one of many celebrities to temporarily abandon her Twitter account after a wave of misogynistic and racist messages was sent to her page, many of them hidden behind a default egg.
"We've noticed patterns of behavior with accounts that are created only to harass others – often they don't take the time to personalize their accounts," said Twitter in a Friday blog post. "This has created an association between the default egg profile photo and negative behavior, which isn't fair to people who are still new to Twitter and haven't yet personalized their profile photo."
Trolling of the magnitude directed at Ms. Jones has become an increasingly familiar scenario for internet users over the past few years. But this kind of online hate is nothing new. In fact, it's been around for nearly as long as the internet itself, says Brian Reich, a communications strategist and business media author.
"The public pressure on tech platforms to address bad behavior happens from time to time – usually in response to a high-profile event, or series of events, that attract more widespread media and other attention to the less desirable parts of the internet world," Mr. Reich tells the Monitor in an email. "The 2016 election cycle and the first few months of the Trump administration in particular seem to have emboldened a lot of trolls online, or at least made the actions of trolls much more apparent to a larger segment of the population."
With the election came a spike of political symbolism among trolls – notably, the popular "Pepe the Frog" meme came to be used by far-right hate groups so extensively during the campaign that the Anti-Defamation League declared it an official hate symbol in September. And the Twitter egg, while not connected as specifically with hate speech as Pepe, carries many of the same associations as other pieces of troll iconography.
"The symbols and iconography provide anonymity ... and they are often created by groups with some basic sense of shared ideology, so they provide that sense of us vs. them community that many trolls seem to desire," says Reich.
Not everyone is pleased with the removal of the egg, even considering its negative associations. Some users have expressed dissatisfaction for aesthetic reasons, while others say the change will do little to stem the tide of online hate. But even if removing the egg changes things temporarily, says Reich, it is all but certain that trolls will simply adopt another symbol in its place.
"Most of the ways that tech companies deal with harassment creates almost a game of whack-a-mole, where you keep trying to shut down the activities only to find them popping up somewhere else again soon enough," he adds.
While Twitter has been criticized for a lack of definitive action against online abuse, this is far from the first time the site has taken on trolls and hate speech. In February, for example, the company announced a more aggressive program to permanently ban online abusers and hide offensive comments from comment threads and searches. But with hundreds of millions of users, it is difficult – if not impossible – to find a catchall solution to the problem, especially for platforms that subscribe to basic free speech principles.
"To fully address online harassment you have to deal with the underlying behaviors that drive these actions, both online and offline," says Reich. "Until we accept that bullying and harassment happen online and offline, and stop looking just to Twitter and other platforms to solve these problems, we'll at best be able to manage online harassment, keep it somewhat contained online, but we won't make a real dent in how much is happening."