Elon Musk had a plan to buy Twitter and undo its content-moderation policies. On Tuesday, just a day after reaching his $44 billion deal to buy the company, Musk was already at work on his agenda. He tweeted that past moderation decisions by a top Twitter lawyer were “obviously incredibly inappropriate.” Later, he shared a meme mocking the lawyer, sparking a torrent of attacks from other Twitter users.
Musk’s personal critique was a rough reminder of what faces employees who create and enforce Twitter’s complex content-moderation policies. His vision for the company would take it right back to where it started, employees said, and force Twitter to relive the past decade.
Twitter executives who created the rules said they had once held views about online speech that were similar to Musk’s. They believed Twitter’s policies should be limited, mimicking local laws. But more than a decade of grappling with violence, harassment and election tampering changed their minds. Now many executives at Twitter and other social media companies view their content-moderation policies as essential safeguards to protect speech.
The question is whether Musk, too, will change his mind when confronted with the darkest corners of Twitter.
“You have said that you want more ‘free speech’ and less moderation on Twitter. What will this mean in practice?” Twitter employees wrote in an internal list of questions, seen by The New York Times, that they hoped to ask Musk.
Another question asked, “Some people interpret your arguments in defense of free speech as a desire to open the door back up for harassment. Is that true? And if not, do you have ideas for how to both increase free speech and keep the door closed on harassment?”
Musk has been unmoved by warnings that his plans are misguided. “The extreme antibody reaction from those who fear free speech says it all,” he tweeted Tuesday.
He went on to criticize the work of Vijaya Gadde and Jim Baker, two of Twitter’s top lawyers. Gadde has led Twitter’s policy teams for more than a decade, often handling complicated moderation decisions, including the decision to cut off Donald Trump near the end of his term as president. A former general counsel for the FBI, Baker joined Twitter in 2020.
Twitter CEO Parag Agrawal did not directly respond to the criticism, but in a tweet, he wrote, “Proud of our people who continue to do the work with focus and urgency despite the noise.”
Employees of Twitter and other social media companies said that Musk seemed to understand little about Twitter’s approach to content moderation and the problems that had led to its rules — or that he just did not care. Some of the suggestions he has made, including labeling automated accounts, were in place before Musk launched his bid.
“He’s basically buying the position of being a rule-maker and a speech arbiter,” said David Kaye, a law professor at the University of California, Irvine, who worked with the United Nations on speech issues. “That has been really fraught for everybody who’s been in that position.”
In its early years as a small startup, Twitter was governed by one philosophy: The tweets must flow. That meant Twitter did little to moderate the conversations on its platform.
Twitter’s founders took their cues from Blogger, a Google-owned publishing platform that several of them had helped build. They believed that any reprehensible content would be countered or drowned out by other users, said three employees who worked at Twitter during that time.
“There’s a certain amount of idealistic zeal that you have: ‘If people just embrace it as a platform of self-expression, amazing things will happen,'” said Jason Goldman, who was on Twitter’s founding team and served on its board of directors. “That mission is valuable, but it blinds you to think certain bad things that happen are bugs rather than equally weighted uses of the platform.”
The company typically removed content only if it contained spam or violated US laws forbidding child exploitation and other criminal acts.
In 2008, Twitter hired Del Harvey, its 25th employee and the first person it assigned the challenge of moderating content full time. The Arab Spring protests started in 2010, and Twitter became a megaphone for activists, reinforcing many employees’ belief that good speech would win out online. But Twitter’s power as a tool for harassment became clear in 2014 when it became the epicenter of Gamergate, a mass harassment campaign that flooded women in the video game industry with death and rape threats.
“If there are no rules against abuse and harassment, some people are at risk of being bullied into silence, and then you don’t get the benefit of their voice, their perspective, their free expression,” said Colin Crowell, Twitter’s former head of global public policy, who left the company in 2019.
In response, Twitter began expanding its policies. But new threats emerged. In September 2016, a Russian troll farm quietly created 2,700 fake Twitter profiles and used them to sow discord about the upcoming presidential election between Trump and Hillary Rodham Clinton.
The profiles went undiscovered for months, while complaints about harassment continued. In 2017, Jack Dorsey, CEO at the time, declared that policy enforcement would become the company’s top priority. Later that year, women boycotted Twitter during the #MeToo movement, and Dorsey acknowledged the company was “still not doing enough.”
He announced a list of content that the company would no longer tolerate: nude images shared without the consent of the person pictured, hate symbols and tweets that glorified violence.
In 2018, Twitter banned several accounts linked to the hack-and-leak operation that exposed Clinton’s campaign emails, and it began suspending right-wing figures such as Alex Jones from its service because they repeatedly violated policies.
The next year, Twitter rolled out new policies that were intended to prevent the spread of misinformation in future elections, banning tweets that could dissuade people from voting or mislead them about how to do so. Dorsey banned all forms of political advertising but often left difficult moderation decisions to Gadde.
Twitter also developed a strategy that would allow it to leave more tweets up: Rather than remove them, it added labels to tweets that contained misinformation about elections and limited their ability to spread quickly across the platform.
In preparation for the 2020 US presidential election, Twitter banned manipulated videos known as “deepfakes” and forbade users to share material obtained through hacking campaigns.
That policy was tested when the New York Post published an article containing emails purportedly obtained from the laptop of President Joe Biden’s son Hunter. Fearing that the materials came from a hack-and-leak operation, Twitter blocked the article from being shared on its platform.
Dorsey publicly disagreed with the decision. Days later, Gadde announced that the policy had been changed and that Twitter would allow the Post article to appear in tweets.
The episode has become a linchpin in conservative critiques of Twitter and was echoed in Musk’s critique of Gadde.
Musk said he wanted to return Twitter to its early days, when only illegal content was removed. “I am against censorship that goes far beyond the law,” Musk tweeted Tuesday.
Musk’s plans could also face legal problems in Europe. On Saturday, European policymakers reached an agreement on landmark legislation called the Digital Services Act, which requires social media platforms such as Twitter to more aggressively police their services for hate speech, misinformation and illicit content.
The new law will require Twitter and other social media companies with more than 45 million users in the European Union to conduct annual risk assessments about the spread of harmful content on their platforms and outline plans to combat the problem. If they are not seen as doing enough, the companies can be fined up to 6% of their global revenue or even be banned from the EU for repeat offenses.
Inside Twitter, frustrations have mounted over Musk’s moderation plans, and some employees have wondered if he would really halt their work during such a critical moment, when they are set to begin moderating tweets about elections in Brazil and another national election in the United States.
This article originally appeared in The New York Times.