Embattled right-wing social media firm Parler infamously promises its users a laissez-faire approach to “free speech” on its service. As the company now tells Congress, however, Parler apparently does warn federal authorities when it discovers certain kinds of violent content on its platform—and users who flock to the site for its anything-goes attitude are mad.
Parler’s attorneys explained in a letter (PDF) to the House Oversight Committee that the company does have limits on what it finds acceptable and did take seriously some of the violent content posted to its platform ahead of the January 6 events at the US Capitol.
Parler “has acted to remove incitement and threats of violence from its platform and did so numerous times in the days before the unlawful rioting at the Capitol,” the letter explains. It goes on:
As Parler grew substantially in the latter half of 2020, the company took the extraordinary initiative to develop formal lines of communication with the Federal Bureau of Investigation (FBI) to facilitate proactive cooperation and referrals of violent threats and incitement to law enforcement. In fact, in the days and weeks leading up to January 6th, Parler referred violent content from its platform to the FBI for investigation over 50 times, and Parler even alerted law enforcement to specific threats of violence being planned at the Capitol.
“Today,” Parler adds, it “continues to work closely with law enforcement, and the company has also implemented enhanced processes and procedures with the assistance of artificial intelligence, computerized filters, and manual reviews to better screen and remove incitement from the platform.”
Parler alleges in the letter that it began to reach out to the FBI about “alarming content that included specific threats of organized violence at the US Capitol” as early as December 24, including a post from a user who explicitly called for an armed force of 150,000 to gather to “react to” what Congress did that day.
On January 2, Parler said, it likewise forwarded to the FBI a series of posts from a user writing that the planned event on January 6 “is not a rally and it’s no longer a protest. This is the final stand… I trust the American people will take back the USA with force and many are ready to die.”
Warnings without moderation?
Although Parler says it warned the FBI about threats made on its platform, it did little else about many of those threats and calls to violence before they boiled over into real-world harm.
Parler rapidly gained popularity leading up to and in the wake of the November 2020 US presidential election as Republican, conservative, and fringe far-right extremists spreading false claims of election fraud swarmed to the platform.
The company wrote the letter in response to an information request that committee chairwoman Rep. Carolyn Maloney (D-N.Y.) sent to Parler in the wake of the January 6 insurrection. Rhetoric spreading unchecked on Parler was heavily implicated in the formation of the mob, and hundreds of images and videos were posted to the service live from the scene, showing events as they unfolded.
The reaction against Parler was swift. The attack at the Capitol unfolded on a Wednesday afternoon, and by Friday, Google banned the app from its Google Play store. Apple followed suit a few hours later, booting Parler from the iOS App Store. Both companies cited Parler’s failures to moderate “harmful or dangerous content encouraging violence and illegal activity,” as Apple specifically wrote, in violation of the distributors’ terms.
By that Sunday, Amazon had suspended Parler’s AWS hosting service, taking the platform completely offline. Parler sued Amazon, arguing that the ban was designed to benefit Twitter, its competitor, and was “motivated by political animus.” Amazon in turn brought receipts, showing more than 100 times it had specifically warned Parler about violent threats that nobody on the platform seemed to be moderating or managing.
Former Parler CEO John Matze, who was abruptly fired from the company in February, also claims that he was dismissed in part because he wanted to add more moderation to the platform. Matze alleged in a lawsuit against Parler and its board that he proposed “that Parler bar any identifiable extremist groups,” including neo-Nazis, from the platform but “was met with dead silence.” His lawsuit also asserts that Parler has since been “hijacked to serve the personal political interests” of its principal investor, Rebekah Mercer.
Parler’s admission that it conveyed warnings to the FBI was reportedly met with severe displeasure from many of its users.
The company shared an article about its response to Congress on its official account, calling for “an investigation into big tech collusion.” It argued that larger social media companies, including Facebook and Twitter, did not face the same censure and deplatforming that Parler did in the wake of the January 6 riot, even though participants also used those services.
“I guess when a company says they are a free-speech platform I would not expect them to turn folks over to the corrupt FBI,” one user wrote. Another said, “So you are saying you ratted on a bunch of us.” Others pledged to bail on Parler for a proposed social media platform that former President Donald Trump allegedly plans to launch.
Amid the criticism, Parler tried to explain its position. “The First Amendment does not protect violence-inciting speech, nor the planning of violent acts,” the company wrote. “Such content violates Parler’s TOS. Any violent content shared with law enforcement was posted publicly and brought to our attention primarily via user reporting. And, as it is posted publicly, it can properly be referred to law enforcement by anyone. Parler remains steadfast in protecting your right to free speech.”
Apparently, users were not mollified. “Snitches get stitches or end up in ditches,” one user replied.