This week, women are coming forward to share stories of sexual assault as allegations swirl around GOP nominee Donald Trump. Though online platforms can be empowering, harassment remains a persistent problem on Twitter, Facebook, and even the comment sections of The New York Times.
A number of media organizations, including NPR, have eliminated their comment sections entirely, saying that the sheer volume of comments and aggressive speech made these forums too difficult for human moderators to manage.
But Google is trying to change things. Its new subsidiary, Jigsaw, has begun piloting a set of tools called Conversation AI, designed to use machine learning to spot and moderate hate speech online more accurately and efficiently than humans ever could. Jigsaw has already partnered with The New York Times to moderate its comment section.
Are we entering a new age of machine moderation? Whitney Phillips, author of "This is Why We Can't Have Nice Things: Mapping the Relationship Between Online Trolling and Mainstream Culture," weighs in today on The Takeaway.
Correction: In the audio portion of this interview, The Takeaway incorrectly states that online users use the term "Skittles" to covertly refer to gay men. Users use the term to refer to Muslims and/or Syrian refugees.