By Andrea Grant
Democracy, says Jonas Kaiser, requires a few basic conditions to thrive: that citizens can constructively address the issues they face together, and that there is room for marginalized communities to be heard. Most of all, he adds, “We have to agree on certain basic facts.”
Disinformation throws all those things into doubt, and puts democracy at risk. That’s why Kaiser, an assistant professor of communication, journalism & media at Suffolk (as well as a faculty associate at Harvard’s Berkman Klein Center for Internet & Society), has made disinformation a key part of his academic research, studying how it proliferates online and what that means for society. Now, he offers perspective on why toxic content thrives online—and what can be done about it.
How did you first become interested in studying social media?
Going to college in Germany in the aughts, I could tell things were changing. The first study I did on climate change denialism took a very classical communications and journalism approach: You look at mass media, and then you go from there. But at the same time, I also saw that there was so much conversation happening online, and not in the ways that scholars were typically studying. It wasn’t what politicians were saying that caught my attention, but rather what people were saying on Twitter and Facebook. It was what people wrote in online comments under articles. Suddenly, people were finding ways to speak their minds.
Photograph: Michael J. Clarke
Over the years, how has that initial interest sharpened?
Our society embraced social media platforms without having a clear vision of where things would go. People were putting all their information on Facebook and never questioning it. Then they started to realize, “This company knows all about me, and maybe I’m not really comfortable with that.” We’re continuing to improve our knowledge and to ask what the impact of social media is on people’s well-being. But there are no clear answers. With any new feature or platform, and with new generations growing up on the internet, in a lot of ways we are flying blind regarding the effects of social media—we are just trying to shine a light through the fog.
Why does inflammatory content thrive on social media?
Negativity is something that we as humans tend to reward. We’re more likely to interact with negative content than with positive content. That is true if we look at which stories are more likely to end up in the news. It’s often what makes people comment online under other people’s comments and under videos. We are more likely to engage with scandalous and negative content because, at the end of the day, we really like to tell people on the internet that they’re wrong.
In your view, what responsibility should social media companies have to regulate the content and protect users on their platforms?
Freedom of speech inherently protects people from governmental interference, not from getting banned from Twitter or YouTube. Private companies can choose what they want to see on their platforms.
The danger is that as more extreme voices spread their disinformation and their hate unchallenged—alienating, harassing, or threatening users—they are also “mainstreaming” much more extreme talking points in political discussions.
There are multiple direct consequences to this. After Twitter granted formerly banned accounts “amnesty” to return to the platform, for example, reports indicated that the volume of extreme speech went up. The risk of letting formerly banned accounts back on the platform is threefold: one, you let more extreme voices on the platform that spew falsehoods and hate; two, they often harass other users relentlessly; and three, the platform itself risks alienating mainstream users and becoming a megaphone for extremist views.
Does this negativity have impacts beyond the platforms?
First of all, it’s important to understand that hate speech online can lead to offline action. There are also very real consequences to the spread of disinformation. For democracy to function, we need people to trust in institutions, in other people, and in journalism. Disinformation erodes that trust. Disinformation sows doubt about elections, for example, about the electoral process, and inherently about democracy itself. Democracy needs to be legitimized over and over again, every day. If people don’t trust the processes that reproduce and reinforce democracy, then that’s a problem for all of us.
Is there a role for governments to play in regulating online platforms?
Some of the hate speech that goes on in the US doesn’t fly in Germany because it’s illegal. You can’t deny the Holocaust in Germany. If you do, then by law a social media platform has to remove that content. The EU is in the process of implementing its Digital Services Act, which addresses online misinformation and transparency.
In the US, free speech is obviously sacrosanct. However, there is bipartisan interest in reforming Section 230 of the Communications Decency Act. Broadly speaking, Section 230 gives social media companies the freedom to host content while not necessarily being responsible for it. This legal freedom allowed the internet to grow and companies to try things out. But as time has passed and we’ve seen the incredible power of social media, both parties agree, in general, that a reformed Section 230 needs to put some guardrails on those companies, outlining their legal responsibilities. And in the context of generative AI in particular—which amplifies a host of issues, including making it easier for users to create realistic fake images, videos, and content—we need those guidelines basically yesterday.
How are you addressing generative AI technologies like ChatGPT in your classes?
It’s important to understand when generative AI is helpful and when it’s not. In my Intro to Communication class, we look at a short paragraph of text created by ChatGPT that includes quotes. There are citations, but they’re missing page numbers or authors. We work through several ChatGPT-generated versions until the students are satisfied with the citations. But then I ask them: Have you checked whether those sources actually exist? And when they check, they discover the sources are just made up. So that’s just one example of how [using generative AI uncritically] can really harm your work by sounding authoritative but actually being very wrong.
You mentioned that people are losing trust in institutions, including journalism. How are you preparing your students to start their careers in that media environment?
I can’t overstate the importance of journalism for the health of democracy. Trust in journalism is down among certain cohorts of people, certainly. But it’s not a blanket distrust. And across the spectrum there is strong trust in local journalism. Local journalism allows people to stay connected to their communities. People also kind of forget about political polarization on a local level, because most issues aren’t “left” or “right.” If the community needs a new sewage plant, no one is asking, “Is this woke?” It’s just a question of a shared need. You have different opinions and you must come to an agreement.
The closing of local news outlets, or their takeover by big corporations, is creating a crisis for democracy. There are now news deserts where you basically have no local news. Instead you have Facebook groups that are obviously not operating up to journalistic standards. And then you have national outlets that filter issues through identity politics, which is not very helpful. People need to be able to contribute to democracy every day. If local governments are not being held accountable, and local businesses aren’t being held accountable, then you lose that sense of community.
Any tips for helping users vet the information they share?
Check your sources, of course, but also just remember that you might be wrong. We always want to be correct—and often we’re just not. We might remember something differently. We might have picked up something that has since been reviewed or disproven. I think the key part is just staying humble. I tell my students that if I’m wrong I appreciate a correction. Also, “I don’t know” is often an acceptable answer.
Your research takes you to the very worst corners of the internet. Do you believe that social media can be a force for good?
I think it’s very much a case-by-case question. If you ask [Facebook and Instagram parent company] Meta, they will cite movements like the Arab Spring and Black Lives Matter that were aided by social media. And that is all correct. But there are so many negative examples as well. We have seen political interference and dangerous public health misinformation. Facebook has even been accused of fueling the genocide in Myanmar.
So if you ask me whether I think social media is a net positive or a net negative for the world? The jury’s still out, because we still don’t know where we’re going.