
A new study from UC Davis suggests that artificial intelligence recommendation algorithms on sites like YouTube and TikTok can play a role in political radicalization. The research team trained "sock puppets," artificial entities that act like users. Each sock puppet was given a series of right- or left-leaning videos to watch every day, and then the team would compare the recommendations on the sock puppet's homepage to see if its recommended videos gradually became more biased.

In this episode, UC Davis computer science Ph.D. student Muhammad Haroon, who led the study, discusses how the study was designed, what the team found, and a new digital tool they created to mitigate the radicalizing effect of social media platform AI algorithms.


Muhammad Haroon Now the system is trying to keep you engaged by pushing you more stuff that's aligned with your political interests. And if you're a person who's already on the extreme, the only way to go is further toward the extreme.

Soterios Johnson Polarization is nothing new in American politics. What is new is how deep and pervasive the divisions have become. A new study from UC Davis finds that the artificial intelligence algorithms used by social media platforms can play a role in fostering extremist views and political radicalization. Platforms including YouTube and TikTok use those algorithms to recommend more content to users based on what they've already seen. This is The Backdrop, a UC Davis podcast exploring the world of ideas. I'm Soterios Johnson. How big a factor are these algorithms, and what, if anything, can be done to help people keep a broader perspective? Computer science Ph.D. student Muhammad Haroon led the study on those AI algorithms. Welcome to The Backdrop, Haroon.

Muhammad Haroon Hi, Soterios. Thank you so much for having me.

Soterios Johnson Sure. So let's dive right into the study. What exactly did you find and how did you go about it?

Muhammad Haroon Yeah. So the study we conducted was primarily on YouTube, but I do a lot of research on recommendation systems and their algorithms in general. In this particular research, we were interested in the question of online radicalization via YouTube recommendations. There has been prior anecdotal evidence showing that some people have been led into these rabbit holes of polarization and radicalization, but here we wanted to tease apart the role of users from the role of the recommendations made by the platform itself.

Soterios Johnson Right. So how did you define radicalization for the purpose of this study?

Muhammad Haroon So in this study, we characterized the content that was recommended on YouTube by looking at who was tweeting about that content on Twitter. We defined a video's level of slant toward a particular political ideology based on the tweets about it. If a video was tweeted only by Republican audiences, we would conclude that this is a video that's popular with people who hold Republican ideologies, and the same goes for people who hold Democratic ideologies. We defined radicalization, in particular, as the movement from content that was shared by a mixture of Republican and liberal users to content that was primarily shared by one side of the spectrum.

Soterios Johnson So these are people who are self-identifying as either Republican or Democratic, one way or the other.

Muhammad Haroon We do not have self-identification from these users. We were looking at who they were following on Twitter. So if you're a person who tweets a particular video, we look at the list of accounts that you're following. If you're only following Republican accounts such as Candace Owens or Sean Hannity or some other well-known person associated with the Republican identity, that gives us a pretty good idea of what your political leanings are.

Soterios Johnson Right. Okay. So how did you conduct the study?

Muhammad Haroon So the study was purely systematic. We did not rely on actual users, because prior research has generally dealt with one of two things: people have either looked at real user watch histories, or they've looked at what the general recommendations on YouTube are, an assessment of just the algorithmic recommendations without any user volition. We ran the study by training several hundred thousand sock puppets, which are basically fake user accounts that browse content on YouTube, watch videos, and go from recommendation to recommendation. And we tried to identify, based on what we were having these accounts watch, what recommendations the system generated.

Soterios Johnson So these sock puppets, are they like bots or something like that?

Muhammad Haroon You could call them bots. They were not actual Google or YouTube accounts. They were just random browser sessions that were interacting with the platform in parallel, watching videos for a set amount of time, and practically crawling through the entire interface of YouTube.

Soterios Johnson So it's almost like you had a cohort of, what, thousands of virtual people going through, seeing where the algorithms on these platforms would direct these sock puppets.

Muhammad Haroon Exactly. And this was different from prior research because, again, we were interested only in what the algorithm was doing, but here we could actually control what we were doing on the platform. So we had all these sock puppets set to watching a bunch of videos that we had pre-identified as far left-leaning, far right-leaning, left, moderate or center, or just right. Using this taxonomy of ideologies that we had identified for YouTube channels and the videos from those channels, we were able to tell what the recommendations on the platform were on any given day for a user of a particular ideology.

Soterios Johnson Did you kind of quantify the bias of these various videos in some way, or did you just have this far right, far left description?

Muhammad Haroon These categorizations were built on top of our underlying quantification of the bias in these videos. That quantification was based, again, on who was tweeting about a video. So if a video was tweeted by 50% Republicans and 50% liberals, we would say that this video has a slant of zero. But if a video was only tweeted by people with Republican-leaning ideologies, the video would have a score of one, and vice versa, minus one for liberal or Democratic users.
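
To make the scoring concrete, here is a minimal sketch of one formula consistent with the examples Haroon gives: the share of Republican-leaning sharers minus the share of Democratic-leaning sharers. The function name and inputs are illustrative, not the study's actual code.

    def video_slant(republican_sharers: int, democratic_sharers: int) -> float:
        """Slant in [-1, +1]: +1 if shared only by Republican-leaning accounts,
        -1 if shared only by Democratic-leaning accounts, 0 for an even split."""
        total = republican_sharers + democratic_sharers
        if total == 0:
            raise ValueError("video has no identified sharers")
        return (republican_sharers - democratic_sharers) / total

    # Examples matching the description above:
    # video_slant(50, 50) -> 0.0    (even mix)
    # video_slant(80, 0)  -> 1.0    (only Republican-leaning sharers)
    # video_slant(0, 80)  -> -1.0   (only Democratic-leaning sharers)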

Soterios Johnson Okay, so what exactly did you find?

Muhammad Haroon The overall study lasted several months, but if you only looked at a particular sock puppet, its life was a little under two hours: it spent about one hour watching 100 videos on YouTube for 30 to 40 seconds each, then a couple of minutes looking at what the homepage for that sock puppet was, and then a couple of minutes looking at what the recommendations in the "Up Next" autoplay feature of YouTube were.

Soterios Johnson Do those recommendations change depending upon how long you were watching a video?

Muhammad Haroon There have been some studies showing what effect watching a video for a given duration has on the recommendations. In one study, someone identified that it's around 22 or 23 seconds at which point YouTube registers that a view has been made for the video. Basically, the increment in the view count for that video happens around the 20-to-30-second mark, which is how we ended up with the 30-to-40-second watch time for our sock puppet experiment.
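
As a rough illustration of that setup, here is a minimal sketch of what one sock puppet's scripted browser session could look like, assuming Selenium with ChromeDriver. The tooling, the CSS selector, and the 35-second watch time are assumptions for illustration, not the study's actual harness.

    import time
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    WATCH_SECONDS = 35  # within the 30-to-40-second window where a view registers

    def run_sock_puppet(training_video_urls):
        """One sock puppet: a fresh, logged-out browser session that builds a watch
        history from pre-labeled videos, then records the homepage recommendations."""
        driver = webdriver.Chrome()  # clean profile, so no prior history or account
        try:
            for url in training_video_urls:  # e.g. ~100 videos labeled far left ... far right
                driver.get(url)
                time.sleep(WATCH_SECONDS)    # "watch" long enough for the view to count
            driver.get("https://www.youtube.com/")
            time.sleep(10)                   # let the homepage recommendations load
            links = driver.find_elements(By.CSS_SELECTOR, "a#video-title-link")  # illustrative selector
            return [link.get_attribute("href") for link in links]
        finally:
            driver.quit()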

Soterios Johnson So what did you conclude in the end?

Muhammad Haroon So in the end we had three tests overall that we wanted to look at. The first one was: do these sock puppets exhibit ideological bias in their recommendations after we have trained them on a particular watch history? And that showed that a user's prior exposure to content they are politically inclined toward leads to a statistically significant difference in the recommendations on the homepage. So if you're a user who watches far-left, left, or even centrist content, your recommendations on the homepage generally reflect that ideology of yours. That led us to conclude that there is indeed some ideological bias in the recommendations just based on your prior watch history. But that's not the key question of radicalization that we were trying to answer. That question was: if you continue to watch the recommendations from the system after you've built this watch history of prior exposure, do these recommendations gradually become more radical? That is, do they move from your current political leaning to one that's more extreme or polarizing? And our findings showed that if you started off on videos that had a slant score of minus 0.7, which is already pretty high, just following the recommendations without any user volition led to YouTube videos that were at a score of minus 0.78. Similarly, for the right-leaning sock puppets, the score went from plus 0.7 to plus 0.8, which indicates that you moved from a video that was primarily a mixture of some Democratic and Republican audiences to a video that had a higher share of just Republican audiences.
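
As a back-of-the-envelope version of that second test, the drift can be read as the change in average slant between the seed videos and the recommendations reached by following the trail. The sketch below is illustrative, reusing the slant scores described earlier; it is not the study's actual analysis code.

    def mean_slant(slants):
        return sum(slants) / len(slants)

    def radicalization_drift(seed_slants, recommended_slants):
        """Positive drift means the recommendations ended up further from zero
        (more one-sided) than the videos the sock puppet was trained on."""
        return abs(mean_slant(recommended_slants)) - abs(mean_slant(seed_slants))

    # The numbers Haroon cites: left-leaning puppets drifted from about -0.70 to -0.78,
    # right-leaning puppets from about +0.70 to +0.80, a drift of roughly 0.08 to 0.10
    # toward the more one-sided end of the scale.
    # radicalization_drift([-0.70], [-0.78])  -> 0.08 (up to floating-point rounding)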

Soterios Johnson So either way, I mean, the algorithm kind of figures out where you're at, and it kind of pulls you a little further to that side in the stuff it recommends to you.

Muhammad Haroon Right. I think this has to do with what the algorithm is trying to optimize for here. Recommendation systems' main purpose is to maximize user engagement, right? And user engagement involves showing people more of what they like. That's all fine if you're just watching innocuous content, for example cat videos or cooking videos; it would just recommend you more and more cat or cooking videos. But if you're a person who's watching political videos instead, that's where this becomes problematic, because now the system is trying to keep you engaged by pushing you more stuff that's aligned with your political interests. And if you're a person who's already on the extreme, the only way to go is further toward the extreme.

Soterios Johnson So what you're basically recommending, as a way to help people not become politically radicalized on these social media platforms, is to change the algorithm so that it doesn't pull them toward, you know, the far extreme, but somehow mixes in content that is more, I guess, in the middle.

Muhammad Haroon Right. Another aspect of this is content that is diverse. We want people to experience both sides of the spectrum.

Soterios Johnson I would imagine that, you know, as you said, social media companies create these algorithms to keep people engaged so that they can, you know, sell eyeballs, sell the attention spans of people to companies that advertise. So I would imagine that they wouldn't be very willing to change the algorithm. Have you even approached these companies, or have there been any discussions with them, about changing the algorithm, and how willing might they be to do that?

Muhammad Haroon At one of the conferences that I attended earlier this year, I was actually eating lunch with someone who, at that moment, I did not know was from YouTube. He was talking to me about my research, asking questions about what I found about recommendations and that kind of thing. And then at the end, he told me that he actually works at YouTube. So that was a very interesting scenario that I recently experienced. But the takeaway from my conversation with him, and this is something I believe is a recurring issue with a lot of the deep learning models being used to build recommendation systems online, is that all of these are black boxes, even to the developers. They are all data-driven: they take in some data from the users who use the platform and, based on that, make decisions about what recommendations to show to other users. So I think the issue here is not that these companies are willingly creating systems that radicalize users. It's that the data-driven machine learning model is making these decisions based on what data it has seen coming from other users, and that black-box nature of the system itself leads to these issues.

Soterios Johnson These are AI algorithms, so it almost sounds like they set it up and it evolves on its own. At some point, you almost don't even have control over it anymore. Is that accurate at all?

Muhammad Haroon Yes, you could say that. A lot of studies have been performed on actual production systems to determine what biases they possess. A lot of loan-approval systems, for example, have had issues with bias against minorities. And this kind of thing comes, again, from the data-driven nature of the platform or the system itself.

Soterios Johnson Do you think that it's that these algorithms speed up the process of radicalization, or do they just make this extreme content more accessible?

Muhammad Haroon It could be argued that this content generally already existed on the platforms themselves and that the system brings these videos to light based on whatever the user's watch history is. So I guess you could say that both things are simultaneously true. There is this interesting research that was performed recently on YouTube about the supply and demand of problematic content on the platform. It talked about how, because some users crave this kind of content, creating the demand for it, that inadvertently leads to a bunch of content creators coming in to fill the supply of that content.

Soterios Johnson So did you find that this potential radicalization was more prominent on either side of the political spectrum?

Muhammad Haroon So we identified that both sides of the spectrum, left and right, experienced this push toward the extreme to similar extents. But when we tried to de-radicalize the users by manipulating their recommendations, that part was much harder for the right-leaning users.

Soterios Johnson And by users, you're talking about the sock puppets.

Muhammad Haroon The sock puppets, right.

Soterios Johnson So how exactly did you try to manipulate it to de-radicalize them?

Muhammad Haroon So the de-radicalization was part of a principled approach that we were trying to develop, which was to inject into the user's watch history videos of a variety of different types. We tried to include videos in the moderate category that we had, and videos in the left category for the right-leaning users. And we developed this principled system, which would look at your current set of homepage recommendations and decide, based on the bias it was currently seeing there, which video to inject to optimally reduce that bias on the homepage. It would identify that video and then watch it for the user in a background tab. For example, if you're a right-leaning user, it would identify that if you watched a certain left-leaning video, your recommendations would change in a certain way.
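
A minimal sketch of that selection step might look like the greedy rule below: estimate the bias, then pick whichever candidate video would pull the running average slant of the watch history closest to neutral. This is an assumed simplification for illustration; how the actual system models the effect of an injected watch on future recommendations is not spelled out here.

    def pick_injection(history_slants, candidate_slants):
        """Greedy choice: return the candidate slant whose injection into the watch
        history brings the history's mean slant closest to zero (neutral).
        Assumes homepage bias roughly tracks the mean slant of watched videos."""
        def mean_after(candidate):
            slants = history_slants + [candidate]
            return sum(slants) / len(slants)
        return min(candidate_slants, key=lambda c: abs(mean_after(c)))

    # For a right-leaning history, a left-leaning candidate gets picked because it
    # pulls the running mean closest to zero:
    # pick_injection([0.7, 0.8, 0.75], [-0.9, -0.2, 0.0, 0.4])  -> -0.9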

Soterios Johnson That's pretty fascinating. So you were going in and manipulating the sock puppet: instead of just following the recommendation that the algorithm would give it, you would actually make it play, say, a more left-leaning video. But the algorithm kept on dragging it toward the right, whereas the same thing didn't happen when you did the manipulation to the left-leaning sock puppet. Right?

Muhammad Haroon Right, that would be a good way of saying it. One of the things that we noticed was that you had already established yourself as a person who is right-leaning, and in that case, watching a bunch of left-leaning videos did not remove the bias that the system had already created for you as a right-leaning user.

Soterios Johnson Right. But I thought you were kind of saying that when you did the same manipulation to the left-leaning sock puppets and you injected watching some right-leaning videos, the algorithm wasn't as resistant; it didn't keep pulling the left-leaning sock puppet further to the left.

Muhammad Haroon Yes, for the left-leaning users we were able to move them toward the moderate category much more easily than we were for the right-leaning sock puppets.

Soterios Johnson That's interesting. So those algorithms, again, are AI, so they kind of evolve on their own. What do you think is behind that?

Muhammad Haroon We can't really comment on what exactly the reasons are. Again, these are all black boxes not just to us but to the developers of the system, so we can't really say what exactly the issue is. But I think one of the major differences between the left- and right-leaning content is the abundance of right-leaning content compared to left-leaning content. There are a lot more channels that appeal to the right-leaning categories than there are for left-leaning users. All those channels around the intellectual dark web, for example, Ben Shapiro, Sean Hannity, all those people generally fall into the same category as what we would identify as right-leaning.

Soterios Johnson Yeah. So it might just be that the sheer quantity of the right-leaning content just makes it easier to kind of stay on that side of the spectrum as a user being drawn into it.

Muhammad Haroon Right. You could say that the supply for the right content far outweighs the supply for the left content.

Soterios Johnson Interesting. So now I understand you are still running some of these sock puppets. So you're still collecting data every day for months on end here. Did you notice any changes in the recommendations during the run up to the midterm elections?

Muhammad Haroon Not for the midterms specifically, because a lot of these platforms, I realized, TikTok especially, made some changes to the platform leading up to the midterms. So I'm curious if one of those changes involved decreasing recommendations for midterm-related content. But I do have another interesting anecdote, which was around the time of the shooting in Texas. Around that time, I was looking at what the recommendations after the shooting were for the right and left users. And I noticed that while for the left users we were seeing recommendations for content related to gun reform or criticism of law enforcement, the recommendations for the right were more about healing and recovering after the fact, less about actual criticism of gun laws or gun reform, and more about thoughts and prayers in general. So we could clearly see a difference in the narratives for the left and right sock puppets, which you could say translates to this: if you're a right-leaning user who mostly watches right-leaning videos, like Fox News or something, you will end up seeing a completely different narrative on your homepage recommendations than users on the left.

Soterios Johnson Right.

Muhammad Haroon And as to your question about the midterm elections, I am still crunching through the data, but I did not see that kind of narrative spill over into that particular topic. I'm curious whether they made specific changes to the algorithm that made it so. Another interesting thing that I noticed was that when we first started this experiment, sometime in early 2021, we were seeing much higher degrees of radicalization than we do currently with the amount of data that we have now. That troubled me for quite a while, until I went back and looked at exactly what was happening to these sock puppets under the hood. And that's when I observed that a lot of the channels that we had initially identified as far right, Joe Rogan's YouTube channel and a couple of other channels, actually ended up getting banned sometime in the middle of our study. And that really drove down the radicalization values we were originally seeing for some of these sock puppets. So I guess that speaks to the dynamic nature of the algorithm as well. It's constantly evolving: more data is coming in, channels are getting removed, channels are getting added. And that makes it difficult to conclude, for, say, the study we conducted last year, that the findings we had then would translate to how the platform is working currently, which is why it makes sense to do what we're doing now: a longitudinal analysis of how the algorithm is actually evolving over time.

Soterios Johnson Now, I know you're a computer scientist, but you're leading this study with kind of an interdisciplinary approach, working with the communications department and its researchers. Do you have any recommendations for either policymakers or individuals to mitigate the spread of, you know, radicalization or extremism? What can a free society do, without impinging on, say, free speech or freedom of thought, to still try to limit political radicalization?

Muhammad Haroon I think that's a question best left for the communications people on the project. But I would say this: the first step is becoming a bit more engaged with the content that you're consuming online, a bit more aware of whether the content that you're watching reflects the sentiments of the wider public, and being informed, fact-checking whatever information you're consuming, whether you viewed it on YouTube or Twitter or TikTok. How much truth is there to that?

Soterios Johnson Yeah. So checking the source of the content you're consuming and maybe comparing it to other sources and seeing, you know, if what you're seeing could really be true or not.

Muhammad Haroon Definitely. And a lot of the time, the content that is most problematic is somewhat partisan in nature, so it's going to push the view that the other side is basically evil. Just trying to perceive or understand the perspectives of that other side, I think, is one of the better things that can be done: trying to familiarize yourself with what their viewpoint is. It's okay to disagree with that viewpoint, but just being aware of and respecting what it is and what it means to the other side, I think, is one of the key things that we are currently lacking and should be promoting for regular users of these platforms.

Soterios Johnson Right. I guess, you know, one of the luxuries of living in a free society is that you have access to all these different points of view. And so you shouldn't limit yourself. It's not a bad idea to at least expose yourself to what other people are thinking, not that you necessarily need to agree with them like you were saying, or feel like you know you need to change your views, but just at least try to understand other perspectives.

Muhammad Haroon Definitely.

Soterios Johnson So when you went into this, were you also hoping to maybe develop some sort of solution?

Muhammad Haroon Yes. So our goal with all of this research is to come up with possible interventions to help mitigate the problems that we identified. We actually ended up creating a tool, which for now we're just referring to as CenterTube, whose goal is, again, to look at your recommendations on the homepage and identify which videos would optimally reduce whatever bias currently exists in your homepage recommendations. This system would then inject into your watch history a bunch of false video watches, just to manipulate the algorithm into thinking that you are a user who is not as biased as it would otherwise have thought.

Soterios Johnson Interesting. I mean, have you tested this tool?

Muhammad Haroon Yes. So this tool was built on top of the results we had already seen in our study regarding the intervention. And we are hoping to release a version of it, at least a simpler version, out there for users to use, and then see what changes to the recommendations we can get this tool to make. One limitation of this system that we've identified, something we have no obvious answer for, is that people who are already too far gone, who are already extreme, are the ones who need the system the most. But how are you going to convince them to install this tool and change their recommendations?

Soterios Johnson And I also wonder, like, how long will it take for the AI to catch up to the tool and figure out how to work around it?

Muhammad Haroon Yeah, most definitely. It's always been deemed an arms race. It's always been an escalating effort on both ends.

Soterios Johnson Well, it's really great to get a scientific, quantitative handle on all this, especially as social media seems to be here to stay. Thank you so much for sharing your work, Haroon.

Muhammad Haroon Thank you so much for having me.

Soterios Johnson Muhammad Haroon is a computer science Ph.D. student at UC Davis. He led a study that found the artificial intelligence algorithms used by social media platforms can play a role in fostering extremist views and political radicalization. If you like this podcast, check out another UC Davis podcast, Unfold. Season four explores the most cutting-edge technologies and treatments that help advance the health of both people and animals. Join public radio veterans and Unfold hosts Amy Quinton and Marianne Russ Sharp as they unfold stories about the people and animals most affected by this research. I'm Soterios Johnson, and this is The Backdrop, a UC Davis podcast exploring the world of ideas.