What is your definition of radicalisation?
I’m not going to rely on political science definitions and pretend that this is my area of expertise. Instead, I’ll focus on what I feel is important from the perspective of my own research: the fact that radicalisation is always perceived as such within a specific, current context – in relation to mainstream politics, the media and so on. In short, what we call “radical” must always be compared with some “centre” – something that’s considered to be an acceptable position.
Could this be applied to something current?
I’ve been thinking about this a lot in today’s context of the political and media framing of the genocide in Gaza. Journalists have asked me several times, “Why is pro-Palestinian radicalism spreading on TikTok?” The problem is that many people have become “radicalised” precisely because social media has given them access to information from the scene of events. This information has starkly contrasted with the view presented by the media and politicians as normal. We should therefore perceive radicalisation as a “neutral” phenomenon rather than a purely negative one. I myself feel radicalised by this contradiction between the reality in Gaza and its media and political coverage in our country. A radical position – and thus in a certain sense an anti-systemic position – is appropriate when the system is failing in some way. A radical position only appears radical from the perspective of a system whose own norms have ceased to be sustainable.
So yes, some might say that TikTok videos have radicalised me. But I see these videos in the context of everything else that’s happening, including all the other information. The mainstream in our country perceives the pro-Palestinian position as radical. And at one time, it was almost radical to say that refugees are human beings. So if we want to talk about radicalisation in a negative sense, I would lean more towards the term extremism, which refers to a situation where radicalism threatens social cohesion or specific people, often through the use of violence.
What has fundamentally changed in this regard with the advent of social media, and what role do their algorithms play in this?
I feel that the current debate on the phenomenon of online radicalisation focuses too much on the role of algorithms. We’re hearing this from both politicians and journalists. However, this position lacks solid grounding in data and ignores what we already know about radicalisation, which can ultimately be dangerous. Pointing the finger at impersonal algorithms when looking for culprits is politically the easiest thing to do. However, it ignores the real causes, which would require reforms at the systemic level.
What is it that makes the effects of social networks and the online space so frightening to society?
Social media algorithms are an organic part of our communication space. People around the world communicate and establish contacts there. But this is nothing new. Logically, any such environment – thanks to its internal mechanisms – can support certain forms of radicalisation. The difference with social networks is that every effect is many times faster – this can be positive in terms of communication possibilities, but also negative in terms of radicalisation. When a boy grows up in an environment where it’s common to curse at Roma people, he’s unlikely to read Mein Kampf, because it’s too long. Instead, he might watch a few short videos on TikTok that explain to him “clearly” how white supremacy works.
An important element in this process is the fact that activity on social media is highly performative, allowing strongly politicised or radical groups to form very quickly without the outside world noticing. The broadly discussed manosphere is a good example of this. Social media allows people to perform their attitudes, and political representatives are also adopting a similar “identity game”.
Creating the identity of an incel who hates girls would take some time within the collective of a school class, and it would be met with contempt from others. In an online forum, by contrast, a boy like this can instantly gain support and admiration. So it’s not the case that the algorithm wakes up in the morning and says, “Okay, now I’m going to radicalise Little Johnny.” Algorithms wouldn’t do that on their own if we weren’t creating the content. They offer us what we “want to see” – so if there’s incel content on the internet, in a way it’s also our problem.
In other words, algorithms don’t inherently have a political agenda and they don’t operate in a vacuum...
Yes, despite all the criticism of algorithms, we shouldn’t forget about the broader context. If I grow up in an environment that’s reflective and where I can openly share my values or thoughts, if I can show my parents what I saw on TikTok and they explain to me why it’s nonsense, I can process content like this more easily. If I grow up in a vulnerable environment, the algorithm will more easily “exploit” my loneliness.
I keep asking myself Jacques Lacan’s psychoanalytical question: How do I know what I want? Isn’t the main problem of our “digital totality” precisely the fact that we’re trapped by what we ourselves think we want? If social networks constitute the basic space for our communication and thus become a public good, isn’t it a fundamental problem that they’re owned by oligarchs and programmed to transform our increasingly instinctive and short-term needs into screen attention and sell it to advertisers?
The political economy of platforms is an absolutely fundamental part of the whole problem. Social networks, which significantly influence entire areas of social life, are privately owned and designed primarily to maximise profit. To them, their impact on everything else – if they’re even addressing it at all – is secondary. Society, however, can’t ignore it.
I’ve been talking about regulating social networks for a long time. For many years, I’ve been labelled a “communist” because of this. Today, the situation is partially reversing, as political representatives have realised that some regulation is inevitable if we don’t want to just stand by and watch political and social reality corrode.
Why is this changing?
There are several reasons. I’m using a bit of hyperbole here, but when I was warning people about sexism and the cyberbullying of women and girls, no one at the top was particularly interested. But as soon as boys were at risk, this slowly became problem number one. I’m glad that it’s finally being addressed, but I also think I can take the liberty of being a bit sarcastic here.
As far as regulations are concerned, some are gradually being implemented. In this respect, I’m quite a fan of the European Union; thanks to the Digital Services Act, we are globally ahead in terms of legislation pertaining to platforms. The effort to remedy the imbalance of power is a step in the right direction. But these are still private companies, big players, who will circumvent whatever they can. So the question remains whether the whole thing could be designed differently. We see attempts like Mastodon, but so far it simply isn’t working as an alternative. Even so, we should be asking ourselves more and more often what alternative platforms might look like.
And if we stick with private digital platforms, do you yourself have any ideas from an anthropological perspective about a specific regulation that might improve the situation?
The transparency of algorithms, which is covered by the Digital Services Act, is definitely important. But ordinary users don’t have the capacity to exercise rights like this to their advantage. In this regard, I was intrigued by an idea I heard in a debate at last year’s Inspiration Forum, where the director of the Humboldt Institute in Berlin spoke about a kind of “data union” – agencies offering their services to people for hire. They would be like digital ombudsmen who would help me understand the digital environment and find out what rights I have in relation to a specific network and how to enforce them.
The second important thing is anonymity, which is extremely valuable in the digital environment. The internet would be useless in authoritarian regimes if it didn’t allow anonymity. That’s why I consider any attempts at removing this anonymity – which we encounter from time to time, for example in the name of protecting children – to be dangerous.
In any case, communication on social networks is changing and is inevitably reflected in everyday communication and social reality. Do you think we’re witnessing the normalisation of extremist language?
We already have a number of high-quality studies on how online expression influences offline communication. An interesting concept is “algospeak”, which is a way we can “trick” the algorithm when talking about topics such as sex, drugs or suicide. Younger people in particular won’t say “I’m gonna kill myself” on TikTok; they’ll say “I’m gonna unalive myself”.
When it comes to the language of radical communities such as the manosphere, I see the process by which some prominent public figures are adopting this language and even bringing it into serious political discourse as much more problematic than the algorithm itself. My favourite example is Petr Fiala’s infamous TikTok video, in which the Prime Minister refers to himself as a “sigma”, while Andrej Babiš is an “alpha”. The moment politicians start normalising a terminology of evolutionary hierarchy like this, a problem arises. This is also supported by research and evidence in the field of disinformation: the problem isn’t when someone happens upon disinformation on the internet; the problem is when a politician subsequently appears on television and legitimises it. Only then does the perception of what is considered normal shift, and it begins to be used commonly.
The common denominator of all these developments is undoubtedly emotion. Does this mean that emotions always lead to polarisation? Why are emotions such a powerful driver of radicalisation?
I think we somewhat idealise the pre-social-media era when it comes to how rationally we approach public content. Shocking videos have simply filled the space that was long occupied by, for example, tabloid newspapers.
In other words, there was never a time when we evaluated information purely rationally. The main difference is that all these shortcomings of social discourse are now transparent.
So it’s just as messed up as it’s always been, but at least today we can see it. I get that. I also try to take the best from every situation...
What I mean is that emotions have always been omnipresent wherever people have gathered – from politics and the media, to debates in pubs.
I’d like to return to the topic of the radicalisation of young people once more. Given the increasing number of physical assaults in public spaces, which is also borne out by violent crime statistics, there’s often talk of a kind of lost generation marked by Covid lockdowns. During their formative years, this generation was torn away from school and other social groups and left at the mercy of the often toxic online world of their bedrooms. Do you also see this as a major contributor to the radicalisation of young people?
Social networks have undoubtedly been and continue to be a catalyst for various processes. But are they really responsible for radicalisation on their own? During Covid, social networks were also a welcome way to stay in touch with friends – without them, the isolation and its consequences could have been much worse.
If we have a problem with radicalisation among young men, I sense that the causes are much deeper, namely in the entire system, which, without beating around the bush, presents us with a reality in which a global pandemic followed by another crisis is the new norm. Capitalism today has completely ceased to operate with the idea of hope. Older people may find it easier to get used to this, but for young people, growing up in a system that shows them no path to anything positive is in itself a fundamental source of frustration, a feeling of lostness and, as a result, a search for false alternatives and aggression. This is all the more powerful in a country like the Czech Republic, where they’re told at the same time that there are no structural problems and that they themselves are therefore responsible for their hopelessness and failure.
Who else are they supposed to listen to but false male role models who promise them that if they try hard and follow the path of “traditional masculinity”, they’ll have a pretty girlfriend, a car and respect? This is, of course, a completely false narrative, but in their situation it gives them much more hope than what is much closer to the truth: that they’ll never be able to afford their own home, and that everything here will burn due to climate change.
The aggression is caused by ever-present hopelessness, and the hopelessness is caused by the social situation. Social media isn’t responsible for this; it only reflects it and offers quick fixes. Even if we shut down the internet, capitalism won’t just go away. Nor will the frustration and hopelessness that are slowly becoming its main products.
So social networks and their algorithms are more of a scapegoat in all this?
For politicians, it’s at least easier to say “Let’s talk about algorithms” than “Let’s talk about the fact that people have no future”. The former allows them to scare and regulate, while the latter would require a just transformation to something more liveable and sustainable at the systemic level.
Let’s also touch on the topic of the radicalisation of women and girls, which has so far remained in the shadow of the manosphere and the radicalisation of boys and men. What aspect would you highlight and what have you researched yourself?
I’ve been following the topic of anti-feminism among women, which, in my opinion, shows that the causes are mainly socio-economic, even in the era of social media. During Covid, I noticed changes among female influencers focused on lifestyle, wellness or parenting. Even these things have become political issues: who is my child playing with outside? This shift was greatly accelerated by vaccination, which became a highly political issue; these women often found the confidence to express their views on it because they see it as their domain, as they’re usually the ones responsible for childcare.
From a gender perspective, the difference lies in the platforms on which radicalisation occurs. For women, Instagram – which was home to female lifestyle influencers and remained apolitical for a long time – is dominant. Among the female influencers who were respondents in my research, I often encountered the formative experience of childbirth. I found that it was precisely the negative experience of childbirth in the Czech Republic that constituted a strong “radicalisation pipeline”. This negative experience, in which the system failed to respect their physical or personal integrity, usually led these women to a Facebook group where they could share it. The group would usually already contain some anti-vaxxer voices, and there a complete distrust of the system would take hold, radical language would be adopted, and so on.
Again, we can use this example to point out everything that had to happen in offline reality before we can even begin to talk about social networks radicalising these women. If a woman had criticised the system on Twitter, a group of men would very quickly have sent her back to the kitchen. Instagram or Facebook provided these women with a much safer “echo chamber”.
Is it different for men?
I think we should approach both gendered paths to radicalisation as communicating vessels. A shared element is the idea of a kind of failure of feminism, whose achievements in terms of opportunities, professional growth and other freedoms have been overshadowed by the problems and forms of oppression that this feminism has put on the back burner. That’s why many women are so attracted to the path that says: “Feminism has failed, we have to return to traditional roles, because at least you’ll be protected in this traditional submissive role”. And the manosphere tells the exact same story about the failure of feminism and progressivism, pointing to traditional male dominance as the right path.
I’d be interested in your outlook: what trend in the online world concerns you the most today?
I’m concerned that efforts to do something about the situation are being exhausted by fantasies about “evil” algorithms, and that we’re thus missing the main target, which is socio-economic inequality. It seems to me that all political and social initiatives are moving in the direction of “we must protect our children from the internet”, and that is a blind alley – and a potentially dangerous one.
I’m not blaming people whose main and understandable concern is that their son spends too much time staring at his mobile phone – I’m blaming the politicians and journalists who, instead of looking for the roots of the problem, ride the wave of an easy – albeit secondary – target along with everyone else.
Finally, what hope do you see for the future?
As a millennial, I encounter young people through my research or because I teach them. And when I compare my experience with how their problems are presented to us, it seems to me that we’re underestimating them. Many of those who seem lost may just need someone to talk to. It fills me with hope that my conversations with the younger generation show me that they’re aware of all this, often much more so than my parents’ generation is.
So I see it more as a challenge for us to show young people more respect and humanity, and less of a patronising attitude.
The interview was originally published in Deník Referendum.