Technology and Business Model
Due to the business model and algorithm-driven technology of social media, we are confronted with an unprecedented dissemination of antisemitic hate. Because social media is accessible wherever there is an internet connection, and because antisemitic content appears in many modes and languages, it is impossible to know how much of it is generated and disseminated. Antisemitism on social media manifests in memes, GIFs, videos, vlogs, comments, and many other multimodal formats in which people and institutions are attacked for being Jewish, because of their actual or supposed affiliation with Judaism, or because of their affiliation with Jews. The generation and dissemination of antisemitism are made possible and reinforced by each social media platform’s features, such as liking, sharing, and commenting, and are often mixed with disinformation and other forms of prejudice, such as misogyny (cp. Hübscher et al., 2019).
Social media is also the place where Jews feel most directly confronted with antisemitism.2 In 2018, more than 10,000 Jews in 13 countries, including Germany, took part in the EU study “Experiences and Perceptions of Antisemitism. Second Survey on Discrimination and Hate Crimes against Jews in the EU.” The study, which collected everyday experiences and perceptions of antisemitism, shows that 89% of the Jews surveyed rated antisemitism on social media as the most problematic, ranking it ahead of antisemitism in public places, in the media, and in politics (FRA, 2018). Another report, published by the Community Security Trust, documents anti-Jewish attacks in the United Kingdom in 2019; it not only records an increase in antisemitic incidents in general but also shows that antisemitism is most commonly communicated on social media (CST, 2019, p. 35). Hate speech on social media can be a precursor to hateful violence in real life. According to a CBS News report, the shooter in the massacre at the Tree of Life Congregation in Pittsburgh on October 27, 2018, who killed 11 and wounded 6 others, had posted antisemitic comments against the Hebrew Immigrant Aid Society (HIAS) on the “free speech” platform Gab, utilized mostly by the extreme right. He accused HIAS of aiding immigrants “that kill our people” (Pegus, 2018).
With the help of cheaply available social bots and organized troll farms,3 social media can be used to target Jewish people and institutions on a large scale. During the 2016 presidential election campaign in the United States, 800 Jewish journalists were followed and harassed on Twitter in an organized troll attack with antisemitic content. The majority of the hate messages, which were signed with pro-Trump slogans, were aimed at ten Jewish journalists, who were targeted daily by around 1,600 trolls (Green, 2016). This example shows how antisemitism can be weaponized on social media for political campaigns, to polarize, and, above all, to intimidate. Additionally, the manifestos written by the perpetrators of terror attacks on Jews and Jewish institutions have circulated widely on social media (Amend, 2018).
Social media companies like to claim that they merely offer a platform for users to connect with one another, but all social media content is moderated by algorithms and edited, e.g., through removal by content moderators (Gillespie, 2018, p. 5 ff). Most social media users are unaware that what they see on their feeds is decided by algorithms that adapt content to the users’ online behavior4 (Lanier, 2018, p. 6; also cp. Nield, 2020). Furthermore, users subconsciously learn that outrageous content, such as Holocaust denial, creates buzz (attention) in the form of reactions (likes, dislikes, comments, and shares), which in turn is profitable (Lanier, 2018, p. 13). All content, including hateful and polarizing content such as antisemitism and Holocaust denial, creates profit for social media companies. As Roger McNamee, one of the early investors in Facebook and now one of its strongest critics, said in the PBS Frontline documentary The Facebook Dilemma:
polarization was the key to the model, this idea of appealing to people’s lower-level emotions, things like fear and anger, to create greater engagement, and, in the context of Facebook – more time on site, more sharing, and therefore more advertisement value.
(PBS, 2018)5
Social media businesses depend on user-generated posts for their revenue, which flows from two intertwined streams. First, users generate content, and content that creates a lot of buzz becomes more attractive for advertisers, which in turn means more advertising revenue for the platform. Second, revenue is derived from users’ personal data, which social media companies collect and monetize to target those advertisements. Once collected, it is virtually impossible to prevent that data from being abused by bad actors, the police, or the government, as data protection experts have warned for years. Two recent examples show minorities being specifically targeted: the Russian state surveils students’ social media accounts for LGBTQ activity (Baume, 2020), and in Pittsburgh, police surveilled the social media accounts of Black Lives Matter activists (Riehl, 2020). There is no reason to believe Jews could not also become targets.
Social media algorithms are adaptive and favor content that attracts a lot of engagement. The buzz generated by engagement with negative, harmful, or hateful content creates what Chamath Palihapitiya, Facebook’s former vice president of user growth, called a “dopamine-driven feedback-loop,” which he fears has a massive negative impact on society (Brown, 2017). When antisemitic content is liked, shared, and commented on, for example, these reactions constitute social validation and reward for the user who posted it; social media thus encourages the creation and dissemination of hateful content, such as antisemitism. A recent study shows how Facebook, with the help of a “stimulus-response loop,” prioritizes incendiary content, such as hateful speech and visuals, which subsequently becomes normalized. The study further shows how recommendation algorithms, a feature of many social media platforms, lead users toward consuming ever more extreme content (Munn, 2020). Through the algorithmic selection, elevation, and recommendation of highly engaging (and often incendiary) content, social media companies thus shape and manipulate users’ perception of what is acceptable.
Social media companies also provide digital infrastructure to extremists. Despite deplatforming efforts, terror organizations continue to use Twitter and YouTube extensively and more or less openly. Recently, accounts of the so-called Islamic State have been found on the relatively new social media platform TikTok, which is mostly used by children and young adults (Wells, 2019). The extreme right has utilized several platforms and their technology to recruit followers and disseminate hateful content so successfully that social scientist Julia Ebner has called them “radicalization machines” (2020). Efforts to tackle hate speech in general and antisemitism in particular have been irresponsibly ineffective, and the resources dedicated to these efforts have been negligible in light of the astronomical profits some platforms have accumulated. As incidents suggesting a connection between hateful content on the platforms and offline violence continue to mount, the task of removing hateful content is left to algorithmic hate speech detection and the problematic concept of content moderators (see below).