Published in: Tackling Insurgent Ideologies in a Pandemic World, Observer Research Foundation & Global Policy Journal (August 2020), Saltman – Chapter 14 (pages 80 – 85). Full publication accessible here: https://bit.ly/30o6dfu
At Facebook, we rely on a combination of technology, people and partnerships with experts to help keep our platforms safe. Even as governments, companies and non-profits have battled terrorist propaganda online, we’ve faced a complex question over the best way to tackle a global challenge that can proliferate in different ways, across different parts of the web.
Often analysts and observers ask us at Facebook why, with our vast databases and advanced technology, we can’t just block nefarious activity using technology alone. The truth is that we also need people to do this work. And to be truly effective in stopping the spread of terrorist content across the entire internet, we need to join forces with others. Ultimately, this is about finding the right balance between technology, human expertise and partnerships. Technology helps us manage the scale and speed of online content. Human expertise is needed for a nuanced understanding of how terrorism and violent extremism manifest around the world, and to track adversarial shifts. Partnerships allow us to see beyond trends on our own platform, better understand the interplay between online and offline activity, and build programmes with credible civil society organisations to support counterspeech at scale.
Proactive Efforts at Facebook: Technology and Human Expertise
Deploying Artificial Intelligence (AI) for counterterrorism is not as simple as flipping a switch. Depending on the technique, you need to carefully curate databases or have human beings code data to train a machine. A system designed to find content from one terrorist organisation may not work for another because of language and stylistic differences in their propaganda. However, the use of AI and other automation to stop the spread of terrorist content is showing promise. As discussed in our most recent Community Standards Enforcement Report, in just the first three months of 2020, we removed 6.3 million pieces of terrorist content, with a proactive detection rate of 99 percent (1).
This was primarily driven by improvements to our technology that help us detect and manually review potential violations, often before anyone sees the content. While these numbers are significant, there is no single tool or algorithm to stop terrorism and violent extremism online. Instead, we use a range of tools to address different aspects of how dangerous content manifests on our platforms. Some examples of the tooling and AI we use to proactively detect terrorist and violent extremist content include:
Image and video matching: When someone tries to upload a terrorist photo or video, our systems look for whether the image matches a known terrorism photo or video. This means that if we previously removed a propaganda video from ISIS, for instance, we can work to prevent other accounts from uploading the same video to our site. In many cases, this means that terrorist content intended for upload to Facebook simply never reaches the platform.
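The matching flow described above can be sketched in miniature. Production systems rely on perceptual hashing (Facebook has open-sourced algorithms such as PDQ for this purpose) so that re-encoded copies still match; the exact-hash sketch below, with an invented hash set and function name, only illustrates the lookup step, under those stated assumptions:

```python
import hashlib

# Hypothetical hash set of previously removed propaganda media.
# Real systems use perceptual hashes that survive re-encoding; exact
# SHA-256 matching here just illustrates the lookup flow.
KNOWN_TERROR_HASHES = {
    hashlib.sha256(b"previously-removed-propaganda-video").hexdigest(),
}

def is_known_terror_media(file_bytes: bytes) -> bool:
    """Return True when an upload matches a known hash, so the content
    can be stopped before it ever reaches the platform."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_TERROR_HASHES

# A byte-identical re-upload is caught at upload time:
blocked = is_known_terror_media(b"previously-removed-propaganda-video")
clean = is_known_terror_media(b"an-unrelated-holiday-photo")
```

Because the lookup is a set membership test, it stays fast even with hundreds of thousands of known hashes, which is what makes blocking at upload time feasible.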
Language understanding: We have used AI to understand text that might be advocating for terrorism. This analysis is specific to a given language and often to broad group types.
Removing terrorist clusters: We know from studies of terrorists that they tend to radicalise and operate in clusters (2)(3). This offline trend is reflected online as well. So, when we identify pages, groups, posts or profiles as supporting terrorism, we also use algorithms to “fan out” to try to identify related material that may also support terrorism. We use signals like whether an account is friends with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account.
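As a rough illustration of the “fan out” idea, the sketch below walks a hypothetical friendship graph and flags accounts whose number of friends disabled for terrorism crosses a review threshold. The graph, account names and threshold are all invented for illustration; the real signals are far richer than friend counts alone:

```python
# Hypothetical friendship graph and accounts already disabled for
# terrorism; names and the threshold are invented for illustration.
FRIENDS = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b"},
}
DISABLED = {"a", "c"}

def fan_out_candidates(threshold: int = 2) -> set:
    """Flag active accounts whose count of disabled friends meets the
    threshold; in practice such accounts are queued for human review,
    not automatically removed."""
    flagged = set()
    for account, friends in FRIENDS.items():
        if account in DISABLED:
            continue  # already actioned
        if len(friends & DISABLED) >= threshold:
            flagged.add(account)
    return flagged

flagged = fan_out_candidates()  # "b" has two disabled friends
```

The design point is that enforcement against one cluster member produces leads on the rest, mirroring the offline finding that radicalisation happens in clusters.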
Recidivism: We are now much faster at detecting new accounts created by repeat offenders (people who have already been blocked from Facebook for previous violations). Through this work, we have been able to dramatically reduce the time that terrorist recidivist accounts are on Facebook. This work is never finished because it is adversarial, and the terrorists are continuously evolving their methods too. We are constantly identifying new ways that terrorist actors try to circumvent our systems, and we update our tactics accordingly.
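One hedged way to picture recidivism detection: compare attributes of a newly created account against profiles previously disabled for terrorism, and escalate when too many match. The field names and scoring below are purely hypothetical, not Facebook’s actual signals:

```python
# Purely hypothetical attribute profiles of accounts previously
# disabled for terrorism; field names are illustrative only.
DISABLED_PROFILES = [
    {"device_id": "dev-123", "email_domain": "example.net", "name": "X"},
]

def recidivism_score(new_account: dict) -> int:
    """Count attributes a new account shares with any previously
    disabled account; higher scores mean faster review or blocking."""
    return max(
        sum(new_account.get(field) == value for field, value in profile.items())
        for profile in DISABLED_PROFILES
    )

# Same device and email domain as a disabled account, different name:
suspect = {"device_id": "dev-123", "email_domain": "example.net", "name": "Y"}
score = recidivism_score(suspect)
```

Because the comparison is adversarial, any such scoring has to be retuned continuously as repeat offenders change which attributes they vary between accounts.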
The use of AI against terrorism is increasingly bearing fruit, but ultimately it must be reinforced with manual review from trained experts. To that end, we utilise expertise from inside the company and from the outside, partnering with those who can help address extremism across the internet.
While some overtly violating content can be removed directly with automation, the technology and AI are also programmed to triage a large amount of content to our human review and subject matter expert teams. More tech solutions do not mean less human involvement; often it is the opposite. Human expertise is needed for a nuanced understanding of language, detecting new trends and reviewing content that is not obviously violating. Along with increased industry collaboration, we continue to deepen our bench of internal specialists, including linguists, subject matter experts, academics, former law enforcement personnel and former intelligence analysts. We now have 350 people working full time on our dangerous organisations teams. This includes full-time support for policy, engineering, operations, investigations, risk and response teams. This is supplemented by over 35,000 people in our safety and security teams around the world who assist with everything from translation to escalations. These teams have regional expertise in understanding the nuanced presence of terrorist groups around the world, and they also help us build stronger relationships with experts outside the company who can help us identify regional trends and adversarial shifts in how terror groups attempt to use the internet.
Despite Facebook’s increasing efforts, we know that countering terrorism and violent extremism effectively is ever evolving and cannot be done alone. The nature of the threat is both cross-platform and transnational. That is why partnerships with other technology companies and other sectors will always be key.
Global Internet Forum to Counter Terrorism
Our Counterterrorism and Dangerous Organization Policy team at Facebook works directly with the public policy, engineering and programmes teams to ensure that our approach is global and accounts for the huge variety of international trends. In the counterterrorism space, the most notable partnership has been built through the launch of the Global Internet Forum to Counter Terrorism (GIFCT) (4). In the summer of 2017, Facebook, Microsoft, Twitter and YouTube came together to form GIFCT. Since then, the organisation has grown to include a myriad of companies working together to disrupt terrorists’ and violent extremists’ abilities to promote themselves, share propaganda and exploit digital platforms to glorify real-world acts of violence.
Since its foundation, GIFCT companies have contributed over 300,000 unique hashes, or digital fingerprints, of known terrorist images and video propaganda to our shared industry database, so member companies can quickly identify and take action on potential terrorist content on their respective platforms. We have made progress in large part by working together as a collective of technology companies, but we have also partnered with experts in government, civil society and academia who share our goal. For example, by working with Tech Against Terrorism (5), a UN Counter-Terrorism Committee Executive Directorate-mandated NGO, GIFCT has brought over 140 tech companies, 40 NGOs and 15 government bodies together in workshops across the world to date. In 2019, we held four workshops—in the US, Jordan, India and the UK—to discuss and study the latest trends in terrorist and violent extremist activity online.
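The shared industry database can be pictured as a simple contribute-and-query service: one member contributes a hash of known terrorist media, and every other member can check uploads against it on its own platform. The class and method names below are illustrative assumptions, not GIFCT’s actual interface:

```python
# Illustrative sketch of a shared hash database; the class and method
# names are assumptions, not GIFCT's actual interface.
class SharedHashDatabase:
    def __init__(self) -> None:
        self._hashes: dict[str, str] = {}  # media hash -> contributor

    def contribute(self, media_hash: str, company: str) -> None:
        """A member company adds a digital fingerprint of known
        terrorist media for all members to use."""
        self._hashes.setdefault(media_hash, company)

    def is_known(self, media_hash: str) -> bool:
        """Any member can check an upload's hash before hosting it."""
        return media_hash in self._hashes

db = SharedHashDatabase()
db.contribute("abc123", "CompanyA")
found = db.is_known("abc123")   # another member now detects the same media
missing = db.is_known("zzz999")
```

Sharing hashes rather than the media itself is the key design choice: no member has to redistribute violating content in order for every member to be able to block it.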
Working with experts and collaborating in a multi-sector environment in India has been significant to GIFCT’s progress. In November 2019, GIFCT held its first workshop in India. The event brought together 85 leading experts from across India, Afghanistan, Sri Lanka and Bangladesh focussed on counterterrorism, counter-extremism and localised resiliency work. Recognising the importance of enabling research on terrorism issues in the South Asian region, the Observer Research Foundation was invited to participate in GIFCT’s Global Research Network on Terrorism and Technology in 2019 and continues as a member of the Global Network on Extremism and Technology (6)(7). ORF’s paper for GIFCT examines what governments and social media companies can do in the context of Jammu and Kashmir and the nexus between technology and terrorism (8).
GIFCT has also grown to respond to real world threats that have online impact. The abuse of social media to glorify the horrific terrorist attack on 15 March 2019 in Christchurch, New Zealand, demonstrated the need for greater collaboration to respond to mass violence to curb the spread of violent extremist content. In May 2019, Facebook and founding GIFCT companies signed the Christchurch Call to Action (9), whereby GIFCT has worked to implement a nine-point plan to prevent terrorist exploitation of the internet while respecting human rights and freedom of speech (10). As part of this plan, GIFCT developed the Content Incident Protocol to respond to emerging and active terrorist or violent extremist events, and assess for any potential online content produced and disseminated by those responsible for or aiding in the attack. Since the attack in Christchurch, GIFCT member companies have developed, refined and tested the protocol through workshops with Europol and the New Zealand government.
Given its expanding capacities, it was announced at the UN General Assembly on 23 September 2019 that GIFCT would transform into an independent NGO (11). In June, the first executive director of the NGO—Nicholas J. Rasmussen, former director of the US National Counterterrorism Center—was announced, along with a multi-sector, international Independent Advisory Committee (IAC). The IAC will serve as a governing body tasked with counselling on GIFCT priorities, assessing performance and providing strategic expertise. The IAC is made up of representatives from seven governments, two international organisations, and 12 members of civil society, including counterterrorism and countering violent extremism experts; digital, free expression and human rights advocates; academics and others.
A large focus is often on efforts that remove terrorist and violent extremist content. However, removing content alone will only tackle a symptom of radicalisation, not the root causes. In addition to having strong policies and enforcement, there is immense value in empowering community voices online through counter narratives, or counterspeech. Our online community uses our platform to raise moderate voices in response to extremist ones and it is our role as a tech company to upscale and optimise those voices, strategically countering hate speech and extremism.
Online extremism can only be tackled through strong partnerships among policymakers, civil society organisations, academia and corporations. Working closely with experts, we support counterspeech initiatives, including commissioning research on what makes counterspeech effective, training NGOs in best counterspeech practices, and partnering with other organisations to help amplify the voices of those on the ground.
In India, for instance, the Voice+ platform allows practitioners, experts and NGO leaders to share their experiences of countering violent extremism and of people building positive changes in the face of terrorism and insurgency (12). Nearly 100 civil society and grassroots organisations, peace and youth activists, and journalists participated in Voice+ Dialogue events around India. Additionally, Voice+ Counterspeech Labs were rolled out in five cities across India, equipping over 500 university students, policymakers and experts with the essential tools and resources to counter extremist narratives through photography, storytelling, humour and digital video. For three consecutive years, Facebook has also partnered with one of India’s leading publications, The Indian Express, to celebrate ‘Stories of Strength’ (13). Through this partnership, Facebook seeks to enable conversations on community resilience against terror and extremism.
In March 2020, Facebook launched the Resiliency Initiative across Asia Pacific and Southeast Asia. Despite restrictions on travel, we innovated to connect with grassroots organisations serving minority and marginalised communities, providing them with hands-on training to improve their social media outreach. In three months, we reached over 140 activists from 40 organisations across nine countries. Participants were given free workshops to learn tools and strategies for developing more creative content to build community resiliency, and were invited to submit their creative pieces for feedback and suggestions. We also continue to actively support UNDP’s Extreme Lives programme, which is now in its third year and discusses real-life experiences of extremism (14).
Terrorism and violent extremism are transnational, carry regional nuances, attempt to exploit both real-world and online spaces, are cross-platform and are constantly evolving. Efforts to combat terrorism and extremism, therefore, must also continually evolve in order to understand and meaningfully challenge adversarial threats. It is only when each of our sectors—technology, government, civil society, academia—recognises where we are best placed to work together and combine our expertise that we achieve true impact.
(1) “Dangerous Organizations”, Facebook Community Standards Enforcement Report, Facebook, https://transparency.facebook.com/community-standards-enforcement#dangerous-organizations.
(2) Johannes Baldauf, Julia Ebner and Jakob Guhl, “Hate Speech and Radicalisation Online: The OCCI Research Report”, ISD Global, June 2019, https://www.isdglobal.org/isd-publications/hate-speech-and-radicalisation-online-the-occi-research-report/.
(3) Erin Saltman, “How Young People Join Violent Extremist Groups – And How to Stop Them”, TED Talks, June 2016, https://www.ted.com/talks/erin_marie_saltman_how_young_people_join_violent_extremist_groups_and_how_to_stop_them.
(4) Global Internet Forum to Counter Terrorism, https://gifct.org/.
(5) Tech Against Terrorism, http://techagainstterrorism.org/.
(6) “Global Research Network on Terrorism and Technology (GRNTT)”, RUSI, https://rusi.org/projects/global-research-network-terrorism-and-technology.
(7) Global Network on Extremism and Technology (GNET), http://gnet-research.org/.
(8) Kabir Taneja and Kriti M Shah, “The Conflict in Jammu and Kashmir and the Convergence of Technology and Terrorism”, Global Research Network on Terrorism and Technology, August 2019, https://rusi.org/publication/other-publications/conflict-jammu-and-kashmir-and-convergence-technology-and-terrorism.
(9) “Christchurch Call to Eliminate Terrorist and Violent Extremist Content Online”, https://www.christchurchcall.com/.
(10) “Actions to Address the Abuse of Technology to Spread Terrorist and Violent Extremist Content”, Global Internet Forum to Counter Terrorism, May 15, 2019, https://gifct.org/press/actions-address-abuse-technology-spread-terrorist-and-violent-extremist-content/.
(11) “Next Steps for GIFCT”, Global Internet Forum to Counter Terrorism, September 23, 2019, https://gifct.org/press/next-steps-gifct/.
(12) “Voice+”, Counterspeech Initiatives at Facebook, https://counterspeech.fb.com/en/initiatives/voice-positive/.
(13) “Stories of Strength”, Indian Express, https://indianexpress.com/facebook-stories-of-strength-2019/.
(14) “Extreme Lives”, UNDP, http://www.asia-pacific.undp.org/content/extremelives/en/home.html.