Verifying #BigTech promises

The pattern in Silicon Valley is all too familiar: move fast, break things, apologize, promise to do better, and repeat. After nearly two decades of this, we are exhausted, writes Dr. Hany Farid.
We are exhausted by spectacular failures to protect users’ privacy, by platforms allowing advertisers to illegally target housing and job ads based on race and age, by the promotion of fake news designed to incite violence and disrupt elections, and by terror groups being allowed to continue radicalizing and recruiting online. We are exhausted by the apologies and the promises to do better.
Policymakers and advertisers have been increasingly vocal in their frustration with social media companies’ failure to aggressively address everything from radicalizing, terror-related material to sexually explicit content targeted at children. In response, YouTube announced just last week that it had removed over 8 million videos in the previous three months, the majority of which were spam or adult content. A total of 6.7 million of the deleted videos were automatically flagged by machines, and 76% of those were removed before receiving a single view.
YouTube also reports that more than half of the violent-extremism videos it removed had fewer than 10 views. We are not told, however, how many such videos were taken down in total, and of course we don’t know how many remain online, having been missed by YouTube’s algorithms or human moderators.
On the surface, these numbers sound promising, but a closer look paints a different picture. Over a seven-week period between March 8 and April 26 of this year, we used our own technology to search YouTube for just 256 previously identified ISIS-generated terror-related videos. Here is what we learned (a sketch of how such tracking data can be aggregated follows the list):
- No fewer than 942 ISIS videos were uploaded to YouTube.
- These 942 videos garnered a total of 134,644 views.
- Although 74% of the videos remained on YouTube for less than two hours, within this window they garnered an average of 12 views each, with a maximum view count of 252.
- For videos that were available for more than two hours, the average number of views over a 48-hour window was 515, with a maximum view count of 9,589 (we stopped tracking views after 48 hours).
- These 942 videos were uploaded by 157 different YouTube accounts, some of which uploaded as many as 70 videos before the channel was either removed by YouTube administrators or deleted by the user.
- Approximately 91% of the videos that we found were uploaded more than once and stayed online at least long enough for us to find them.
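To make the arithmetic behind these figures concrete, here is a minimal sketch of how tracking records like ours can be aggregated. The record format and sample values below are hypothetical illustrations, not our actual data or tooling.

```python
from collections import Counter
from dataclasses import dataclass

# One record per observed upload of a previously identified video.
# Field names and sample values are hypothetical, not the Counter
# Extremism Project's actual data format.
@dataclass
class Upload:
    video_id: str        # which known ISIS video this upload matched
    account: str         # YouTube account that posted it
    hours_online: float  # time from upload to removal
    views: int           # views accumulated while online

uploads = [
    Upload("vid-001", "acct-A", 1.5, 12),
    Upload("vid-001", "acct-B", 29.0, 405),
    Upload("vid-002", "acct-A", 0.8, 3),
    Upload("vid-003", "acct-C", 39.0, 113),
]

# Total views across all observed uploads.
total_views = sum(u.views for u in uploads)

# Share of uploads removed within two hours, and their average view count.
fast = [u for u in uploads if u.hours_online < 2]
fast_share = len(fast) / len(uploads)
fast_avg_views = sum(u.views for u in fast) / len(fast)

# Share of distinct videos that appeared more than once (re-uploads).
per_video = Counter(u.video_id for u in uploads)
reupload_share = sum(1 for n in per_video.values() if n > 1) / len(per_video)

print(f"total views: {total_views}")
print(f"removed in under 2h: {fast_share:.0%}, averaging {fast_avg_views:.0f} views")
print(f"videos seen more than once: {reupload_share:.0%}")
```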
To give just one specific example: a video titled “Hunt Them O’ Monotheist”, originally released by ISIS’s Somalian affiliate on December 25, 2017, encourages firearm and vehicular attacks in Western countries, including in Paris and London. This video was uploaded to YouTube on March 10, 2018, where it was available for 29 hours and received 405 views before it was removed. It was re-uploaded on March 11 by a different account, was available for 39 hours, and received 113 views before it was again removed. It was then re-uploaded and deleted at least six more times over the seven-week window, collectively receiving another 875 views.
A few patterns emerged from our analysis. Despite YouTube’s claims that it is using automatic tools to find and remove terror-related content, we are seeing the same content being repeatedly uploaded, often from the same account. This content stays online anywhere from hours to days and is viewed hundreds to thousands of times. Once removed, the same content is simply re-uploaded, meaning that it is effectively available all the time. If a small not-for-profit organization like the Counter Extremism Project, with limited financial and computing resources, can find this content on these platforms, we see no reason why the platforms themselves cannot do the same.
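The core technique needed to stop re-uploads is well understood: compute a compact signature for each previously identified video and compare every new upload against that list. The sketch below illustrates the workflow; it uses an exact cryptographic hash as a stand-in, whereas a production system would use a robust perceptual hash that survives re-encoding and cropping. All names and data here are illustrative assumptions, not YouTube’s or our actual implementation.

```python
import hashlib

def signature(video_bytes: bytes) -> str:
    """Compute a signature for a video. An exact cryptographic hash is a
    stand-in here; real systems use perceptual hashes that survive
    re-encoding, cropping, and other small edits."""
    return hashlib.sha256(video_bytes).hexdigest()

# Signatures of previously identified videos (placeholder bytes stand in
# for the actual video files).
known_video = b"...bytes of a previously identified ISIS video..."
KNOWN_SIGNATURES = {signature(known_video)}

def should_block(upload_bytes: bytes) -> bool:
    """Flag an upload whose signature matches a known video, regardless of
    which account uploads it or how many times it reappears."""
    return signature(upload_bytes) in KNOWN_SIGNATURES

print(should_block(known_video))        # True: a byte-identical re-upload
print(should_block(b"unrelated clip"))  # False: no signature match
```

The design choice that matters is the signature itself: an exact hash only catches byte-identical copies, which is why robust hashing is essential in practice, but the matching workflow is the same either way.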
The bottom line is that too much terror-related content is finding its way online, it is staying online for too long, and it continues to reappear even if it is eventually taken down.
While we have seen some progress compared to a few years ago, there are still significant gaps in the development and deployment of technology to quickly and accurately find and remove terror-related content. In addition to improving existing technologies, the industry must do more to develop and deploy new technology to contend with an ever-changing landscape, and this must go beyond vague promises that AI will, five to ten years from now, solve the problems we are facing today.
Policymakers, advertisers, and the public must continue to pressure technology companies to take more responsibility for the direct and measurable harm coming from the abuses on their platforms. If these technology companies don’t respond more effectively, then policymakers should consider fines (as the Germans have) and advertisers should consider withholding advertising dollars (as Unilever has threatened to do).
Dr. Hany Farid is the Albert Bradley 1915 Third Century Professor and Chair of Computer Science at Dartmouth College and a senior adviser to the Counter Extremism Project.