YouTube vows ban on hate videos
YouTube said it will ban videos that use racist, white supremacist ideology to justify violence and discrimination.
(CN) – As part of an ongoing effort to remove extremist and violent content from its site, YouTube said Wednesday it will ban videos that use racist, white supremacist ideology to justify violence and discrimination.
In a blog post Wednesday, the video-sharing site said its new hate speech policy will explicitly ban “inherently discriminatory” content such as videos that promote or glorify Nazi ideology and other extremist views.
The policy includes banning users whose videos promote racist, supremacist views to justify discrimination and harassment of individuals based on their race, caste, religion, gender or sexual orientation, among other factors.
“The openness of YouTube’s platform has helped creativity and access to information thrive,” the company’s blog post said. “It’s our responsibility to protect that, and prevent our platform from being used to incite hatred, harassment, discrimination and violence.”
The California-based company also said it will ban content that denies “well-documented” events such as the Holocaust or the 2012 massacre at Sandy Hook Elementary School in Newtown, Connecticut, that left 20 six- and seven-year-olds dead along with six adult staff members.
The company already prohibits advertisements from running on videos that promote hateful views.
YouTube’s announcement comes weeks after tech giants Facebook, Google and Twitter pledged at a Paris meeting to adopt a set of guidelines to ensure their platforms are not used to spread hateful messages or broadcast racist attacks.
The companies were pushed to adopt the new guidelines – dubbed the “Christchurch Call” – after a March 15 mass shooting at a mosque in New Zealand that killed 51 people and was live-streamed almost entirely on Facebook.
Whether efforts by YouTube or the other tech giants will be successful remains to be seen, the Southern Poverty Law Center noted in a statement.
“But as with other outlets before it, YouTube’s decision to remove hateful content depends on its ability to enact and enforce policies and procedures that will prevent this content from becoming a global organizing tool for the radical right,” the group said. “It has taken Silicon Valley years to acknowledge its role in allowing this toxic environment to exist online.
“Whether this rabbit hole of dangerous rhetoric is due to a flaw in the algorithm or that the algorithm is too easily manipulated, the end result is the same. Tech companies must proactively tackle the problem of hateful content that is easily found on their platforms before it leads to more hate-inspired violence.”
Mounting calls for tighter regulations spurred U.S. lawmakers to announce an antitrust probe of tech companies this week, with some officials calling for Facebook – which owns social media site Instagram and global communications app WhatsApp – to be broken up.
U.S. Sen. Dianne Feinstein, D-California, said in a statement that the House and Senate Judiciary committees should not focus solely on a few companies but instead examine regulations across the tech and social media industry.
“For tech companies, there’s an essential need to balance these innovations with the challenges they bring,” Feinstein said in the statement. “Congress must examine how consumer data is treated, ensure that consumers have control of their privacy and fight the spread of criminal activity and hate speech.”
YouTube – a subsidiary of Google – said in the blog post that while its new policy banning racist, white supremacist content will go into effect today, it may take “several months” for the site to fully implement the changes.
Some of the banned videos may be made available to researchers studying hate-fueled content online, the company’s blog post said, and other videos will remain posted “because they discuss topics like pending legislation, aim to condemn or expose hate, or provide analysis of current events.”
YouTube said it is also expanding its work on curbing videos that circulate false claims and fake news, “such as videos promoting a phony miracle cure for a serious illness or claiming the earth is flat.”
The company – which claims a 50 percent drop in views of this type of content – said that when users watch videos that come close to violating YouTube’s content policies, videos “from authoritative sources” such as news media groups will be promoted instead.
The company’s policy update comes a day after it decided against banning right-wing YouTube host Steven Crowder for repeatedly launching homophobic and racist attacks against Vox journalist Carlos Maza, including calling him “Mr. Lispy queer from Vox” and “an angry little queer.”
YouTube used Twitter to respond to Maza’s complaint, saying Crowder’s videos didn’t violate its speech policies.
“Opinions can be deeply offensive, but if they don’t violate our policies, they’ll remain on our site,” YouTube tweeted. “Even if a video remains on our site, it doesn’t mean we endorse/support that viewpoint.”
Maza responded on Twitter by calling on fellow LGBTQ creators on the platform to challenge the company’s policies.
“YouTube has decided not to punish Crowder, after he spent two years harassing me for being gay and Latino,” Maza tweeted. “That’s an absolutely batshit policy that gives bigots free license.”