YouTube said Wednesday that the platform will begin banning videos promoting Nazi ideology, as well as videos denying "well-documented violent events" such as the Holocaust or the Sandy Hook massacre.
"Today, we're taking another step in our hate speech policy by specifically prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status," the company wrote in a blog post.
Farshad Shadloo, YouTube's global product policy communications lead, added that the company would begin enforcing the policy immediately but "it will take time for our systems to fully ramp up."
The video platform has been under pressure from experts and watchdog groups for years to address its role in hosting and spreading hateful content, as researchers showed that its algorithm tended to pull users from mainstream videos toward extremist political content through its autoplay feature and recommendation system.
YouTube said its new policy would aim to "prevent our platform from being used to incite hatred, harassment, discrimination and violence."
YouTube will also begin to change the videos it recommends alongside "borderline content" that does not violate the company's policies.
"In January, we piloted an update of our systems in the U.S. to limit recommendations of borderline content and harmful misinformation, such as videos promoting a phony miracle cure for a serious illness, or claiming the earth is flat," the company wrote. "We're looking to bring this updated system to more countries by the end of 2019."
The video-sharing platform has been criticized by researchers who argued that the company wasn't simply hosting the videos but also leading users down "rabbit holes" toward increasingly violent and extreme viewpoints while shutting out dissent.
Jonas Kaiser, an assistant researcher at Harvard University, and Adrian Rauchfleisch, an assistant professor at the National Taiwan University, outlined the phenomenon last year in a research paper titled "Unite the Right? How YouTube's Recommendation Algorithm Connects The U.S. Far-Right."
"In much the same way that the 'Unite the Right' rally in Charlottesville, where a white supremacist injured many and killed Heather Heyer by driving a car into a crowd of counterprotesters, sought to bring together many far-right influencers, so too does YouTube's recommendation algorithm bring together far-right channels," Kaiser and Rauchfleisch wrote.
YouTube said it would also work to hone its algorithm to stop directing users toward silos of extremist content.
"If a user is watching a video that comes close to violating our policies, our systems may include more videos from authoritative sources (like top news channels) in the 'watch next' panel," Shadloo wrote.
YouTube's changes add to similar moves by other platforms that have begun to rewrite policies around hate speech. In March, Facebook announced a ban on "praise, support and representation of white nationalism and white separatism on Facebook and Instagram."
Twitter does not have a similar policy against white nationalism, but it does ban targeted harassment based on race.
YouTube's announcement came just hours after the company said that it would not take action against far-right YouTube personality Steven Crowder, who has published numerous videos over several years targeting Vox reporter Carlos Maza with anti-gay and anti-Mexican slurs.
Despite rules on the service that prohibit "racial, ethnic, religious, or other slurs where the primary purpose is to promote hatred" and "stereotypes that incite or promote hatred," YouTube declined to take action on the videos created by Crowder, whose videos have received more than 833 million total views on YouTube.
"Our teams spent the last few days conducting an in-depth review of the videos flagged to us, and while we found language that was clearly hurtful, the videos as posted don't violate our policies," a YouTube spokesperson tweeted to Maza on Tuesday night.