There is hardly a more important topic for anyone who considers themselves an internet law practitioner to explore than Google's removal policies and its content moderation system.
How Google and other search engines decide what goes in, out, up or down on their platforms is a crucial element of any advice given to a client.
The content moderation policy of Google or Facebook will often determine how easily or quickly they will agree to delete harmful internet content. Although most internet service providers and search engine operators have a single content removal policy, Europeans are often able to receive preferential treatment, whether within or outside the official content moderation policy, which enables the internet provider to comply with European data protection laws and particularly with the so-called right to be forgotten provisions.
Last week, together with some of the leading U.S. internet lawyers and many respected academics, I attended a unique event at Santa Clara University Law School, California, where for the first time in the history of the internet, tech companies' legal leaders and internet policy experts came together for a day of panels and discussions on content moderation and removal policies.
Among the attendees were representatives from Facebook, Google, Reddit, Yelp, Glassdoor, Automattic, Pinterest and Wikipedia. I was given a rare insight into the policies social media companies have developed for moderating content. Some content moderation is done automatically, whilst a surprising amount is carried out manually.
I have written up some of my observations in an article for the UK media law blog, Inforrm's Blog.