Recently, after a major update, Google removed around 1.5 billion URLs from its search engine to make it a more reliable source of information.
In a blog post, Google stated:
“Google may temporarily or permanently remove sites from its index and search results if it believes it is obligated to do so by law, if the sites do not meet Google’s quality guidelines, or for other reasons, such as if the sites detract from users’ ability to locate relevant information.
We cannot comment on the individual reasons a page may be removed. However, certain actions such as cloaking, writing text in such a way that it can be seen by search engines but not by users, or setting up pages/links with the sole purpose of fooling search engines may result in removal from our index.”
Google won’t take any action on its own until the owner of the copyrighted content officially requests that Google remove it. Once the request is sent, Google reviews it and takes action accordingly. The URL is removed from the search results if the website in question is violating Google’s guidelines.
The owners of the blocked websites can still access their sites: they can log in to Search Console, add their website URL, and verify site ownership, after which the site will be processed again by Google.
The number of requests submitted has reached almost 20 billion, leading to the removal of 1.75 billion URLs across 888 thousand domains.