Have you ever wanted to prevent Google from indexing a particular URL on your website and displaying it in their search engine results pages (SERPs)? If you manage websites long enough, a day will likely come when you need to know how to do this.
The three methods most commonly used to prevent the indexing of a URL by Google are as follows:
Using the rel="nofollow" attribute on all anchor elements used to link to the page, to prevent the links from being followed by the crawler.
Using a disallow directive in the site's robots.txt file to prevent the page from being crawled and indexed.
Using the meta robots tag with the content="noindex" attribute to prevent the page from being indexed.
While the differences among the three approaches appear subtle at first glance, their effectiveness can vary dramatically depending on which method you choose.
Using rel="nofollow" to prevent Google indexing
Many inexperienced site owners attempt to prevent Google from indexing a particular URL by using the rel="nofollow" attribute on HTML anchor elements. They add the attribute to every anchor element on their site used to link to that URL.
Including a rel="nofollow" attribute on a link prevents Google's crawler from following the link, which, in turn, prevents it from discovering, crawling, and indexing the target page. While this method may work as a short-term fix, it is not a viable long-term solution.
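For illustration, a nofollow link looks like this (the URL and anchor text here are hypothetical):

    <!-- Hypothetical internal link; rel="nofollow" asks crawlers not to follow it -->
    <a href="https://www.example.com/private-page.html" rel="nofollow">Private page</a>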
The flaw with this approach is that it assumes all inbound links to the URL will include a rel="nofollow" attribute. The webmaster, however, has no way to prevent other websites from linking to the URL with a followed link. So the odds that the URL will eventually be crawled and indexed using this method are quite high.
Using robots.txt to prevent Google indexing
Another common method used to prevent the indexing of a URL by Google is the robots.txt file. A disallow directive can be added to the robots.txt file for the URL in question. Google's crawler will honor the directive, which will prevent the page from being crawled and indexed. In some cases, however, the URL can still appear in the SERPs.
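As a sketch, the robots.txt entry for a hypothetical page would look like this:

    # Ask all crawlers not to fetch this (hypothetical) page
    User-agent: *
    Disallow: /private-page.html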
Sometimes Google will display a URL in their SERPs even though they have never indexed the contents of that page. If enough websites link to the URL, Google can often infer the topic of the page from the link text of those inbound links. As a result, they will show the URL in the SERPs for related searches. So while a disallow directive in the robots.txt file will prevent Google from crawling and indexing a URL, it does not guarantee that the URL will never appear in the SERPs.
Using the meta robots tag to prevent Google indexing
If you want to prevent Google from indexing a URL while also preventing that URL from being displayed in the SERPs, then the most effective approach is to use a meta robots tag with a content="noindex" attribute within the head element of the web page. Of course, for Google to actually see this meta robots tag, they need to first be able to discover and crawl the page, so do not block the URL with robots.txt. When Google crawls the page and discovers the meta robots noindex tag, they will flag the URL so that it will never be shown in the SERPs. This is the most effective way to prevent Google from indexing a URL and displaying it in their search results.
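A minimal sketch of the head section of such a page (the title is hypothetical):

    <head>
      <title>Private page</title>
      <!-- Let crawlers fetch the page, but ask them to keep it out of the index -->
      <meta name="robots" content="noindex">
    </head>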