How do I remove my site's pages from search results?
-
I have tested hundreds of pages to see whether Google will properly crawl, index, and cache them. Now I want these pages removed from Google search, except for the homepage. What should the rule in robots.txt be?
I use this rule, but I am not sure whether Google will remove the hundreds of test pages:
User-agent: *
Disallow: /
Allow: /$
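As far as I can tell, Google picks the most specific (longest) matching rule and lets an allow win a tie, so Disallow: / plus Allow: /$ should block every path except the bare homepage URL. Here is a rough, stdlib-only Python sketch of that matching logic that I used to sanity-check it (the sample paths are made up, and this is not Google's actual parser):

import re

# Sketch of Google-style robots.txt matching for a single user agent.
# Supports the "*" wildcard and the "$" end-of-URL anchor; the longest
# (most specific) matching rule wins, and an allow wins a length tie.
RULES = [
    ("disallow", "/"),
    ("allow", "/$"),
]

def to_regex(pattern):
    # Turn a robots.txt path pattern into an anchored-prefix regex.
    anchored = pattern.endswith("$")
    body = pattern[:-1] if anchored else pattern
    regex = "^" + re.escape(body).replace(r"\*", ".*")
    return regex + "$" if anchored else regex

def is_allowed(path):
    hits = [(len(p), kind) for kind, p in RULES if re.match(to_regex(p), path)]
    if not hits:
        return True  # no matching rule means crawling is allowed
    longest = max(length for length, _ in hits)
    tied = {kind for length, kind in hits if length == longest}
    return "allow" in tied  # least restrictive rule wins a tie

for path in ["/", "/test-page-001.html", "/category/test-page-002.htm"]:
    print(path, "->", "allowed" if is_allowed(path) else "blocked")
-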
Why not just 404/410 those pages?
-
Hi Matt! I've already tried your suggestion. I'll let you know what the result is. Thanks a lot, man!
-
Why don't you try adding a meta robots tag with "NOINDEX" on those pages?
I would also do a URL removal in WMT.
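If you go the noindex route, it is worth confirming that the pages actually serve the directive before requesting removal. A rough Python sketch (standard library only; the URLs below are placeholders) that checks both the X-Robots-Tag response header and the meta robots tag:

import re
import urllib.request

# Placeholder URLs - swap in the real test pages.
PAGES = [
    "https://www.example.com/test-page-001.html",
    "https://www.example.com/test-page-002.html",
]

# Crude check for <meta name="robots" ... content="...noindex...">.
META_NOINDEX = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]+content=["\'][^"\']*noindex',
    re.IGNORECASE,
)

def has_noindex(url):
    # Fetch the page and look for a noindex directive in either the
    # X-Robots-Tag response header or a meta robots tag in the HTML.
    with urllib.request.urlopen(url, timeout=10) as resp:
        header = (resp.headers.get("X-Robots-Tag") or "").lower()
        html = resp.read().decode("utf-8", errors="ignore")
    return "noindex" in header or bool(META_NOINDEX.search(html))

for url in PAGES:
    try:
        status = "noindex found" if has_noindex(url) else "no noindex directive"
    except OSError as err:
        status = "fetch failed: %s" % err
    print(url, "->", status)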
-
These are just test pages, and I need them to be private and not visible in Google once testing is done. I understand that there will be a drop in SERP rankings.
-
I would do
User-agent: *
Disallow: /?
Allow: /
But test it first in WMT to be safe. However, you must be sure that this is the route you want to go down. Robots.txt will prevent all of those pages from being crawled, which means that none of their content will count. Any links to these pages may also be devalued. The result is a potential drop in SERPs.
Why don't you want them appearing? That way we may be able to find an alternative solution.
-
This is basically a duplicate of your other thread where I gave you that code. Yes, it should block the other pages. Put that in, fetch it in WMT, and you should be right.
You can also test it in WMT before you implement it. I tried it on my end and it works.