Robots.txt, Disallow & Indexed Pages
-
Hi guys,
hope you're well.
I have a problem with my new website. I have 3 pages with the same content:
- http://example.examples.com/brand/brand1 (good page)
- http://example.examples.com/brand/brand1?show=false
- http://example.examples.com/brand/brand1?show=true
The good page has rel=canonical, and it is the only page that should appear in search results, but Google has indexed all 3 pages...
I don't know what to do now, but I'm considering 2 possibilities:
- Remove the filters (true, false), leave only the good page, and return a 404 for the other pages.
- Update robots.txt to disallow these parameters, and remove those URLs manually (a sketch of such a rule is below).
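For the second option, the robots.txt rule I have in mind would be something like this (a sketch using Google's documented wildcard syntax, with the show parameter from the URLs above):
```
User-agent: *
# Block any URL where "show" appears as the first or a subsequent query parameter
Disallow: /*?show=
Disallow: /*&show=
```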
Thank you so much!
-
In the end, I decided to do the following:
- Delete all the filtered pages from my site (I had the option, so it wasn't a problem)
- Remove each URL individually using GWT
It works!
-
Hi thekiller99! Did this get worked out? We'd love an update.
-
Hi,
Did you actually implement canonical tags on the duplicate pages, and do they point to the original piece?
-
Hi!
I'm not sure I understood how you implemented the canonical element on your pages, but it sounds like you only put the canonical tag on what you call the "good page".
The scenario should be like this:
1. You have 3 pages with similar/exact content.
2. Obviously you want to index only one of them, and in your case it is the one without the parameters (the "good page").
3. You need to go ahead and implement the canonical elements in the following way (an HTML sketch follows this list):
- page-1: http://example.examples.com/brand/brand1 (you don't have to, but you can use a self-referencing canonical if that makes it easier for you)
- page-2: http://example.examples.com/brand/brand1?show=false (canonical to page-1)
- page-3: http://example.examples.com/brand/brand1?show=true (canonical to page-1)
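In HTML, that means a link element in the head of each page, along these lines (a sketch; all three canonicals point at page-1):
```html
<!-- page-1: optional self-referencing canonical -->
<link rel="canonical" href="http://example.examples.com/brand/brand1" />

<!-- page-2 (?show=false) and page-3 (?show=true): canonical to page-1 -->
<link rel="canonical" href="http://example.examples.com/brand/brand1" />
```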
PS. Google best practice suggests that you should never use robots.txt to de-index a page from the search results. If you decide to remove certain pages completely from the search results, the best practice is to 404 them and use Google Search Console to signal to Google that these pages are no longer available. But if you implement the canonical element as described above, you will have no problems.
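To illustrate the 404 approach, a rule like the following would do it on nginx (an assumption for this sketch; the thread never says which web server is in use):
```nginx
server {
    # ... existing server configuration ...

    # Return 404 for any URL carrying the "show" query parameter
    if ($args ~ "(^|&)show=") {
        return 404;
    }
}
```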
Best
Yossi
Related Questions
-
How to index your website pages on Google in 2020?
Hey! Hopefully everyone is fine here. Here are some steps for getting all of your website's pages indexed on Google in 2020; I'm already implementing these same steps for my site, Boxes Maker. These are the most important ways to help Google find your pages: Add a sitemap (a minimal sketch is below). ... Make sure people know your site. ... Ensure full navigation on your site. ... Submit an indexing request for your homepage. ... Sites that use URL parameters rather than URLs or page names may be more difficult to crawl.
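For the sitemap step, a minimal sitemap.xml looks like this (a sketch with a placeholder URL):
```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want Google to discover -->
  <url>
    <loc>https://www.example.com/</loc>
  </url>
</urlset>
```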
Intermediate & Advanced SEO | fbowable
Robots.txt Disallowed Pages Still Indexed
Alright, I am pretty sure the answer is "Nothing more I can do here," but I just wanted to double-check. It relates to the robots.txt file and that pesky "A description for this result is not available because of this site's robots.txt" message. Typically people want the URL indexed and the normal meta description displayed, but I don't want the link there at all; I am purposefully trying to robots that stuff outta there (with a plain rule like the sketch below).
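The pesky message typically appears exactly because of a plain Disallow rule, which blocks crawling but does not remove an already-known URL from the index (hypothetical path in this sketch):
```
User-agent: *
# Blocks crawling of /private/ but does NOT de-index URLs Google already knows about
Disallow: /private/
```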
My question is: has anybody tried to get a page taken out of the index and had this happen, with the URL still there but the pesky robots.txt message in place of the meta description? Were you able to get the URL to no longer show up, or did you just live with it? Thanks folks, you are always great!
Intermediate & Advanced SEO | DRSearchEngOpt
Number of indexed pages dropped. No manual action though?
I have a client whose WordPress site was hacked. At that point there was no message from Google in Webmaster Tools, and the search results for their pages still looked normal. They paid SiteLock to fix the site. This was all about a month ago. Logging into Webmaster Tools now, there are still no messages from Google, nor anything on the manual actions page. Their organic traffic is essentially gone. Looking at the submitted sitemap, only 3 of their 121 submitted pages are indexed; before this, all of them were in the index. Looking at the index status report, I can see that the number of indexed pages dropped completely off the map. We are sure that the site is free of malware, and this client has done no fishy SEO practices. What can be done?
Intermediate & Advanced SEO | connectiveWeb
Meta robots nofollow on a page with paid links
Hi, I have a page containing paid links and I would like to add the nofollow attribute to these links, but for technical reasons I can only place a meta robots nofollow at the page level (a sketch of such a tag follows this question).
Is that enough to tell Google that the links on this page are paid, and to prevent Google from penalizing the sites that the page links to? Thanks!
Intermediate & Advanced SEO | Kung_fu_Panda
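For reference, the page-level directive described in the question above is a robots meta tag in the page's head:
```html
<!-- Asks search engines not to follow any of the links on this page -->
<meta name="robots" content="nofollow">
```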
Indexing Dynamic Pages
http://www.oreillyauto.com/site/c/search/Wiper+Blade/03300/C0047.oap?make=Honda&model=Accord&year=2005&vi=1430764 How is O'Reilly getting this page indexed? It shows up in organic results for [2005 honda accord windshield wiper size].
Intermediate & Advanced SEO | Kingof5
Recovering from robots.txt error
Hello, A client of mine is going through a bit of a crisis. A developer (at their end) added Disallow: / to the robots.txt file (the rule and its fix are sketched below). Luckily the SEOmoz crawl ran a couple of days after this happened and alerted me to the error. The robots.txt file was quickly updated, but the client has found that the vast majority of their rankings have gone. It took a further 5 days for GWMT to register that the robots.txt file had been updated, and since then we have "Fetched as Google" and "Submitted URL and linked pages" in GWMT. In GWMT it is still showing that the vast majority of pages are blocked in the "Blocked URLs" section, although the robots.txt file below it is now fine. I guess what I want to ask is: What else can we do to recover these rankings quickly? What time scales can we expect for recovery? More importantly, has anyone had any experience with this sort of situation, and is full recovery normal? Thanks in advance!
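For context, the offending rule and its fix look like this (reconstructed from the description above):
```
# Before (the accidental rule): blocks every crawler from the entire site
User-agent: *
Disallow: /

# After (the fix): an empty Disallow value blocks nothing
User-agent: *
Disallow:
```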
Intermediate & Advanced SEO | RikkiD22
NOINDEX listing pages: Page 2, Page 3... etc?
Would it be beneficial to NOINDEX category listing pages except for the first page? For example, this site: http://flyawaysimulation.com/downloads/101/fsx-missions/ has lots of pages such as Page 2, Page 3, Page 4, etc.: http://www.google.com/search?q=site%3Aflyawaysimulation.com+fsx+missions Would there be any SEO benefit of NOINDEX on these pages (the tag in question is sketched below)? Of course, FOLLOW is the default, so links would still be followed and juice applied. Your thoughts and suggestions are much appreciated.
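The tag being considered would look like this on page 2 onwards (a sketch; whether it is beneficial is exactly what is being asked):
```html
<!-- Keep this listing page out of the index, but still follow its links -->
<meta name="robots" content="noindex, follow">
```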
Intermediate & Advanced SEO | Peter264
Tool to calculate the number of pages in Google's index?
When working with a very large site, are there any tools that will help you calculate the number of pages in the Google index? I know you can use site:www.domain.com to see all the pages indexed for a particular URL. But what if you want to see the number of pages indexed for 100 different subdirectories (i.e. www.domain.com/a, www.domain.com/b)? Is there a tool to help automate the process of finding the number of pages from each subdirectory in Google's index? (The query form is shown below.)
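The per-subdirectory queries themselves are simple; the hard part is running 100 of them and tallying the reported counts:
```
site:www.domain.com/a
site:www.domain.com/b
```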
Intermediate & Advanced SEO | nicole.healthline