Are HTML Sitemaps Still Effective With "Noindex, Follow"?
-
A site we're working on has hundreds of thousands of inventory pages that are effectively "orphaned": to reach them, you have to apply a lot of facets on the search results page. They do appear in our XML sitemaps, but I'd still consider them orphan pages.
To assist with crawling and indexation, we'd like to create HTML sitemaps to link to these pages. Due to the nature (and categorization) of these products, this would mean we'll be creating thousands of individual HTML sitemap pages, which we're hesitant to put into the index.
Would the sitemaps still be effective if we add a noindex, follow meta tag? Does this indicate lower quality content in some way, or will it make no difference in how search engines will handle the links therein?
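For reference, a minimal sketch of what one of these HTML sitemap pages could look like with the directive in place; the page title, URLs and anchor text below are placeholders rather than anything from the actual site:

<!DOCTYPE html>
<html>
<head>
  <title>Inventory sitemap, page 1</title>
  <!-- Keep this page out of the index, but let crawlers follow its links -->
  <meta name="robots" content="noindex, follow">
</head>
<body>
  <ul>
    <li><a href="https://www.example.com/inventory/item-0001">Item 0001</a></li>
    <li><a href="https://www.example.com/inventory/item-0002">Item 0002</a></li>
    <!-- ...and so on for the rest of the links on this sitemap page -->
  </ul>
</body>
</html>

The same directive can also be sent as an X-Robots-Tag HTTP response header if editing the page templates is awkward.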
-
Additional research turned up a Matt Cutts video that addresses this question head-on.
Related Questions
-
HELP: Why do I have a 61% score for "% of total links, external + follow"?
Firstly, I understand what this percentage is: it's the share of my external links that are "follow" as opposed to "nofollow". Four questions: (1) This is definitely not accurate! I have loads of nofollow links. (2) Does anyone have ideas or techniques to add more healthy nofollow links? (3) Am I completely misunderstanding this? (4) Will this high score negatively affect my ranking? I could definitely use some help. Thanks so much in advance. I don't think my website address should help, but if you need it for context, it's estatediamondjewely.com.
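For anyone reading along, the follow/nofollow distinction comes down to the rel attribute on each link; a minimal sketch with a placeholder domain:

<!-- A followed external link (the default when no nofollow value is present) -->
<a href="https://www.example.com/">Example</a>

<!-- A nofollowed external link -->
<a href="https://www.example.com/" rel="nofollow">Example</a>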
Intermediate & Advanced SEO | SamCitron
-
Best Way To Go About Fixing "HTML Improvements"
So I have a site, and I was creating dynamic pages for a while; what happened was that some of them accidentally ended up with lots of similar meta tags and titles. I then changed up my site but left those duplicate tags in place for a while, not knowing what had happened. Recently I began my SEO campaign again and noticed these errors were still there, so I did the following: (1) removed the pages, (2) removed the directories that held those dynamic pages with the removal tool in Google Webmaster Tools, and (3) blocked Google from crawling those pages with robots.txt. I have verified that the robots.txt works and the pages are no longer in Google search; however, they still show up in the HTML Improvements section after a week (it has updated a few times). So I decided to remove the robots.txt block and add 301 redirects instead. Does anyone have experience with this, and am I going about it the right way? Any additional info is greatly appreciated, thanks.
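For context, the robots.txt block described above would have looked something like the sketch below (the /dynamic/ path is a placeholder for the real directory). Worth keeping in mind that it only stops crawling; it's a separate mechanism from the removal tool and from 301 redirects:

# Stop compliant crawlers from fetching anything under the old dynamic section
User-agent: *
Disallow: /dynamic/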
Intermediate & Advanced SEO | tarafaraz
-
Disabling a website - What to do with "link juice"?
Hi, I built a website for a client a long time ago and for a number of reasons I have decided to shut the website down, non-payment being one of them! My question to all you SEO gurus out there is: what should I do about 301 redirects? The site is an e-commerce website, while my personal website simply advertises my services and portfolio. If I 301 redirect all the traffic from the customer's website, will there be any issue with Google (or any other search engine) seeing that my website is receiving traffic for search phrases such as "coffee mugs", i.e. absolutely no relevance at all to my website's content? My worry is that my site could be penalised for a flurry of thousands of redirected links. Also, if I redirect everything to my site and the customer then decides to pay the bill, I will remove the redirects; I guess this will have a massive impact on the rankings of the site? Thanks for reading and any advice.
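Purely to illustrate the mechanics being described (not a recommendation either way), a site-wide 301 from the client's domain to a single page could be a one-liner in an Apache .htaccess file; both domains here are placeholders:

# Permanently redirect every URL on the old shop to the portfolio homepage
Redirect 301 / https://www.my-portfolio.example/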
Intermediate & Advanced SEO | yousayjump
-
Sitemap Disappearance?
Greetings Mozzers, I was doing my standard run through Webmaster Tools and discovered that up to 30% of my sitemaps no longer exist. Has anyone else experienced this recent loss of sitemaps, or can anyone suggest reasons why it may have happened? I'm re-submitting all sitemaps now but am concerned this might become an ongoing issue...
Intermediate & Advanced SEO | RobertChapman
-
Sitemaps and subdomains
At the beginning of our life-cycle we were just a WordPress blog. However, we just launched a product built in Ruby. Because we did not have time to put together an open-source Ruby CMS platform, we left the blog in WordPress and the app in Rails. Thus our web app is at http://www.thesquarefoot.com and our blog is at http://blog.thesquarefoot.com. We set up redirects such that if a URL does not exist at www.thesquarefoot.com it automatically forwards to blog.thesquarefoot.com. What is the best way to handle sitemaps? Create one for blog.thesquarefoot.com and one for http://www.thesquarefoot.com and submit them separately? We had landing pages like http://www.thesquarefoot.com/houston in WordPress, which ranked well for "Find Houston commercial real estate"; these have been replaced with a landing page in Ruby, so that URL works well. The URL that was ranking well for this term is now at blog.thesquarefoot.com/houston/. Should I delete this page? I am worried that if I do, we will lose rankings, since that was the actual page ranking, not the new one. Until we are able to create an open-source Ruby CMS and move everything over to a sub-directory so everything lives in one place, I would love any advice on how to mitigate damage and not confuse Google. Thanks
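One common arrangement, sketched here with assumed sitemap file names: give each hostname its own sitemap, reference it from that host's own robots.txt, and submit the two hosts separately in Webmaster Tools, since a subdomain is treated as its own property there.

# In http://www.thesquarefoot.com/robots.txt
Sitemap: http://www.thesquarefoot.com/sitemap.xml

# In http://blog.thesquarefoot.com/robots.txt
Sitemap: http://blog.thesquarefoot.com/sitemap.xml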
Intermediate & Advanced SEO | TheSquareFoot
-
"nocontent" class use for Google Custom Search: SEO Ramifications?
Hi all, I have a client that uses the Google Custom Search tool, which is crawling, indexing and returning millions of irrelevant results for keywords that appear on every page of the site. The IT/web dev team is considering adding a class attribute to prohibit Google Custom Search from indexing boilerplate content regions. Here's the link to Google's Custom Search help page: http://support.google.com/customsearch/bin/answer.py?hl=en&answer=2364585 It says: "...If your pages have regions containing boilerplate content that's not relevant to the main content of the page, you can identify it using the nocontent class attribute. When Google Custom Search sees this tag, we'll ignore any keywords it contains and won't take them into account when calculating ranking for your Custom Search engine. (We'll still follow and crawl any links contained in the text marked nocontent.) To use the nocontent class attribute, include the boilerplate content in a tag (for example, span or div) like this:" (the markup from the help page is reproduced below). Google Custom Search also notes: "Using nocontent won't impact your site's performance in Google Web Search, or our crawling of your site, in any way. We'll continue to follow any links in tagged content; we just won't use keywords to calculate ranking for your Custom Search engine." I just want to confirm whether anyone can foresee any SEO implications that the use of this div could create. Does anyone have experience with this? Thank you!
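The markup example the help page refers to looks like this; the wrapper element and the placeholder boilerplate text are illustrative, but the class name itself (nocontent) is the one the documentation describes:

<!-- Boilerplate region that Google Custom Search should ignore when ranking -->
<div class="nocontent">
  Footer links, legal notices and other text that repeats on every page.
</div>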
Intermediate & Advanced SEO | MRM-McCANN
-
To "Rel canon" or not to "Rel canon" that is the question
Looking for some input on an SEO situation that I'm struggling with. I guess you could say it's a usability vs. Google situation. The situation is as follows: on a specific shop (let's say it's selling t-shirts), the products are organized so that each t-shirt has a master and x number of variants (one per color). We have a product listing, and in this listing all the different colors (variants) are shown. When you click one of the t-shirts (e.g. blue) you get redirected to the product master, where some code on the page tells the master to switch the color selectors to blue. The page gets this information from a query string in the URL. Now I could let Google index each URL for each color and sort it out that way, except for the fact that the text doesn't change at all. The only thing that changes is the product image, and that is swapped with Ajax in such a way that Google, most likely, won't notice, ergo producing "duplicate content" problems. OK, so I could solve this problem with a rel canonical, but then we are in a situation where the only thing telling Google that we are talking about a blue t-shirt is the link to the master from the product listing. We end up in a situation where the master is the only page getting indexed. That's not a problem except when people come from Google directly to the product: I have no way of telling what color the customer is looking for and hence won't know which image to serve her. Now I could tell my client that they have to write a unique text for each variant, but with hundreds of thousands of variant combinations this is not realistic or a really good solution. I kinda need a new idea; any input, idea or brainwave would be very welcome. 🙂
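For readers following along, the rel canonical option being weighed boils down to markup like this on each variant URL (the product URL and color query parameter are made up for illustration); every color variant declares the master as its canonical page:

<!-- On the hypothetical variant URL https://shop.example.com/t-shirt-logo?color=blue -->
<link rel="canonical" href="https://shop.example.com/t-shirt-logo">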
Intermediate & Advanced SEO | ReneReinholdt