Indexing product attributes in sitemap
-
Hey Mozzers!
I'm battling a few questions about the sitemap for my ecommerce store. Could you help me out?
- Is it necessary to include your product attributes in the sitemap? I'm not sure why it would matter to have a sitemap that lists everything in the color cherry. Also, if the attributes were included in the sitemap, would having the same products show up under multiple attributes count as duplicate content?
- Is there any benefit to submitting the sitemaps individually? For example, submitting /product-sitemap.xml, /product_brand-sitemap.xml versus just /sitemap.xml?
Any other best practices for managing my ecommerce sitemap, or great resources, would be very helpful.
Thank you!
-
Hello Localwork,
By "product attributes" do you mean URLs associated with product variants, like color and size? From the context of your question, I'll assume for now you mean that each product attribute / variant appears on it's own URL (e.g. /?color=red and /?color=blue) and you want to know whether these should be included in the sitemap.
As Andy mentions below, more information is needed before prescribing a best practice for your specific situation. In this case, though, you should probably include only the one "canonical" version of the product URL (e.g. the version without variants). There are many ways to handle this, and I recommend Googling "SEO for product variants" to familiarize yourself with the pros and cons of each.
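As a minimal sketch, assuming the variant URLs stay live (the example.com domain and /widget path are hypothetical): each variant page points a canonical at the base product URL, and only that base URL goes in the sitemap.

```html
<!-- Served in the <head> of the hypothetical variant URLs
     https://www.example.com/widget?color=red and ...?color=blue -->
<link rel="canonical" href="https://www.example.com/widget" />
```

Keep in mind Google treats the canonical as a strong hint rather than a directive, so internal links should point at the base URL as well.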
To answer your question about sitemap segmentation: yes, it is a good thing to do, for several reasons. The most important is easier diagnosis of crawl issues, such as identifying which "sections" of your site have indexation problems. Segmentation also helps large sites stay under the 50,000-URL-per-file limit for sitemaps, and it gives people and machines a more logical, tree-like structure to follow than having every URL in one file.
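As a rough sketch of how segmented sitemaps are usually tied together, reusing the file names from your question (the example.com domain is a placeholder): a sitemap index at /sitemap.xml references each segment, so you submit the index once and Search Console still reports on each child sitemap separately.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical /sitemap.xml acting as a sitemap index -->
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.example.com/product-sitemap.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/product_brand-sitemap.xml</loc>
  </sitemap>
</sitemapindex>
```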
-
Hi,
Without knowing a little more detail, it's hard to say with 100% certainty, but I can't see why the sitemap should have every iteration of a product in there. These pages (pages that are produced due to an attribute change) should rel=canonical back to the main product page anyway and this will handle duplication.
And unless you have many, many thousands of products in each sitemap, you wouldn't want to be splitting them up like this, although you can rationalize the split somewhat depending on the products and the site.
Just remember that the sitemap is only there as an aid to help Google crawl; there is no direct SEO benefit beyond that. Do whatever makes the most sense for the site and for Google.
-Andy
Edit: Just Tweeted this out as well to see if others wish to chime in

Related Questions
-
Google is indexing bad URLs
Hi All,

The site I am working on is built on WordPress. The plugin Revolution Slider was downloaded; while no longer utilized, it remained on the site for some time. This plugin began creating hundreds of URLs containing nothing but code on the page, and I noticed these URLs were being indexed by Google. The URLs follow the structure: www.mysite.com/wp-content/uploads/revslider/templates/this-part-changes/

I have done the following to prevent these URLs from being created and indexed:

1. Added a directive in my .htaccess to 404 all of these URLs
2. Blocked /wp-content/uploads/revslider/ in my robots.txt
3. Manually de-indexed each URL using the GSC removal tool
4. Deleted the plugin

However, new URLs still appear in Google's index, despite being blocked by robots.txt and resolving to a 404. Can anyone suggest any next steps? Thanks!
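One detail worth noting about the steps above: because the directory is blocked in robots.txt, Googlebot can no longer crawl those URLs to see the 404, which is a common reason such URLs linger in the index. A frequently suggested alternative, sketched below with the rule pattern assumed from the URL structure in the question, is to serve 410 Gone from .htaccess and temporarily lift the robots.txt block so Google can recrawl the URLs and drop them:

```apache
# .htaccess sketch: answer 410 Gone for anything under the old
# Revolution Slider template paths (regex assumed from the question)
RedirectMatch 410 ^/wp-content/uploads/revslider/templates/.*$
```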
-
Google not indexing images on CDN.
My URL is: https://bit.ly/2hWAApQ

We have set up a CDN on our own domain: https://bit.ly/2KspW3C. We have a main XML sitemap, https://bit.ly/2rd2jEb, and https://bit.ly/2JMu7GB is one of the sub-sitemaps with images listed within. The image sitemap uses the CDN URLs. We verified the CDN subdomain in GWT, and the robots.txt does not restrict any of the photos: https://bit.ly/2FAWJjk. Yet GWT still reports that none of our images on the CDN are indexed. I've followed all the steps and still none of the images are being indexed. My problem seems similar to this ticket, https://bit.ly/2FzUnBl, but different because we don't have a separate image sitemap and instead have listed image URLs within the sitemaps themselves. Can anyone help, please? I will promptly respond to any queries. Thanks
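For reference, a minimal sketch of an image sitemap entry of the kind described, using Google's sitemap-image extension; the page <loc> stays on the main domain while the image URL points at the CDN (both URLs are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <!-- page URL on the main domain -->
    <loc>https://www.example.com/some-product/</loc>
    <image:image>
      <!-- image hosted on the CDN subdomain -->
      <image:loc>https://cdn.example.com/images/some-product.jpg</image:loc>
    </image:image>
  </url>
</urlset>
```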
-
How do I handle duplicate content of the same product in multiple product categories?
I am building a BigCommerce store for selling framed art. Many of the pieces of art will fall into more than one product category. Let's say I have a framed print of a photograph of a western landscape; this piece would fit into the categories "western", "landscape", and "photography", so I would have three pages with duplicate content for just this one framed print.

Will Google give me less page rank due to this? Can all the link juice be given to just one of the three categories by use of rel=canonical? If so, does anyone know how to do this for a BigCommerce site? I would appreciate any feedback. Thanks, Kelly
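Leaving BigCommerce specifics aside, here is a minimal sketch of what the rel=canonical approach looks like in the page source, with hypothetical category paths; each duplicate category listing of the print points at one preferred URL, which consolidates the link equity there.

```html
<!-- In the <head> of both duplicate pages, e.g.
     /landscape/western-sunset-print/ and /photography/western-sunset-print/ -->
<link rel="canonical"
      href="https://www.example.com/western/western-sunset-print/" />
```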
-
Staging & development areas should not be indexable (i.e. nofollowed/noindexed in meta robots etc.)
Hi,

I take it that if there's a staging or development area on a subdomain for a site, whose content is hence usually duplicate, then it should not be indexable (i.e. noindexed and nofollowed in meta robots)? That would prevent duplicate content problems, as well as stop people unrelated to the project from seeing work in progress or finding it accidentally in search engine listings.

Also, if there's no such info in meta robots, is there any other way it may have been made non-indexable, or at least had the duplicate content problem removed by canonicalising the page to the equivalent page on the live site?

In the case in question I am finding it listed in the SERPs when I search for the staging/dev area URL, so I presume this needs urgent attention?

Cheers, Dan
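The meta tag route mentioned above is a <meta name="robots" content="noindex, nofollow"> element on every staging page. A sketch of an equivalent server-wide header, assuming an Apache-served staging subdomain, which also covers non-HTML files such as PDFs and images:

```apache
# Hypothetical vhost config for staging.example.com: the X-Robots-Tag
# header applies noindex/nofollow to every response without
# editing page templates
<IfModule mod_headers.c>
  Header set X-Robots-Tag "noindex, nofollow"
</IfModule>
```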
-
Can you have a /sitemap.xml and /sitemap.html on the same site?
Thanks in advance for any responses; we really appreciate the expertise of the SEOmoz community!

My question: since the file extensions are different, can a site have both a /sitemap.xml and a /sitemap.html sitting at the root domain? For example, we've already put the HTML sitemap in place here: https://www.pioneermilitaryloans.com/sitemap. Now we're considering adding an XML sitemap. I know standard practice is to load it at the root (www.example.com/sitemap.xml), but I'm wondering if this will cause conflicts. I've been unable to find this topic addressed anywhere, or any real-life examples of sites currently doing this. What do you think?
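Since the two files have different names and extensions, they are separate resources and can coexist at the root without conflict. A sketch of how the XML one is usually advertised, via the robots.txt Sitemap directive (the empty Disallow shown is just the permissive default):

```text
# robots.txt at the root of the domain
User-agent: *
Disallow:

# Points crawlers at the XML sitemap; the HTML sitemap at /sitemap
# is just an ordinary page and needs no declaration
Sitemap: https://www.pioneermilitaryloans.com/sitemap.xml
```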
-
De-indexing millions of pages - would this work?
Hi all,

We run an e-commerce site with a catalogue of around 5 million products. Unfortunately, we have let Googlebot crawl and index tens of millions of search URLs, the majority of which are very thin on content or duplicates of other URLs. In short: we are in deep. Our bloated Google index is hampering our real content's ability to rank; Googlebot does not bother crawling our real content (product pages specifically) and hammers the life out of our servers.

Since having Googlebot crawl and de-index tens of millions of old URLs would probably take years, my plan is this:

1. 301 redirect all old SERP URLs to a new SERP URL.
2. If the new URL should not be indexed, add a meta robots noindex tag on the new URL.
3. When it is evident that Google has indexed most "high quality" new URLs, disallow crawling of the old SERP URLs in robots.txt.
4. Then remove all old SERP URLs directory-style in the GWT URL Removal Tool.

This would be an example of an old URL:
www.site.com/cgi-bin/weirdapplicationname.cgi?word=bmw&what=1.2&how=2

This would be an example of a new URL:
www.site.com/search?q=bmw&category=cars&color=blue

I have two specific questions:

1. Would Google both de-index the old URL and not index the new URL after 301 redirecting the old URL to the new URL (which is noindexed), as described in point 2 above?
2. What risks are associated with removing tens of millions of URLs directory-style in the GWT URL Removal Tool? I have done this before, but then I removed "only" some 50,000 useless "add to cart" URLs. Google itself says that you should not remove duplicate/thin content this way, and that using the tool this way "may cause problems for your site".

And yes, these tens of millions of SERP URLs are the result of a faceted navigation/search function let loose all too long. And no, we cannot wait for Googlebot to crawl all these millions of URLs in order to discover the 301; by then we would be out of business.

Best regards,
TalkInThePark
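For what it's worth, here is a sketch of what step 1 of the plan might look like under Apache mod_rewrite; the parameter mapping is assumed from the two example URLs, and the new /search page would then carry the meta robots noindex tag from step 2.

```apache
RewriteEngine On
# Map the old "word" parameter onto the new "q" parameter; because the
# substitution contains its own query string, it replaces the original one
RewriteCond %{QUERY_STRING} (?:^|&)word=([^&]+)
RewriteRule ^/?cgi-bin/weirdapplicationname\.cgi$ /search?q=%1 [R=301,L]
```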
-
Instant Indexing
I've been working on a site for a while now, methodically building content and building trust and authority. Lately I've noticed that anything I publish there appears to be instantly indexed by Google, which surprises me. I haven't had this happen before so I'm curious. I'd be interested to hear the experience of others.
-
Products with discrete URLs for each color
Here is the issue: I have an ecommerce site whose category pages show each individual color of each product sold, and there is a distinct URL for each color. Each product page shares the same content, with the only potentially differentiating factor being customer reviews (not nearly enough of these to differentiate anything). So we have URLs like:

www.domain.com/product-green
www.domain.com/product-yellow
www.domain.com/product-red

and so on. I am looking for a way to consolidate these URLs while still showing all colors on the category page.

The first solution I am considering is using the hash tag, so we would create www.domain.com/product#green, www.domain.com/product#yellow, and www.domain.com/product#red. If possible, I would set the canonical tag as www.domain.com/product.

The second solution would be to use the canonical tag and keep the URLs as is. The issue I see here is that we would need to create www.domain.com/product and show that page somewhere, since www.domain.com/product would be the URL that the above color URLs would canonicalize to.

What would be the preferred solution? Or is there something else?
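A minimal sketch of the first option, relying on the fact that Google ignores everything after the # when indexing, so all three links resolve to the single canonical URL:

```html
<!-- Category page: color swatches link to fragments of one product URL -->
<a href="https://www.domain.com/product#green">Green</a>
<a href="https://www.domain.com/product#yellow">Yellow</a>
<a href="https://www.domain.com/product#red">Red</a>

<!-- In the <head> of www.domain.com/product: a self-referencing canonical -->
<link rel="canonical" href="https://www.domain.com/product" />
```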