Sitemap URLs not being indexed
-
There is an issue on one of our sites where many of the sitemap URLs are not being indexed (at least 70% are not indexed).
The URLs in the sitemap are normal URLs without any strange characters attached to them, but after looking into it, it seems a lot of the URLs get a "#." plus a number sequence appended once you actually visit them. We are not sure if the AddThis bookmark widget could be causing this, or if it's another script doing it.
For example:
URL in the sitemap: http://example.com/example-category/0246
URL once you actually go to that link: http://example.com/example-category/0246#.VR5a
Just for further information, the XML file does not have any style information associated with it and is in its most basic form.
Has anyone had similar issues with their sitemap not being indexed properly? Could this be the cause of so many of these URLs not being indexed?
Thanks all for your help.
-
Anders,
Thanks for the reply. I definitely agree a self-referencing canonical would be a good extra addition on these product pages, so I'm adding that to our to-do list if things do not improve.
In terms of indexing pages - we have not restricted crawl frequency; it is set to "allow Google to determine the optimal crawl rate". No other warnings were found within Search Console either.
Thanks for your help.
-
I agree - I would probably ignore everything after the "#".
But have you tried adding a <link rel="canonical" href="http://example.com/page-url" /> to your pages to see if this will update it? Also: add the sitemap to your robots.txt if not already done.
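To make that concrete, here is roughly what I mean - example.com and the paths are placeholders, so swap in your own product URL and sitemap location:

    <!-- In the <head> of each product page: a self-referencing canonical
         pointing at the clean URL, without the "#." fragment -->
    <link rel="canonical" href="http://example.com/example-category/0246" />

The robots.txt part is just one extra line, usually at the end of the file:

    Sitemap: http://example.com/sitemap.xml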
Regarding indexing pages - have you restricted crawl frequency in Google Search Console, or is it set to be determined by GoogleBot? Any other warnings or messages in Search Console?
Best regards,
Anders
-
Lesley,
Thanks for the confirmation on that one and the article. Since it doesn't seem like a lot of people on the site are using that address share function, I do not think it would do any harm to remove it.
At least we know the root cause of why it's doing this to the URLs. Now the real question is: could it be getting in the way of indexing those URLs? One would think not, as from what I've read, Google simply ignores what comes after the #.
Thoughts ?
Appreciate the help.
-
Patrick,
We'd prefer to keep the actual URLs private, however I can provide further information to hopefully allow the community to dissect this further:
- It's an e-commerce website, meaning many facets, filters, and possible duplicate content angles
- It seems many of the static pages (the main /products page, /contact, etc.) are indexed, however the individual products are mostly not being indexed through the sitemap
- While the number of URLs shown in Webmaster Tools under "index" has also been steadily going down, it definitely doesn't correspond with the gap between pages submitted vs. indexed within the sitemap
- We have checked robots.txt, and it is not blocking any important pages (I also had them allow robots to crawl CSS and JS so Google could have full access - a rough sketch of those directives is just below this list)
- The individual product pages all have the AddThis feature, meaning they all get a "#." plus a number sequence added to their URLs. However, one would think this wouldn't be the cause of the lack of indexation?
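For reference, the robots.txt change was nothing exotic - roughly the directives below were added (the real file and paths differ; these are placeholders just to show the idea):

    User-agent: Googlebot
    # Let Googlebot fetch stylesheets and scripts so pages can be rendered fully
    Allow: /*.css$
    Allow: /*.js$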
Thanks for your help.
-
Yes, AddThis is doing this to your URLs. I hate it; that is one of the reasons why I do not use them.
Here is an article on how to remove them: http://support.addthis.com/customer/portal/articles/1013558-removing-all-hashtags-anchors-weird-codes-from-your-urls
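If I remember right, the fix in that article boils down to setting a couple of config flags before the AddThis widget script loads - something along these lines (I'm going from memory, so double-check the exact flag names against the article):

    <script type="text/javascript">
      // Ask AddThis not to append its "#.xxxx" tracking hash to the address bar
      var addthis_config = {
        data_track_addressbar: false,
        data_track_clickback: false
      };
    </script>
    <!-- ...followed by your existing AddThis widget script include -->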
-
Hi there
Could you provide your website's URL? It would help the community take a deeper look - thanks!
Good luck!