Issues with Googlebot crawl vs. Rogerbot
-
Greetings from a first time poster and SEO noob...
I hope that this question makes sense...
I have a small e-commerce site. I had Rogerbot crawl the site, and I have fixed all the errors and warnings that Volusion will allow me to fix.
Then I checked the HTML Improvements section in Google Webmaster Tools, and Googlebot sees duplicate title tag issues that Rogerbot did not.
A few weeks back I changed the title tag for a product, and now GWT says I have duplicate title tags, even though there is only one live page for the product. GWT lists the duplicate title tags, but when I click on each of them they all lead to the same live page. I'm confused: what pages are these other title tags referring to? Does Google have more than one page for that product indexed because I changed the title tag back when the page had a different URL?
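A crawl export can surface these collisions directly; a minimal sketch that groups URLs by their title text (the URLs and titles below are invented for illustration):

```python
from collections import defaultdict

def find_duplicate_titles(pages):
    """Group crawled pages by <title> text and return titles shared by 2+ URLs."""
    by_title = defaultdict(list)
    for url, title in pages:
        by_title[title.strip().lower()].append(url)
    return {t: urls for t, urls in by_title.items() if len(urls) > 1}

# Hypothetical crawl export: the same product reachable at its old and new URL
crawl = [
    ("/product_p/old-sku.htm", "Blue Widget | My Store"),
    ("/blue-widget_p_42.html", "Blue Widget | My Store"),
    ("/red-widget_p_43.html", "Red Widget | My Store"),
]
print(find_duplicate_titles(crawl))
```

If a title shows up under two URLs, the old URL is likely still indexed alongside the new one.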
1) Does this question make sense? 2) Is this issue a problem? 3) What can I do to fix it?
Any help would be greatly appreciated.
Jeff
-
Thank you very much for the quick response. I will take a look at that solution.
-
Hi Jeff
-
Yes, it makes sense.
-
It could be. It could affect your SEO rankings if Google catches it as duplicate content.
-
This is how you can fix it: create a new sitemap (personally I like the Vigos sitemap generator) and submit it to Google Webmaster Central. This will tell Google exactly how many pages you have, which ones should be indexed, etc. If you have already submitted a sitemap manually, try redoing it.
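For reference, a minimal sitemap file looks roughly like this (the domain, URL, and date are placeholders; one entry per page you want indexed, listing only the current URL for each product):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/blue-widget_p_42.html</loc>
    <lastmod>2013-05-01</lastmod>
  </url>
</urlset>
```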
I hope that helps
-
Related Questions
-
Issue with Google Structured Data Testing Tool asking for "logo" - ld+json
Hi, I am trying to get schema set up for a number of articles we are putting on our site (e.g. https://www.plasticpipeshop.co.uk/temporary-KB-page_ep_88-1.html). The markup I think I should use is: Google's Structured Data Testing Tool keeps insisting I have "publisher" and then "logo", but doesn't seem to want to accept anything for the "logo" entry no matter how I code it. Any assistance would be much appreciated, as after three hours on this I am pulling out what little hair I have left! Bob
Intermediate & Advanced SEO | BobBawden10
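In case it helps anyone hitting the same error: the testing tool generally wants `logo` to be an ImageObject rather than a bare URL string. A hedged sketch of the `publisher` portion (the names, URL, and dimensions are placeholders, not the poster's actual markup):

```json
{
  "@context": "http://schema.org",
  "@type": "Article",
  "headline": "Example article headline",
  "publisher": {
    "@type": "Organization",
    "name": "Plastic Pipe Shop",
    "logo": {
      "@type": "ImageObject",
      "url": "https://www.plasticpipeshop.co.uk/logo.png",
      "width": 600,
      "height": 60
    }
  }
}
```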
Google Seeing 301 as 404
Hi all, we recently migrated a few small sites into one larger site and generally had no problems. We read a lot of blogs beforehand, 301'd the old links, etc., and we've been keeping an eye on any 404s. What we have found is that Webmaster Tools is picking up quite a few 404s, yet when we investigate them the URLs are 301'd and work fine. This isn't the case for every URL, but Google keeps finding more, and I just want to catch any problems before they get out of hand. Is there any reason why Google would count a 301 as a 404? Thanks!
Intermediate & Advanced SEO | HB170
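One thing worth ruling out is a redirect chain that ends badly or loops, since crawlers give up after a handful of hops. A rough sketch of walking a redirect map offline (the mapping is invented; in practice you would pull it from your server config or a crawl):

```python
def follow_redirects(start, redirect_map, max_hops=10):
    """Walk a url -> target 301 mapping and return (final_url, hop_count).

    Raises ValueError if the chain loops or exceeds max_hops, which is
    roughly when a crawler would stop following it."""
    seen = set()
    url, hops = start, 0
    while url in redirect_map:
        if url in seen or hops >= max_hops:
            raise ValueError("redirect loop or chain too long at %s" % url)
        seen.add(url)
        url = redirect_map[url]
        hops += 1
    return url, hops

# Hypothetical 301 map from a site migration
redirects = {
    "/old-site/page-a": "/new/page-a",
    "/old-site/page-b": "/old-site/page-a",  # chains through another redirect
}
print(follow_redirects("/old-site/page-b", redirects))
```

If any old URL chains through several hops before landing, collapsing it to a single 301 straight to the final URL is usually the safer setup.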
Constructing the perfect META Title - Ranking vs CTR vs Search Volume
Hello Mozzers! I want to discuss the science behind the perfect META Title in terms of three factors: 1. Ranking 2. CTR 3. Search Volume. Hypothetical scenario: a furniture company, "Boogie Beds", wants to optimise the META Title tag for its "Cane Beds" ecommerce page. 1. The keyword "Cane Beds" has a search volume of 10,000. 2. The keyword "Cane Beds For Sale" has a search volume of 250. 3. The keyword "Buy Cane Beds" has a search volume of 25. One of Boogie Beds' SEOs suggests the META Title "Buy Cane Beds For Sale Online | Boogie Beds" to target and rank for all three keywords and capture long-tail searches. The other Boogie Beds SEO says no! The META Title should be "Cane Beds For Sale | Boogie Beds" to target the two most important competitive keywords and sacrifice the "Buy" keyword for the other two. Which SEO would you agree with more, considering 1. Ranking ability 2. Click-through rates 3. Long-tail search volume 4. Keyword dilution? Much appreciated! MozAddict
Intermediate & Advanced SEO | MozAddict1
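Whichever title wins, it is worth checking that it fits in the result snippet. A quick sketch comparing the two candidates (the ~65-character cutoff is an approximation; Google actually truncates by pixel width, so treat it as a rough guide):

```python
def check_title(title, max_chars=65):
    """Return (fits, length) for a proposed <title>, flagging likely truncation."""
    length = len(title)
    return length <= max_chars, length

for title in ("Buy Cane Beds For Sale Online | Boogie Beds",
              "Cane Beds For Sale | Boogie Beds"):
    fits, n = check_title(title)
    print("%-45s %2d chars %s" % (title, n, "ok" if fits else "TOO LONG"))
```

Both candidates fit comfortably here, so length alone does not decide between them.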
Google Crawl Rate and Cached version - not updated yet :(
Hi, I've noticed that Google is not recognizing/crawling the latest changes on pages of my site - the last update shown when viewing the cached version in Google's results is over 2 months old. So, do I Fetch as Googlebot to force an update? Or do I remove the page's cached version via GWT's Remove URLs? Thanks, B
Intermediate & Advanced SEO | bjs20100
If I had an issue with a friendly-URL module and lost all my rankings, will they return, now that the issue is resolved, the next time Google crawls my site?
I have 'magic seo urls' installed on my Zen Cart site, except that, for some reason no one can explain, the files were disabled. So my static links went back to dynamic (index.php?**********) etc. The issue with the module has been resolved, but in the meantime Google must have crawled my site, and I lost all my rankings. I'm nowhere to be found in the top 50. Did this really cause as serious an SEO issue as my web developers told me? Can I expect my rankings to return the next time my site is crawled by Google?
Intermediate & Advanced SEO | Pete790
Google Page Rank Dead?
Does PR still work? I have sites that are PR3 and get almost no traffic, and sites that are PR1 and get thousands of uniques per month. The PR on my main sites hasn't moved for about 7 years, even though we've grown significantly. I know lots of you are going to jump in with "get the Moz toolbar", which I have already done, and I agree, it's great... But can anyone tell me what's going on with Google PR? Is it still active? Or has Google abandoned it? I noticed that the Google toolbar is not even available for Google Chrome. That should say something... If you like this question, do me a favor and give it a THUMBS UP!
Intermediate & Advanced SEO | applesofgold2
Google, Links and Javascript
So today I was taking a look at the http://www.seomoz.org/top500 page and saw that the AddThis page is currently at position 19. I think the main reason is that their plugin creates, through JavaScript, linkbacks to the page where their share buttons reside. So any page with AddThis installed would easily have 4-5 linkbacks to their site, creating the huge number of linkbacks they have. OK, that pretty much shows that Google doesn't care whether a link is created in the HTML (on the backend) or through JavaScript (frontend). But here's the catch. Say someone creates a free plugin for WordPress/Drupal or any other big CMS platform with a feature that links back to the page of the plugin's creator (that's pretty common, I know), but instead of inserting the link in the plugin's source code they put it somewhere else, which is then loaded with JavaScript (exactly how AddThis works). This would allow the owner of the plugin to change the link shown at any time, for example after a URL change for their blog or business. However, it could just as easily be used to link to whatever the hell the plugin's owner wants. What are your thoughts on this? I think it could easily be classified as white hat or black hat depending on what the owner does. But would Google see it the same way?
Intermediate & Advanced SEO | bemcapaz0
Robots.txt: Link Juice vs. Crawl Budget vs. Content 'Depth'
I run a quality vertical search engine. About 6 months ago we had a problem with our sitemaps, which resulted in most of our pages getting tossed out of Google's index. As part of the response, we put a bunch of robots.txt restrictions in place on our search results to prevent Google from crawling through pagination links and other parameter-based variants of our results (sort order, etc.). The idea was to 'preserve crawl budget' in order to speed up the rate at which Google could get our millions of pages back in the index, by focusing attention/resources on the right pages. The pages are back in the index now (and have been for a while), and the restrictions have stayed in place since that time. But, in doing a little SEOmoz reading this morning (http://www.seomoz.org/blog/restricting-robot-access-for-improved-seo and http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions), I came to wonder whether that approach may now be harming us. Specifically, I'm concerned that a) we're blocking the flow of link juice, and b) by preventing Google from crawling the full depth of our search results (i.e. pages >1), we may be making our site wrongfully look 'thin'. With respect to b), we've been hit by Panda and have been implementing plenty of changes to improve engagement, eliminate inadvertently low-quality pages, etc., but we have yet to find 'the fix'... Thoughts? Kurus
Intermediate & Advanced SEO | kurus
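For context, restrictions of the kind described above typically look something like this in robots.txt (the parameter names are invented; wildcard patterns like these are honored by Googlebot but are an extension, not part of the original standard):

```
User-agent: *
# Keep crawlers out of paginated and re-sorted variants of search result pages
Disallow: /*?page=
Disallow: /*?sort=
```

Note that Disallow only blocks crawling, not indexing, and it also stops link equity from flowing through the blocked pages, which is exactly the trade-off the question is weighing.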