Over 1,000 pages de-indexed overnight
-
Hello,
On my site (www.bridgman.co.uk) we had a lot of duplicate page issues, as reported by the SEOmoz site report tool - these were due to database-driven URL strings. As a result, I sent an Excel file with all the duplicate pages to my web developer, who put rel=canonical tags on what I assumed would be all the correct pages.
I am not sure if this is a coincidence or a direct result of the canonical tags, but a few days later (yesterday) the number of pages indexed by Google dropped from 1,200 to under 200.
The number is still declining, and other than the canonical tags I can't work out why Google would just start de-indexing most of our pages.
If you could offer any solutions that would be greatly appreciated.
Thanks, Robert.
-
Canonical tags and duplicate content are both interesting issues - thanks for your answers!
-
The first question I would ask myself as a website owner is: how many pages have duplicate content on them? Try going the manual route and checking for yourself - there will be pages you do not know about. Use a tool like Xenu's Link Sleuth to extract all the links on your website (i.e. every page reachable from at least one link on your site).
It may also be that Google is simply adjusting its index across the board. It's worth keeping track of whether your competitors are losing indexed pages at the same rate, more slowly, or not at all.
-
I would first remove the canonicals (or, better, actually implement them correctly) and wait for the pages to be re-crawled. If they aren't picked up again, re-submit them.
Also, problems with canonicals do show up in the page grader app, so when your admin re-writes the canonicals, test each page before Google sees it :)
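For what it's worth, here is a rough sketch of what a correct implementation looks like - each duplicate points at the single full URL you want indexed, and that URL can safely reference itself (the URLs below are hypothetical, just to show the pattern):

<!-- In the <head> of http://www.example.com/product.asp?prodid=123&sort=price (a duplicate) -->
<link rel="canonical" href="http://www.example.com/product.asp?prodid=123" />

<!-- In the <head> of http://www.example.com/product.asp?prodid=123 (the official version) -->
<link rel="canonical" href="http://www.example.com/product.asp?prodid=123" />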
-
It appears so
I've asked my webmaster to remove all of the canonical tags but after reading that article I feel that a lot of the damage has already been done!
Looks like I'll have to submit a reconsideration request and beg Google for forgiveness.
-
Hmm, I'd remove them.
The fact that all your individual product pages have a rel=canonical pointing to productsdetail.asp, and the category-level pages point to products.asp, is telling Google that all of your product pages are the same page!
The problem is there are a lot of parameters in your URLs - let me go and have a look at how to deal with that when using the canonical tag.
EDIT - Spent half my lunch on this but couldn't find much. I would guess you can get away with parameters in the canonical tag, so decide which is the official URL and put that in the header for that particular piece of content. The only thing I really found was this - http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html
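In other words, something like this (my sketch - the parameter names are invented, and the point is that the href can include parameters as long as it is the one URL you have chosen as official):

<!-- Duplicates such as
     /productdetail.asp?prodid=123&sessionid=abc
     /productdetail.asp?prodid=123&sort=price
     would all carry the same tag in their <head>: -->
<link rel="canonical" href="http://www.example.com/productdetail.asp?prodid=123" />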
-
Could you be in a canonical loop?
-
I don't think this is a result of an algorithm change. I've just had a look at the crawl diagnostics and it appears that he has set the canonical tag value of the homepage (www.bridgman.co.uk/ - where all the backlinks point) to www.bridgman.co.uk/default.asp!
This has also been done to all of our product pages - i.e. all of the duplicated product pages now have the tag value www.bridgman.co.uk/productdetail.asp (a page which doesn't even exist on our site!)
Now I may be stating the obvious, but I guess this is the problem?
He did this 8 days ago - where should I go from here? Ask him to remove all the rel=canonical tags and pray we bounce back...?
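To spell out what seems to have happened (the parameterised URL below is hypothetical, just to show the pattern): every product page currently carries a tag like the first one, when each page should instead reference its own full URL, and the homepage should reference the root URL that actually holds the links.

<!-- What the product pages apparently have now - they all point at one non-existent page: -->
<link rel="canonical" href="http://www.bridgman.co.uk/productdetail.asp" />

<!-- What each individual product page should carry - its own preferred URL (hypothetical parameter): -->
<link rel="canonical" href="http://www.bridgman.co.uk/productdetail.asp?prodid=123" />

<!-- Homepage - pointing at the root rather than /default.asp: -->
<link rel="canonical" href="http://www.bridgman.co.uk/" />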
-
The first question would be: do you think this may have had an effect on you? - http://searchengineland.com/google-forecloses-on-content-farms-with-farmer-algorithm-update-66071
Is all your content unique? Do your links come from sites that may have been affected?
If not, then I'd give it a few days to see if the pages come back (as HR128 says, it may just be Google working out what's changed and what to do about it). In the meantime I'm doing a quick crawl of your site to see what I can see.
-
It could take a couple of days before you see results - most likely Google has to step back and reassess. Or your web developer has no clue what he's doing, haha.
Related Questions
-
Paginated pages are being indexed?
I have lots of paginated pages which are being indexed. Should I add the noindex tag to page 2 onwards? The pages currently have previous and next tags in place. Page one also has a self-referencing canonical.
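For context, the setup described - prev/next tags plus a self-referencing canonical, with an optional noindex from page 2 onwards - would look roughly like this in the <head> of page 2 (hypothetical URLs):

<link rel="prev" href="http://www.example.com/blog/page/1" />
<link rel="next" href="http://www.example.com/blog/page/3" />
<link rel="canonical" href="http://www.example.com/blog/page/2" />
<!-- Only if you decide to keep page 2 onwards out of the index: -->
<meta name="robots" content="noindex, follow" />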
Technical SEO | | WTH0 -
How can a keyword placed on a page with the Moz page optimization score of 100 be ranked #51+?
Hi, Please help me figure out why this is happening and what is going wrong. This is an example of a poorly ranked keyword - 'viking cooktop repair', with a page optimization score of 100 (http://www.yourappliancerepairla.com/blog/viking-cooktop-repair/), yet its ranking is #51+. I've got many like these: the page optimization score for 'kitchenaid oven repair' is 100 (http://www.yourappliancerepairla.com/blog/kitchenaid-oven-repair/), yet its ranking is #51+. And so on. According to Google Search Console, I have 266 links to my site from a variety of root domains. While building backlinks, I paid attention to relevancy and DA. What else do I have to do to get those keywords ranked higher? And why don't they rank well if the pages are 100% optimized, not keyword-stuffed, and I have quality backlinks? What am I missing out on? Please help!
Technical SEO | | kirupa1 -
What should I do with all these 404 pages?
I have a website that I'm currently working on that has been fairly dormant for a while and has just been given a facelift and brought back to life. I have some questions below about dealing with 404 pages. In Google WMT/Search Console there are reports of thousands of 404 pages going back some years. It says there are over 5k in total, but I am only able to download 1k or so from WMT it seems. I ran a crawl test with Moz and the report it sent back only had a few hundred 404s in it - why is that? I'm also not sure what to do with all the 404 pages. I know that both Google and Moz recommend a mixture of leaving some as 404s and redirecting others, and I'd like to know what the community here suggests. The 404s are a mix of the following:
Blog posts and articles that have disappeared (some of these have good backlinks too)
URLs that look like they used to belong to users (the site used to have a forum), which were deleted when the forum was removed; some of them look like they were removed for spam reasons too, e.g. /user/buy-cheap-meds-online and others like that
Other URLs like this: /node/4455 (or some other random number)
I'm thinking I should permanently redirect the blog posts to the homepage or the blog, but I'm not sure what to do about all the others? Surely having so many 404s like this is hurting my crawl rate?
Technical SEO | | linklander0 -
Blog Page Titles - Page 1, Page 2 etc.
Hi All, I have a couple of crawl errors coming up in Moz that I am trying to fix. They are duplicate page title issues in my blog area. For example, we have a URL of www.ourwebsite.com/blog/page/1, and as we have quite a few blog posts they get put onto another page, e.g. www.ourwebsite.com/blog/page/2. Both of these URLs have the same heading, title, meta description, etc. I was just wondering if this is an actual SEO problem or not, and if there is a way to fix it. I am using WordPress, for reference, but I can't see anywhere to access the settings of these pages. Thanks
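If it helps, one common way of handling this is simply to differentiate the paginated titles, e.g. by appending the page number - a rough sketch (hypothetical titles):

<!-- /blog/page/1 -->
<title>Blog | Our Website</title>
<!-- /blog/page/2 -->
<title>Blog - Page 2 | Our Website</title>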
Technical SEO | | O2C0 -
No Index PDFs
Our products have about four PDFs apiece, which really inflates our indexed pages. I was wondering if I could add rel=noindex to the PDFs' URLs? All of the files are on a file server, so they are embedded as links on our product pages. I know I could add a nofollow attribute, but I was wondering if anyone knew whether noindex would work the same way, or if that is even possible. Thanks!
Technical SEO | | MonicaOConnor0 -
Are image pages considered 'thin' content pages?
I am currently doing a site audit. The total number of pages on the website is around 400... 187 of them are image pages and are coming up with a 'zero' word count in the Screaming Frog report. I need to know whether they will be considered 'thin' content by search engines? Should I include them as an issue? An answer would be most appreciated.
Technical SEO | | MTalhaImtiaz0 -
Can Google show the hReview-Aggregate microformat in the SERPs on a product page if the reviews themselves are on a separate page?
Hi, We recently changed our eCommerce site structure a bit and separated our product reviews onto a different page. There were a couple of reasons we did this:
We used pagination on the product page, which meant we got duplicate content warnings.
We didn't want to show all the reviews on the product page because this was bad for UX (and diluted our keywords).
We thought having a single page was better than paginated content, or at least safer for indexing.
We found that Googlebot quite often got stuck in loops and we didn't want to bury the reviews way down in the site structure.
We wanted to reduce our bounce rate a little, so having a different reviews page could help with this.
In the process of doing this we tidied up our microformats a bit too. The product page used to have three main microformats: hProduct, hReview-Aggregate and hReview. The product page now only has hProduct and hReview-Aggregate (which is now nested inside the hProduct). This means the reviews page has hReview-Aggregate and hReviews for each review itself. We've taken care to make sure that we're specifying that it's a product review and the URL of that product. However, we've noticed over the past few weeks that Google has stopped feeding the reviews into the SERPs for product pages, and is instead only feeding them in for the reviews pages. Is there any way to separate the reviews out and get Google to use the microformats for both pages? Would using microdata be a better way to implement this? Thanks,
James
Technical SEO | | OptiBacUK0 -
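For reference, a rough sketch of the product-page markup described - hProduct with the aggregate rating nested inside it (class names follow the classic microformats; the product details are invented for illustration):

<div class="hproduct">
  <span class="fn">Example Product</span>
  <span class="price">£18.99</span>
  <div class="review hreview-aggregate">
    <span class="item"><span class="fn">Example Product</span></span>
    Rated <span class="rating"><span class="average">4.7</span> out of <span class="best">5</span></span>
    based on <span class="count">32</span> reviews
  </div>
</div>

The separate reviews page would then carry its own hReview-Aggregate block plus an hReview entry per individual review.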
Differing numbers of pages indexed with and without the trailing slash
I noticed today that a site: query in Google (UK) for a certain domain I'm looking at returns different numbers depending on whether or not the trailing slash is added at the end. With the trailing slash the numbers are significantly different. This is a domain with a few duplicate content issues. It seems very rare, but I've managed to replicate it for a couple of other well-known domains, so this is the phenomenon I'm referring to:
site:travelsupermarket.com - 16'300 results
site:travelsupermarket.com/ - 45'500 results
site:guardian.co.uk - 120'000'000 results
site:guardian.co.uk/ - 121'000'000 results
For the particular domain I'm looking at, the numbers are 19'000 without the trailing slash and 800'000 with it! As mentioned, there are a few duplicate content issues at the moment that I'm trying to tidy up, but how should I interpret this? Has anyone seen this before and can advise what it could indicate? Thanks in advance for any answers.
Technical SEO | | ianmcintosh0