Best posts made by MikeRoberts
-
RE: How to Stop Google from Indexing Old Pages
Have you submitted a new sitemap to Webmaster Tools? You could also consider 301 redirecting the old pages to relevant new pages to capitalize on any link equity or ranking power they may have had. Otherwise, Google should eventually stop crawling them because they return 404s. I've had a touch of success getting them to stop crawling quicker (or at least it seems quicker) by changing some of the 404s to 410s.
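To make the 301-vs-410 distinction concrete, here is a minimal sketch (Python/Flask, chosen purely for illustration; the paths and mapping are made up, not anything from the thread): old URLs that have a relevant replacement get a 301 to the new page, and retired URLs with no equivalent return 410 Gone instead of a generic 404.

```python
# Hedged sketch only: hypothetical legacy-URL handling, not the poster's actual stack.
from flask import Flask, redirect, abort

app = Flask(__name__)

# Hypothetical mapping of old paths to their closest new equivalents
REDIRECTS = {"/old-widgets": "/widgets", "/old-about": "/about-us"}

@app.route("/<path:old_path>")
def legacy(old_path):
    # In a real app the live pages would have their own, more specific routes,
    # which Flask matches ahead of this catch-all.
    target = REDIRECTS.get("/" + old_path)
    if target:
        return redirect(target, code=301)  # pass any link equity to the new page
    abort(410)  # explicitly "Gone", rather than a 404, for pages with no replacement
```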
-
RE: How to Stop Google from Indexing Old Pages
After reading the further responses here I'm wondering something...
You switched to a new site, can't 301 the old pages, and have no control over the old domain... So why are you worried about pages 404ing on an unused site you don't control anymore?
Maybe I'm missing something here or not reading it right. Who does control the old domain then? Is the old domain just completely gone? Because if so, why would it matter that Google is crawling non-existent pages on a dead site and returning 404s and 500s? Why would that necessarily affect the new site?
Or is it the same site, but you switched from PHP to Java? If so, wouldn't your CMS have a way of redirecting the old pages (which are technically still part of your site) to the newer, relevant pages?
I feel like I'm missing pertinent info that might make this easier to digest and offer up help.
-
RE: Duplicate Content Issues on Product Pages
Similar to what BJS1976 and Takeshi stated, the way we handled the bulk of the duplicate content issues in a similar situation on our ecommerce site was to handle the different varieties of the same product through URL parameters and then canonicalize the parameterized URLs to the version of the URL without the parameter.
For example, due to database reasons, /product1.php?color=42 and /product1.php?color=30 are the same product, just in red and blue. The pages are otherwise identical and have radio buttons/dropdowns to choose any available color. /product1.php would default to one specific variation we chose (usually the best-selling color), and /product1.php?color=42 and /product1.php?color=30 each had a rel=canonical tag added pointing at /product1.php.
For any remaining products flagged as duplicates that couldn't be fixed that way, we set those aside for myself and another copywriter to work on creating additional content that would set them apart enough to no longer be duplicates.
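As a rough illustration of that pattern (a hedged sketch only: the original site was PHP, while this uses Python/Flask with made-up color IDs mirroring the example URLs above): every color variation renders the same product page, but the rel=canonical link always points at the parameter-free URL.

```python
# Hypothetical sketch of canonicalizing parameterized product URLs.
from flask import Flask, request, render_template_string, url_for

app = Flask(__name__)

PRODUCT_PAGE = """
<html>
  <head>
    <title>Product 1</title>
    <!-- Canonical always points at the URL without the ?color= parameter -->
    <link rel="canonical" href="{{ canonical_url }}">
  </head>
  <body>
    <h1>Product 1 ({{ color_name }})</h1>
  </body>
</html>
"""

# Illustrative color IDs only; 42/30 come from the example URLs above.
COLORS = {"42": "Red", "30": "Blue"}

@app.route("/product1")
def product1():
    color_id = request.args.get("color", "42")  # default to the chosen best seller
    return render_template_string(
        PRODUCT_PAGE,
        canonical_url=url_for("product1", _external=True),  # no ?color= parameter
        color_name=COLORS.get(color_id, "Red"),
    )
```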
-
RE: Duplicate Content Issues on Product Pages
I agree with Everett from a standpoint of User Experience. It could potentially be better for users if they appeared on a product page where they could then choose color, size, etc. variables for their product instead of having to click through multiple pages to find the right one or scroll through a huge list of variations.
The reduction in pages should also help consolidate link equity and keep pages from cannibalizing each other in the SERPs.
As for Takeshi's suggestion on canonicals, I'm a fan of the rel=canonical tag, but the potential problem with using it in this instance is twofold: 1) as Takeshi mentioned, "as far as Google is concerned you only have 1 page with the content on it", and 2) canonicals are suggestions, not directives, so the search engines may choose not to honor them if they aren't used properly.
-
RE: Include or exclude noindex urls in sitemap?
You could technically add them to the sitemap.xml in the hope that this gets them noticed faster, but the sitemap is generally meant for the things you want Google to crawl and index. Plus, placing them in the sitemap doesn't guarantee Google will get around to crawling your change or those specific pages. Technically speaking, doing nothing and just waiting is equally valid; Google will recrawl your site at some point, and a sitemap.xml only helps if Google is crawling your site and sees it. Fetch As makes Google see your page as it is right now, which is essentially forcing part of a crawl, so Fetch As will be the quicker, more reliable choice, though it is also more labor-intensive. If you don't have the man-hours for a project like that at the moment, then waiting or using the sitemap could work for you. Google even suggests using Fetch As for URLs you want them to see that you have blocked with meta tags: https://support.google.com/webmasters/answer/93710?hl=en&ref_topic=4598466
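If you do go the sitemap route, a minimal sketch of building that file is below (Python, with placeholder URLs, not anything from the original site): list the recently noindexed pages with a <lastmod> date so crawlers can see that they changed.

```python
# Hedged sketch: write a minimal sitemap.xml listing recently changed (noindexed) URLs.
from datetime import date
from xml.etree import ElementTree as ET

NOINDEXED_URLS = [  # placeholder examples only
    "https://www.example.com/old-page-1",
    "https://www.example.com/old-page-2",
]

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for page in NOINDEXED_URLS:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = page
    # lastmod signals that the page changed (e.g. when the noindex tag was added)
    ET.SubElement(url, "lastmod").text = date.today().isoformat()

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```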
-
RE: Include or exclude noindex urls in sitemap?
That opens up other potential obstacles to getting this done quickly and easily. I wouldn't consider it best practice to create what is essentially a spam page full of internal links, and Googlebot will likely not crawl all 4,000 links if you put them all on one page. So now you'd be talking about making 20 or so thin, spammy-looking pages of 200+ internal links each to hopefully fix the issue.
The quick, easy-sounding options are often not the best option. Considering you're doing all of this to fix issues that arose from an algorithmic penalty, I'd suggest following best practices when making these changes. It might not be easy, but it will lessen the chance that a quick fix becomes the cause of, or a contributor to, a future penalty.
So if Fetch As won't work for you (considering lack of manpower to manually fetch 4000 pages), the sitemap.xml option might be the better choice for you.
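For completeness, here is a hedged sketch of the sitemap approach at that scale (Python, with placeholder URLs and filenames): the changed URLs are split across a few sitemap files tied together by a sitemap index, which keeps each file small and easy to regenerate. 4,000 URLs would also fit comfortably in a single sitemap, so treat the split as optional.

```python
# Hedged sketch: chunk a large URL list into sitemap files plus a sitemap index.
from datetime import date
from xml.etree import ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
urls = [f"https://www.example.com/page-{i}" for i in range(4000)]  # placeholders
chunk_size = 1000  # arbitrary split for illustration

index = ET.Element("sitemapindex", xmlns=SITEMAP_NS)
for n, start in enumerate(range(0, len(urls), chunk_size), start=1):
    urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
    for page in urls[start:start + chunk_size]:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = page
    filename = f"sitemap-{n}.xml"
    ET.ElementTree(urlset).write(filename, encoding="utf-8", xml_declaration=True)

    entry = ET.SubElement(index, "sitemap")
    ET.SubElement(entry, "loc").text = f"https://www.example.com/{filename}"
    ET.SubElement(entry, "lastmod").text = date.today().isoformat()

ET.ElementTree(index).write("sitemap-index.xml", encoding="utf-8", xml_declaration=True)
```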
-
RE: Homepage not indexed - seems to defy explanation
I took a look at all of the usual suspects as well, which amounts to pretty much everything everyone else mentioned, but I was intrigued by this issue and thought another set of eyes might notice something that was off. Nothing was wrong in the page source from what I saw, I had no issues crawling it myself, and I didn't see any penalties. Normally I'd think that if your homepage isn't appearing for branded organic searches then a penalty was levied against you, but when that's the case the homepage is still usually findable in a site: operator search. Maybe it is related to all the backlinks that were lost/deleted in the past month, but I'm not sure why that would be the case unless removing the homepage from the index was a Penguin response to link issues... though I was under the impression that Penguin devalues the link source, not the link recipient, and deleting/removing links seems to be a preferred method of handling Penguin-related issues. So if there is a relationship between Penguin and your homepage being deindexed, I'm not sure why, nor am I certain how to fix it, as I'm not seeing anything in particular that screams "linking issue" at me (though I only did a fairly cursory inspection).
So I am stumped. Whenever the issue is figured out, I would love to know how and why this came to be.
-
RE: Homepage not indexed - seems to defy explanation
Glad you figured it out. I honestly didn't think it would have been the canonicals. I'm a little surprised the bots chose to drop your site from the index rather than simply not respecting the incorrect suggestion; I didn't think that was even a possibility from bad canonicals. Good to know for the future, though, in case anything like this comes up with anyone else's site.
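Since a site-wide canonical mistake like this is easy to miss, here is a small hedged sketch (Python, with placeholder URLs) of the kind of spot-check that would surface it: fetch a few pages and print where each one's rel=canonical points, so every page canonicalizing to the same wrong URL stands out.

```python
# Hedged sketch: print the rel=canonical target of a handful of pages.
from html.parser import HTMLParser
from urllib.request import urlopen

class CanonicalFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

PAGES = ["https://www.example.com/", "https://www.example.com/about"]  # placeholders

for page in PAGES:
    html = urlopen(page).read().decode("utf-8", errors="replace")
    finder = CanonicalFinder()
    finder.feed(html)
    print(f"{page} -> canonical: {finder.canonical}")
```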