Posts made by Nigel_Carr
-
RE: Duplicate Content Issues with Pagination
Pleasure Zack!
-
RE: Duplicate Content Issues with Pagination
Hi Zack
If you have set the parameter to 'No URLs' in Search Console, or restricted crawling to just one version, it's just a matter of time before the duplicates are dropped from the index. Just be patient and it should resolve itself.
There are ways to block them completely using robots.txt or a rewrite in .htaccess, but I haven't used them myself - maybe someone else can advise. There's a rough sketch after the link below:
https://www.hallaminternet.com/avoiding-the-seo-pitfalls-of-url-parameters/
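As an illustration only (I haven't used these in anger): if the offending parameter were ?view, as discussed further down this thread, the two approaches might look like this.

# robots.txt - stop compliant bots crawling any URL carrying the view parameter
User-agent: *
Disallow: /*?view=
Disallow: /*&view=

# .htaccess - 301 requests with a view parameter back to the bare URL
# note: the trailing ? strips the WHOLE query string, so this is only
# safe if view is the sole parameter in play
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteCond %{QUERY_STRING} (^|&)view= [NC]
  RewriteRule ^ %{REQUEST_URI}? [R=301,L]
</IfModule>

Either way, test carefully: one greedy pattern can block far more than intended.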
Regards Nigel
-
RE: Duplicate Content Issues with Pagination
Hi Zack - That's where parameters come into play - if you look further down the URL Parameters screen you can specify which ones, if any, should be crawled.
Choose the parameter from the list, state how it changes content (narrows, sorts etc. - pick 'Narrows' for a filter), then choose what Googlebot does with it. You can exclude all sorted pages this way - the settings end up looking roughly like the sketch below.
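From memory - so treat the exact wording as approximate - the settings for a hypothetical sort parameter look something like:

Parameter: sort
How does this parameter affect page content? -> Sorts
Which URLs with this parameter should Googlebot crawl? -> No URLs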
Regards
Nigel
-
RE: Duplicate Content Issues with Pagination
Hi Zack
There are a number of ways of dealing with this problem, all of which are covered here: https://moz.com/blog/seo-guide-to-google-webmaster-recommendations-for-pagination
For me, when I have sort problems I go to URL Parameters and specify exactly what Google should do with the result. It's reasonably simple to use parameters in Search Console and specify 'No URLs' for the parameter ?view.
However, a word of warning - you have to be really careful doing this, as you could end up blocking the whole site with the slightest slip (see the hypothetical example below). As with canonicals, noindexing and robots.txt, get a good SEO just to take a look.
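To show how small that slip can be, here are two hypothetical robots.txt lines - illustrations only, not taken from your site:

# intended: block only the parameterised duplicates
Disallow: /*?view=

# one stray edit later: this single line blocks the entire site
Disallow: /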
Best Regards
Nigel - Carousel Projects
-
RE: Sanity Check: NoIndexing a Boatload of URLs
Hi Michael
The problem you have is the very low-value content that exists on all of those pages, and the complete impossibility of writing unique titles, descriptions and content for them. There are just too many of them.
With a footwear client of mine I noindexed a huge slug of tag pages, taking the page count down by about 25% - we saw an immediate 22% increase in organic traffic in the first month (March 18th 2017 - April 17th 2017). The duplicates were all size and colour related. Since canonicalising (I'm English lol) more content and taking the site from 25,000 pages to around 15,000, the site is now 76% ahead of last year for organics. This is real, measurable change.
Now the arguments:
Canonicalisation
How are you going to canonicalise 10,000+ pages? Unless you have some kind of magic bullet you are not going to be able to, but let's look at the logic.
Say we have a page of Widgets (brand) and they come in 7 sizes. When the range is fully in stock, all of the brand/size pages will be identical to the brand page apart from the title and description. So it would make sense to canonicalise back to the brand. Even when sizes start to run out, all of the sizes will still be on the brand page. So size is a subset of the brand page.
Similar, but not quite the same, for colour. If colour is a tag then everything on a colour-sorted page will be on the brand page, so really they are the same page - just a slimmer selection. Now, I accept that the brand page will contain all colours, as it did all sizes, but the similarity is so great - 95% of the content being the same apart from the colour - that it makes sense to call them the same.
So for me canonicalisation would be the way to go, but it's just not possible as there are too many of them. (The tag itself is trivial - see the sketch below - it's applying it at that scale that isn't.)
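For reference, this is what the tag would look like on one of those size pages - the URLs are made up for illustration:

<!-- on /brand/widgets/size-7, pointing back at the full brand page -->
<link rel="canonical" href="https://www.example.com/brand/widgets/" />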
Noindex
The upside of noindex is that it is generally easier to put the noindex tag on the page, as there is no URL to tag. The downside is that the page is then not indexed in Google, so you lose a little juice. I would argue, by the way, that the chances of being found in Google for a size page are extremely slim - less than 2% of visits came from size pages before we junked them, and most of those were from a newsletter, so in reality it's <1%. Not worth bothering about. You could leave off the nofollow so that Google still crawls through all of the links on those pages - the better option (the tag is sketched below).
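That 'noindex but still follow' variant is the standard robots meta tag, placed in the head of each tag page:

<!-- keeps the page out of the index but lets Googlebot follow its links -->
<meta name="robots" content="noindex, follow" />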
Considering your problem, and having experience of a number of sites with the same issue, noindex is your solution.
I hope that helps
Kind Regards
Nigel - Carousel Projects.
-
RE: Sanity Check: NoIndexing a Boatload of URLs
Hi Mike
I see this a lot with sites that have a ton of tag groups. One site I am working on has 50,000 pages in Google, caused by tags appending themselves to every version of a URL - the site only has 400 products. Example:
Site/size-4
Site/womens/size-4
Site/womens/boots/size-4
Site/womens/boots/ankle/size-4
Site/womens/clarks/boots/size-4
Etc etc - if there are other tags like colour and features, this can cause a huge three-dimensional matrix of additional pages that can slow down the crawl of the site - Google may not crawl all of the site as a result.
If it's possible to canonicalise then that is the best option, as juice and follows are retained - very often the page the tag version should cite is simply the URL with the tag lopped off.
In extreme circumstances I would consider noindexing the pages, as they offer very skinny content and rubbish meta, and it's impossible to handle them individually. I have seen significant improvement in organics as a result.
Personally I don't think it's enough to simply leave Google to figure it out, although I have seen some sites with very high DA get away with it.
To be honest I am pretty shocked that Shopify doesn't have a feature to cope with this - you end up patching it in the theme yourself, along the lines of the sketch below.
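A rough sketch of that patch, assuming a standard theme where theme.liquid is editable - current_tags is Shopify's built-in list of tags active on the current page:

{% comment %} in the head of theme.liquid: noindex any tag-filtered view {% endcomment %}
{% if current_tags %}
  <meta name="robots" content="noindex, follow" />
{% endif %}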
Regards
Nigel
Carousel Projects.
-
RE: E-Commerce Site Collection Pages Not Being Indexed
It seems odd to deal with filtering here. I'd normally do that in Search Console under URL Parameters, but you have to be extremely careful altering stuff in there. If you email me the site I'll run a check on why the collections aren't showing.
-
RE: Subdomain vs. Separate Domain for SEO & Google AdWords
Hi mk
I am completely with Martin here. It never made sense creating a domain name with both company and product in it in the first place. You would be doubling the mistake by bringing it back as a subdomain.
In my opinion you either leave it as it is or, preferably, use a subfolder and 301 all of the old site back to the equivalent pages within it, i.e.
domain.com/category/separateproduct
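The 301s themselves might look roughly like this in .htaccess on the old domain - oldproductdomain.com and companyname.com are placeholders, not the actual sites:

# send every path on the old product domain to the matching subfolder URL
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?oldproductdomain\.com$ [NC]
RewriteRule ^(.*)$ https://www.companyname.com/category/$1 [R=301,L]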
It just doesn't make sense, from a number of standpoints, to run a separate site for just a subset of products. The site would continually fight with www.companyname.com for link juice. It's far better to keep the whole shebang under one umbrella.
Although part of what determines Quality Score (QS) is conversion history, I would argue that it is heavily skewed towards how well the ad text targets appropriately written on-page content/headings/meta.
Regards
Nigel
-
RE: E-Commerce Site Collection Pages Not Being Indexed
Try deleting these from the robots.txt:
Disallow: /collections/+
Disallow: /collections/%2B
Disallow: /collections/%2b
Submit again and see what comes up. I'd hazard that this is the most logical explanation, apart from having a noindex tag on the collections pages.
Are the blog pages coming up, by the way?
Right click > 'View source' and have a look whether there is a noindex tag in there.
Regards
Nigel
-
RE: How to fix: Attribute name not allowed on element meta at this point.
Without seeing that section of code it's hard to say, but from what you have posted I would assume the meta tags are in the wrong position on the page. They need to be in the head element and should read: <meta name="description" content="XXX" /> - see the sketch below.
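For placement, a minimal valid head looks like this (the title and description text are placeholders):

<head>
  <meta charset="utf-8" />
  <title>Page title</title>
  <meta name="description" content="XXX" />
</head>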
-
RE: Too many Tags and Categories what should I do to clean this up?
I always noindex tags but keep categories, then add unique content to each category heading to help with SEO. Some WP themes display that content, others don't. The problem with tags is that they have a habit of appending themselves to all versions of a page, so they explode into useless, low-content pages.
It would take an age to canonicalise tags on most sites, so in my opinion they are essentially worthless. You can always use Moz to check the page ranks before deciding. If the theme or an SEO plugin doesn't already handle it, the noindex can be bolted on as sketched below.
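A rough way to noindex WordPress tag archives, assuming you can drop a snippet into the theme's functions.php - is_tag() is WordPress's built-in check for tag archive pages:

// in functions.php: add a robots meta tag to tag archives only
add_action( 'wp_head', function () {
    if ( is_tag() ) {
        echo '<meta name="robots" content="noindex, follow" />' . "\n";
    }
} );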