Sorry for the confusion. By "search results" I thought you might have been specifically talking about putting keywords into a site search and getting the results page. I've noindexed that page.
What you've said makes sense.
Thanks Peter.
Yes, it's the latter instance that I was talking about.
Thanks Peter.
Thanks Peter.
Just to clarify: I'm not talking about search results pages. I'm talking about paginated category pages. I've honestly had a number of cases where sites have linked to those 2nd or 3rd pages. Weird, I know.
Anyway, it's only a few links so I'm not too concerned about it.
Cheers.
Hi Alan, that wasn't my understanding of how it worked. I thought the "follow" part of this only permitted the bots to literally follow those links to other pages, and that no link juice passes through. Maybe I misunderstood that?
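Just so we're definitely talking about the same thing, I mean a tag like this in the head (a generic example, not from any particular site):
<meta name="robots" content="noindex, follow">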
Thanks Peter. One other advantage I can think of that rel=prev/next has: if someone is looking at products on a site and they are on the 2nd or 3rd page, they might decide to link to that page. This will pass the link juice to that page (or collection of pages), whereas if the page were noindexed, it would be a wasted link.
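On the paginated pages themselves, the markup I have in mind would be along these lines (made-up URLs, just for illustration), e.g. on page 2:
<link rel="prev" href="http://www.example.com/category/page/1/">
<link rel="next" href="http://www.example.com/category/page/3/">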
Cheers,
Thanks Peter. I hadn't seen Google's official advice on this. Having thought about it again, it does make more sense, as I think it would be quite messy trying to get the rel=next/prev tags pointing to the non-parameter URLs. It's good to know that the canonical tag works in conjunction with these tags to point to the correct URL.
I know it's easier to just noindex those pages, but doesn't that mean you leak the link juice that goes to those pages? Telling Google that they are part of a series and having all that link juice combined into a single page should mean a more powerful page, shouldn't it?
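So, if I've understood the combination correctly, a parameter version of page 2 would carry something like this (made-up URLs, just to check my understanding):
<link rel="canonical" href="http://www.example.com/category/page/2/">
<link rel="prev" href="http://www.example.com/category/page/1/?sessionid=123">
<link rel="next" href="http://www.example.com/category/page/3/?sessionid=123">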
Thanks Peter.
Thanks Dan. I use Yoast's WordPress SEO. It's a great plugin. I have the author archive disabled.
We don't actually link the author's name to an author page, so I think we're ok there. Thanks for the clarification.
Thanks Willny. Can you clarify what you mean by "Unless the links are going to the author's list of posts"?
Hi,
I've a number of WordPress posts that were written by different authors, and I want to merge them under a single author. If Google sees that the post was originally rel=author'd to person A and that we later change the author reference to person B, will Google see this as suspicious in any way?
Or does it not matter, as long as it's only attributed to a single author at any one time?
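To be clear about what would actually change: the byline link in the post template is the standard authorship markup, something like this (hypothetical profile URL), and we'd simply be swapping person A's link for person B's:
<a href="https://plus.google.com/PROFILE_ID" rel="author">Person B</a>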
Thanks,
Leigh
Thanks Miriam. In an ideal world, I agree with you, but there are many reasons why this system will work better for them, so it looks like they will be going with it.
The "Joe Bloggs" name was just an example name. They will, of course, be using a believable looking name.
Thanks,
Hi,
I have a client who's made some changes to their content strategy.
They want to use a single author for all content produced and published, to maintain a consistent identity across the web. This single author is a persona, e.g. "Joe Bloggs", but it is not a real person.
This works fine for creating and publishing content (for their blog and outside blog posts). It allows many people to work on creating and publishing content under the same name, which for a number of reasons makes good logistical sense.
The problem arises when it comes to social marketing. They have set up Facebook and Google+ profiles and Facebook and Google business pages.
The main issue is that they are finding it difficult to friend other people because nobody knows this "Joe Bloggs" persona.
Can anybody offer advice on how to approach this kind of strategy?
Thanks,
Hi Cyrus,
I don't see any issues with the canonical tag.
I'll contact the help team.
Thanks,
Yes, but the non-www version 301 redirects to the www version.
Hi,
I'm not sure I want to list the domain here, but here's an example of what I mean. We create Google tracking links (Google URL Builder) for use in a newsletter. The homepage looks like this:
and one of the links in the newsletter might look like this:
http://www.site.com/?utm_source=newsletter&utm_medium=email&utm_content=offer&utm_campaign=1
When you look at the source code for both URLs, they both have the canonical tag equal to:
So, Google knows there's no duplicate content issue there. It would be good if the diagnostics tool could recognise that too.
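To illustrate with the same placeholder domain, the tag in the source of both URLs would be something like:
<link rel="canonical" href="http://www.site.com/">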
Thanks,
Leigh
Hi,
In the Crawl Diagnostics reports, I'm getting lots of duplicate error warnings, e.g. duplicate page title. In most cases these are tracking URLs, and the page has a canonical tag pointing to the original page.
It would be helpful if the crawl analysis reports could separate these out from ones that are of genuine concern.
It can also happen when there's a noindex tag on a page.
Thanks,
Leigh
Ok. Thanks for the advice, Ryan.
Thanks Ryan.
I've no direct contact with the developer, so I can't answer those questions. I'm afraid I just have to work with what my client is telling me.
From what you're saying, and if done correctly, the pages would look to Google as if they were in a folder on that domain, e.g. website.com/language-site, and we would geo-target that folder, and not the sub-domain?
Then we'd need to find a way to stop the search engines crawling the sub-domain. Would this be done in the robots.txt file?
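I'm guessing that would be something like this, served at the root of the sub-domain itself (just a sketch, assuming robots.txt applies per host):
User-agent: *
Disallow: /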
Do you think we'd just be better off using the sub-domain and forgetting about the rewrites? The main reason I'm advising him to go for a folder structure is the uncertainty of domain authority flowing to a sub-domain.
I have a client that's setting up a section of his site in a different language, and we're planning to geo-target those pages to that country. I have suggested a sub-folder set-up as it's the most cost-effective solution, and it will allow domain authority to flow into those pages.
His developer is indicating that they can only set this up as a sub-domain, for technical reasons, but they're suggesting they can rewrite the URLs to appear as sub-folder pages.
I'm wondering how this will work in terms of geo-targeting in Google Webmaster Tools. Do I geo-target the sub-domain or the sub-folder, i.e. does Google only see URLs, or does it physically see those pages on the sub-domain?
It seems like it might be a messy solution. Would it be a better idea just to forget about the rewrites and live with the site being a sub domain?
Thanks,
Thanks Ricko. That's what we're thinking. Put it up and see what happens before blocking pages or anything like that.
There is an existing site there already. We're doing a redesign, using a new eCommerce platform. The site currently has a domain authority of 27, and we'll be able to maintain all the category page URLs, which already have page authority, so that's good. It shouldn't take long for us to see what will happen with the new pages.
Thanks.
Thanks for the comprehensive reply, Ricko.
First, just to address what you mentioned about getting 2 listings: my concern was more about 2 of my pages competing for the same keywords, when 1 page with more PageRank (rather than the PageRank being split over 2 pages) would have a better chance of ranking. Do you know what I mean?
The form optimization solution sounds interesting, and this is stuff we're planning to do, but manually. My concern, though, is: will this make the pages unique enough? From what you've said, it seems it might just be?
On the last thing you mentioned about customer reviews: this is something that I've considered. The problem is the high number of products and the relatively low sales volume (initially at least), making this more of a long-term solution.
Thanks,
Leigh
Thanks EGOL.
I agree. The problem is that each metal type needs to be a separate product in order for the filter system to work correctly. And the filtering is very important so that we can provide the best user experience possible.
I'm working with someone who's setting up an online jewelry store. The jewelry is available in many metal types, so we're creating filters to provide a good user experience in trying to narrow down their choice.
Let's take the example of a wedding ring that's available in these options:
10kt yellow gold
10kt white gold
18kt yellow gold
18kt white gold
Palladium
Platinum
These are all entered as separate products so that they can be used in the filtering system. However, apart from some minor changes to the title and description, most of the content will be identical across these 6 product pages.
Also, many wedding ring styles are going to be very similar, so we're going to have very similar descriptions for a lot of the rings.
We're concerned about the problems this might cause with the search engines in terms of duplicate content. There are two issues that I can see (there may be more!): the near-identical content across the metal variants of each ring, and the very similar descriptions across different ring styles.
Also, these products are likely to come and go, so investing heavily in creating really unique content for them isn't really sustainable or affordable.
Any advice?
Thanks,
In my Crawl Diagnostics, I notice some 4xx client errors. They are appearing for pages that no longer exist, so I'm not sure what the problem is. Shouldn't they just be treated as 404s?
Anyway, on closer inspection I noticed that my 404 error page contains a canonical tag which points to the missing page. Could this be the issue? Is it a good idea to remove the canonical tag from this error page?
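In other words, when a missing URL such as http://www.example.com/old-page is requested (hypothetical URL, just to illustrate), the error page that comes back still has this in its head:
<link rel="canonical" href="http://www.example.com/old-page">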
Thanks.