Posts made by Digitator
-
RE: Panda penalty removal advice
In this case it was easy as they had created the duplicate domains themselves and had control over them, so it was just a case of getting them taken down.
-
RE: Rel=canonical vs noindex/follow - tabs with individual URLs
Hello CleverPHD!
Thank you very much for your response - that really clears a few things up. Most info around rel=canonical revolves around duplicate content so I just wanted to make sure!
Regarding your separate question, your assumption is correct - each hotel on the site is set up this way (and there are hundreds of them), which is why I am concerned.
I've just come into this project, but I assume the initial thinking was that they wanted to rank for a range of search terms around each hotel, e.g. "Xxx Hotel reviews", "map to Xxx Hotel" etc., and thought that dedicated pages for each term would help them.
What I am hoping is that by rel=canonicaling these pages to the main Hotel URL we will in effect be creating the "really awesome" page you refer to, while avoiding any potential penalties!
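To make that concrete, here's a minimal sketch of the mapping I have in mind, using the hotel-downtown-calgary example from my original question (the domain and helper name are placeholders, not the client's actual setup):

```python
from urllib.parse import urlsplit

# Placeholder base URL - not the client's real domain
BASE = "https://hotels.example.com"

def canonical_for(url):
    """Map any tab URL back to the main hotel page it belongs to."""
    path = urlsplit(url).path                   # drops the "?tab" query string
    hotel_slug = path.strip("/").split("/")[0]  # e.g. "hotel-downtown-calgary"
    return f"{BASE}/{hotel_slug}"

# Each tab page would then carry this in its <head>:
for tab_url in (f"{BASE}/hotel-downtown-calgary/reviews?tab",
                f"{BASE}/hotel-downtown-calgary/map?tab"):
    print(f'<link rel="canonical" href="{canonical_for(tab_url)}" />')
```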
Once again, thanks for your in depth response - it's very much appreciated!
-
RE: Panda penalty removal advice
Yes, we do have Bing Webmaster Tools set up - I agree, even though Bing is limited in terms of traffic volume, Bing Webmaster Tools does give a slightly different take on things compared to Search Console.
Damon.
-
RE: Panda penalty removal advice
Hello again Alan!
Agree with you 100% that this is an ongoing process. I asked the question with regard to getting the new hosting set up ASAP - if it wasn't going to be taken into account for the latest Panda update we would have a little more time.
As you say, having to wait almost a year for Google to rerun Panda is really difficult for everyone (not just us). It's a real pity that we didn't pick this up earlier when Panda was running more regularly.
I've just run another crawl and we have 79 3xx redirects and 26 4xx pages, most of which are thumbnail JPGs and category pages (which are noindexed anyway). As stated above, I'll get these fixed this week.
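In case it's useful, something like this is roughly how I plan to re-check those URLs once the fixes go in - just a sketch, with placeholder paths standing in for the real crawl export:

```python
import requests

# Placeholder URLs - in practice this list comes straight from the crawl export
urls = [
    "https://www.workingvoices.com/some-old-page/",
    "https://www.workingvoices.com/images/some-thumbnail.jpg",
]

for url in urls:
    resp = requests.get(url, allow_redirects=False, timeout=10)
    if 300 <= resp.status_code < 400:
        print(f"{resp.status_code} {url} -> {resp.headers.get('Location')}")
    elif resp.status_code >= 400:
        print(f"{resp.status_code} {url} (still broken)")
    else:
        print(f"{resp.status_code} {url} (fixed)")
```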
We completed a competitor content analysis and redeveloped our main landing pages around it. Together with our backlink profile, we think that gives us a good chance of hitting the top ten SERP results - we're targeting some quite specific keywords without particularly strong competition and have gained some excellent backlinks over the last few months.
Once again, thanks for your insight and help!
Damon.
-
RE: Panda penalty removal advice
Hi Alan
Thanks for your comprehensive response - you make some very good points.
1. Host: The client is currently changing host as the current one is very entry level and we were aware that we had a problem - having said that, the response times are a lot slower than when I last looked, so we'll get in touch with the current host to see what they can do now.
2. 404/301 pages: Again these are on the list for the team to pick up on. I didn't actually think that there were enough to cause a problem - I can imagine if there were hundreds we might have an issue, but I would have thought 20 or so would have been OK? I'll chase to get these fixed in any case.
3. Content: I guess this is the gray area between a page not ranking due to poor page quality and a website being "algorithmically adjusted" because of poor page quality. We've worked on all our main landing pages to make them more comprehensive, and from the research we have done we felt that we had done enough. We did consider noindexing the blog as well, but felt that as it was unique, while not particularly comprehensive, it shouldn't be causing any Panda problems.
Quick question - is it your experience that once Panda starts running it is too late to make changes to your website? I've read in a few places that it is, but not in others. I guess when it was running monthly it wasn't such an issue.
Once again, thank you very much for having a look - it's great to get a fresh set of eyes on the site.
Best
Damon.
-
Panda penalty removal advice
Hi everyone! I'm after a second (or third, or fourth!) opinion here!
I'm working on the website www.workingvoices.com that has a Panda penalty dating from the late March 2012 update. I have made a number of changes to remove potential Panda issues but haven't seen any rankings movement in the last 7 weeks and was wondering if I've missed something...
The main issues I identified and fixed were as follows (there's a quick audit sketch after the list):
- Keyword-stuffed, near-duplicate title tags - fixed with relevant, unique title tags
- Copies of the website on other domains creating duplicate content issues - fixed by taking these offline
- Thin content - fixed by adding content to some pages, and noindexing other thin/tag/category pages.
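The quick audit sketch mentioned above - it just pulls each page's title and robots meta so duplicate titles and missing noindex tags stand out (requests and BeautifulSoup assumed; the URL list is illustrative rather than the real crawl):

```python
from collections import Counter

import requests
from bs4 import BeautifulSoup

# Illustrative URLs - in practice this would be the full crawl list
pages = [
    "https://www.workingvoices.com/",
    "https://www.workingvoices.com/blog/",
]

titles = Counter()
for url in pages:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    title = (soup.title.string or "").strip() if soup.title else ""
    titles[title] += 1
    robots = soup.find("meta", attrs={"name": "robots"})
    noindexed = bool(robots and "noindex" in robots.get("content", "").lower())
    print(f"{url}\n  title:   {title!r}\n  noindex: {noindexed}")

# Any title seen more than once is a duplication candidate
print({t: n for t, n in titles.items() if n > 1})
```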
Any thoughts on other areas of the site that might still be setting off the mighty Panda are appreciated!
Cheers
Damon.
-
RE: Rel=canonical vs noindex/follow - tabs with individual URLs
Thanks for that Dana - appreciated.
At the moment it is set to "Let Googlebot decide", and I need admin access to look under the hood, which I don't have, so I'll take a look once I gain administrator access. I can see one potential issue though - some of their landing pages don't exist without the "?tab" parameter, which I think might make this a non-starter unfortunately.
Given the possibility that this method doesn't work, does anyone else have any thoughts on rel=canonical vs noindex in this situation?
-
Rel=canonical vs noindex/follow - tabs with individual URLs
Hi everyone
I've got a situation that I haven't seen in quite this way before. I would like some advice on whether I should be rel=canonicalizing or noindexing/following a range of pages on a client's website.
I've just started working on a website that creates individual URLs for tabs within each page which has resulted in several URLs being created for each listing:
Example URLs:
- hotel-downtown-calgary
- hotel-downtown-calgary/gallery?tab
- hotel-downtown-calgary?tab
- hotel-downtown-calgary/map?tab
- hotel-downtown-calgary/facilities?tab
- hotel-downtown-calgary/reviews?tab
- hotel-downtown-calgary/in-the-area?tab
Google has indexed over 1,500 pages with the "?tab" parameter (there are 4,380 pages indexed for the site in total), and also seems to be indexing some of these pages without the "?tab" parameter (i.e. "hotel-downtown-calgary/reviews" instead of "hotel-downtown-calgary/reviews?tab"), so the amount of potential duplication could be even greater. These tabbed pages are getting minimal traffic from organic search, so I've got no issues with taking them out of the index - the question is how.
These are the issues I see:
- Each tab has the same title as the other tabs for each location, so lots of title duplication.
- Each individual tab doesn't have much content (although the content each tab has is unique).
I would usually expect the tabs to be distinguished by the parameters only, not have unique URLs - if that was the case we wouldn't have a duplication issue.
So the question is: rel=canonical or noindex/follow? I can see benefits of both.
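To make the two options concrete, here's a rough sketch of the extra head markup each approach would add to a tab page like hotel-downtown-calgary/reviews?tab (the helper and the example.com URL are purely illustrative):

```python
def extra_head_tag(main_url, strategy):
    """Return the head markup a tab page would carry under each option."""
    if strategy == "canonical":
        # Consolidates signals into the main hotel page; tabs drop out of the index over time
        return f'<link rel="canonical" href="{main_url}" />'
    if strategy == "noindex_follow":
        # Keeps the tab out of the index but still lets its links be followed
        return '<meta name="robots" content="noindex, follow" />'
    raise ValueError(f"unknown strategy: {strategy}")

main = "https://example.com/hotel-downtown-calgary"
print(extra_head_tag(main, "canonical"))
print(extra_head_tag(main, "noindex_follow"))
```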
Looking forward to your thoughts!
-
Alternative domains redirected with 301 to the main domain
Hi everyone
I've got a website which gained a Panda penalty back in March 2012 because of a range of spammy practices (keyword stuffing in titles, indexed category and tag pages, duplicate domains).
I've fixed the titles and deindexed any pages that could be seen as thin or duplicate so I'm confident that any onsite Panda issues have been fixed.
As mentioned above, the client had also created over 40 alternative domains and pointed them at their main website folders (hence duplicating the website and content 40 times over). These domains have now been redirected via 301 redirects to the main website to ensure that any links they have gained are captured.
The reason for the redirection is that we initially took the domains down and saw a drop in traffic, and taking them down seemed the most likely cause. While Moz and Majestic are not showing any significant links to these domains (which is why they were originally taken down), past experience has taught me that these tools don't always pick up all referring domains.
Primary domain: www.workingvoices.com
5 Example Alternative Domains:
- presentationskillslondon.com
- workingvoiceslive.biz
- workingvoices.co.uk
- livingvoices.co.uk
- working-voices.net
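For what it's worth, a quick sketch of how I'd verify the redirects are behaving - it just checks that each of the alternative domains listed above answers with a single 301 pointing at the primary domain (the https scheme on the primary is an assumption):

```python
import requests

PRIMARY = "https://www.workingvoices.com"  # assumed scheme/host for the primary domain

alternative_domains = [
    "presentationskillslondon.com",
    "workingvoiceslive.biz",
    "workingvoices.co.uk",
    "livingvoices.co.uk",
    "working-voices.net",
]

for domain in alternative_domains:
    resp = requests.get(f"http://{domain}/", allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "")
    ok = resp.status_code == 301 and location.startswith(PRIMARY)
    print(f"{domain}: {resp.status_code} -> {location or '(no redirect)'} [{'OK' if ok else 'CHECK'}]")
```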
Question 1: At the same time as we took down the alternative domains (and experienced the drop in traffic), we removed duplicate instances of the Google Analytics code from the webpages. All the guidance we could find stated that duplicate instances of the code shouldn't affect your Analytics numbers, hence we assumed the drop was down to taking the alternative domains offline - but maybe the guidance we found was wrong?
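For reference, a rough way to sanity-check a page for duplicate tracking snippets is just to count the loader references in the rendered HTML - something like this (the marker strings cover the older ga.js and newer analytics.js loaders; a sketch only):

```python
import requests

url = "https://www.workingvoices.com/"
html = requests.get(url, timeout=10).text

# Each standard Google Analytics snippet references exactly one of these loaders
markers = ["google-analytics.com/analytics.js", "google-analytics.com/ga.js"]
counts = {marker: html.count(marker) for marker in markers}
print(counts)

if sum(counts.values()) > 1:
    print("More than one tracking snippet found - likely duplicate Analytics code")
```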
Question 2: It is now 3 months later, these alternative domains are still indexed by Google, and Panda hasn't run since October 2014, so we haven't experienced a recovery yet. Redirecting the domains should remove any issue of a Panda penalty, but now of course I am worried about Penguin - the last thing I want to do is open that can of worms!
This whole saga has been pretty complicated and I think I need some fresh sets of eyes.
What does everyone think? Could the initial drop have been due to the duplicate Analytics code being removed? Could the redirected domains trigger Penguin? Should we just take the alternative domains down and be done with them? Any other thoughts?
Looking forward to hearing your opinions!
Damon.