Don't understand this ... :-(
-
Hello,
I'm going nuts as I don't understand what's going on with this domain of a client.
We have this classical htaccess redirect
from http://domain.com to http://www.domain.com
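For reference, that kind of rule usually looks something like this (a minimal mod_rewrite sketch; the actual rule in your .htaccess may differ):

```apache
RewriteEngine On
# Send any request for the bare domain to the www host with a permanent (301) redirect
RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]
```

You can verify the redirect from the command line with `curl -I http://domain.com/` and check that the response is a `301` with a `Location:` header pointing at the www host.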
But I'm getting Page Authority for both domains, and the non-www version, which shouldn't be crawled, is getting the higher PA:
http://www.myanmar-rundreisen.de - PA 34
http://myanmar-rundreisen.de - PA 36
I'm attaching a file; you can see there that Googlebot is recognizing the 301 redirect from non-www to www.
But the site isn't doing well in Google at all; it seems the home page has a penalty. Duplicate content due to the non-www and www home pages?
So it would be great if somebody has a hint for me ... my client is losing trust in me
Thx!
-
Thanks!
-
Matt Cutts talked about this a few years back....let me find it.
Basically, where your server is (country-specific domains aside) doesn't matter to Google.
Google understands that people share servers and it's not that important in the scheme of things. What does matter is server up time.
-
Thanks for your support! I think the last tool reports show a little improvement.
But one more piece of information, or possible problem: on the same server, in another directory, another site of the client is hosted, which has had a very good Google standing for 6 or 7 years.
The HTML structure is similar, and it depends on the same CMS and similar CSS.
So could this be a problem for Google? Should the site be moved to another provider?
Once again thx
Guenter
-
Yes, agreed. I guess it's a waiting game for him to see how effective it has been.
But in my experience, rel=canonical has always solved the problem for duplicate content.
Thanks Darin
-
Yes, both can get indexed, especially if preferences and 301s weren't in place the last time Google crawled. I've noticed it takes time for Google to use the canonical on a page; I've seen it take 4 or 5 crawls to take effect correctly. But don't forget it's just a suggestion and not a directive. I think Google wants to make sure that it's in the best interest of the site before it adheres to it (just a guess).
Don't forget too that Google will only crawl a portion of a site when it crawls (especially for bigger sites) to make sure it doesn't take up too much bandwidth on your server. The home page may not have been crawled since the element was put in.
-
Yes, thanks,
I forgot to mention, this was set some weeks ago, and in Google's cached version the rel=canonical tag is in the source code, so they should have the newest page.
Just edited the post above a few seconds after your question
-
Yes, how long ago did you set this?
Has Google since indexed your page?
-
Thanks, I set it a couple of weeks ago.
<link rel="canonical" href="http://www.myanmar-rundreisen.de/" /> That should be fine?
-
Thanks, yes, the preferred domain is set to www.
-
Darin has a good point. Set your preferences.
Also rel=canonical.
Darin, if I'm not mistaken (maybe you can shed some light): don't both pages still get indexed even if one is redirected with a 301? I'm sure a rel=canonical will solve the issue!
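As a rough illustration of what that host-level canonicalization does (a hypothetical helper for this thread, not Moz's or Google's actual logic), the point is that both hostname variants should collapse to one canonical URL:

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_www(url: str) -> str:
    """Map a URL to its www variant, mirroring a non-www -> www 301/canonical.

    Hypothetical helper for illustration only.
    """
    parts = urlsplit(url)
    host = parts.netloc
    if not host.startswith("www."):
        host = "www." + host
    # Rebuild the URL with the canonical host, keeping path/query/fragment intact
    return urlunsplit((parts.scheme, host, parts.path, parts.query, parts.fragment))

# Both hostname variants collapse to the same canonical URL:
print(canonical_www("http://myanmar-rundreisen.de/"))      # http://www.myanmar-rundreisen.de/
print(canonical_www("http://www.myanmar-rundreisen.de/"))  # http://www.myanmar-rundreisen.de/
```

Until Google has consolidated the two hosts (via the 301, the canonical tag, and the preferred-domain setting), it can treat them as separate pages, which is why both can show a Page Authority.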
Best Wishes,
Hampig M
BizDetox
-
Have you set your preferred domain in Google Webmaster Tools?
(Make sure you have verified both versions of your domain)
Configuration > Settings > Preferred domain > select the radio button for the www version