Duplicate page error
-
SEOmoz gives me a duplicate page error because my homepage www.monteverdetours.com is the same as www.monteverdetours.com/index. Is this actually an error? And is Google penalizing me for it?
-
The site should not look different. We changed some URLs for unranked keywords on minor pages (with 301 redirects). We added the canonical tag.
We got rid of HTTPS and 301-redirected it to HTTP, and implemented some of the suggestions you made above.
No major stuff, and the site is not recovering... so strange. I think we must have done something structurally wrong. I would happily pay someone to review my site and make suggestions. I am at my wits' end. Need someone familiar with MODX.
Can you see something obviously wrong with the site?
-
What other changes happened at that same time? The site seems different?
-
Between the 12th and 16th of July it dropped from page 8 to page 45, and from the 20th to the 24th it went from page 45 to page 86.
-
Do you know which day it dropped on? There really isn't any reason that anything above should cause a drop. There have been updates - so let's figure out what is going on first.
-
Hi Mat
Thank you so much for your detailed answer; I really appreciated it. So here is an update: I asked my webmaster to implement the changes you suggested for www.monteverdetours.com, and the site has dropped from page 2 or 3 to page 89. We thought it would bounce back after a few days, but it hasn't. Do you have any idea why this would be so?
Best Regards,
Janet
-
Thank you so much!
-
You are diluting your homepage strength, as you could have some links pointing to one version of the page and some to another. I would create a 301 redirect from /index to the plain .com version. In Google's eyes you have two pages with the same content; this is a common mistake on a lot of websites' homepages.
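For example, on an Apache server this can be done in the .htaccess file (a minimal sketch, assuming mod_rewrite is enabled, which is typical for MODX installs; adjust the domain and paths to your setup):

    RewriteEngine On
    # Permanently redirect /index (and /index.html) to the root URL
    RewriteCond %{REQUEST_URI} ^/index(\.html?)?$ [NC]
    RewriteRule ^ http://www.monteverdetours.com/ [R=301,L]

Once the redirect is in place, both versions resolve to a single URL, so any link equity pointing at /index is consolidated onto the homepage.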
For more info read:
http://www.seomoz.org/learn-seo/duplicate-content
http://www.seomoz.org/learn-seo/redirection
http://www.seomoz.org/blog/url-rewrites-and-301-redirects-how-does-it-all-work
-
A great answer.
-
Yes and no! (That was helpful, wasn't it??!)
Google is smarter than the SEOmoz crawler when it comes to dealing with this issue. SEOmoz seems to flag up home page variants quite often, but I haven't seen this cause a problem for a major search engine in years. Generally, then, it's pretty safe.
However, you do have some similar problems. To check the above I did a couple of searches for phrases that appear on your home page, limiting the results to pages from your domain. Whilst the domain.com vs domain.com/index issue doesn't seem to be a problem, you do have something weird going on with your home page.
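To be concrete, the kind of query I mean looks like this (the quoted phrase is a hypothetical example; use a distinctive sentence from your own home page):

    site:www.monteverdetours.com "a distinctive phrase from your home page"

Any result other than the home page itself is a candidate duplicate.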
The following pages do appear to be duplicates of your home page, and these ARE appearing in the index:
https://www.monteverdetours.com/~desafio/
https://www.monteverdetours.com/index.html?iframe=true&width=95%25&height=95%25
And your home page isn't being listed properly.
What you need to do ASAP:
- Consistently link to your home page: where you have the home link up next to the sitemap link, point it to the home page throughout the site, and only to the www.domain.com version
- Log in to Google Webmaster Tools and tell it to ignore the following URL parameters:
- iframe
- width
- height
- Look at getting a canonical tag added throughout your site to ensure that the correct URL is always indexed (see the sketch after this list)
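As a minimal sketch, the canonical tag goes in the <head> of each page and points at the one URL you want indexed; the href below shows the homepage as an example, but every page should reference its own preferred URL:

    <link rel="canonical" href="http://www.monteverdetours.com/" />

With this in place, even if a page is reached via /index or with stray URL parameters, search engines are told which version to index.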
I hope that is helpful.
Related Questions
-
Our protected pages 302 redirect to a login page if not a member. Is that a problem for SEO?
We have a membership site that has links out on our unprotected pages. If a non-member clicks on one of these links, it sends a 302 redirect to the login/join page. Is this an issue for SEO? Thanks!
Technical SEO | rimix
-
Blog archive pages are meta noindexed but still flagged as duplicate
Hi all. I know there are several threads related to noindexing blog archives and category pages, so if this has already been answered, please direct me to that post. My blog archive pages have preview text from the posts. Each time I post a blog, the last post on any given archive page shifts to the first spot on the next archive page. Moz seems to report these as new duplicate content issues each week. I have my archive pages set to meta noindex, so can I feel good about continuing to ignore these duplicate content issues, or is there something else I should be doing to prevent penalties? TIA!
Technical SEO | mkupfer
-
Forum posts with multiple pages give duplicate meta descriptions
My website has a forum that is using the title of the posts as a meta description. The problem is that when a post becomes long and splits into multiple pages, Google tells me that I have duplicate meta description issues because the 2nd and 3rd pages are using the same meta description. What is the best course of action here?
Technical SEO | Angelos_Savvaidis
-
Noticed a lot of duplicate content errors...
How do I fix duplicate content errors on categories and tags? I am trying to get rid of all the duplicate content and I'm really not sure how to. Any suggestions, advice and/or help on this would be greatly appreciated. I did add the canonical URL through the Yoast SEO plugin, but I am still seeing errors. I did this on over 200 pages. Thanks for any assistance in advance. Jaime
Technical SEO | slapshotstudio
-
How Does Google's "index" find the location of pages in the "page directory" to return?
This is my understanding of how Google's search works, and I am unsure about one thing in particular:
- Google continuously crawls websites and stores each page it finds (let's call it the "page directory")
- Google's "page directory" is a cache, so it isn't the "live" version of the page
- Google has separate storage called "the index", which contains all the keywords searched. These keywords in "the index" point to the pages in the "page directory" that contain the same keywords.
- When someone searches a keyword, that keyword is accessed in the "index" and returns all relevant pages in the "page directory"
- These returned pages are given ranks based on the algorithm
The one part I'm unsure of is how Google's "index" knows the location of relevant pages in the "page directory". The keyword entries in the "index" point to the "page directory" somehow. I'm thinking each page has a URL in the "page directory", and the entries in the "index" contain these URLs. Since Google's "page directory" is a cache, would the URLs be the same as the live website (and would the keywords in the "index" point to these URLs)? For example, if a webpage is found at www.website.com/page1, would the "page directory" store this page under that URL in Google's cache? The reason I want to discuss this is to understand the effects of changing a page's URL by understanding how the search process works better.
Technical SEO | reidsteven75
-
Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
Good morning Moz... This is a weird one. It seems to be a "bug" with Google, honest... We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat map purposes, so for instance www.three-clearance.co.uk/apple-phones.html could be reached via www.three-clearance.co.uk/apple-phones.html?ref=menu or www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on. GWMT was told of the ref parameter and the canonical meta tag was used to indicate our preference. As expected, we encountered no duplicate content issues and everything was good. This is the chain of events:
- Site migrated to new platform following best practice, as far as I can attest to. Only known issue was that the verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between relaunch on the 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified.
- URL structure and URIs were maintained 100% (which may be a problem, now)
- Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expand the report out and the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I. Run, not walk, to Google and do some Fu: http://is.gd/yJ3U24 (9 versions of the same page in the index, the only variation being the ?ref= URI)
- Checked Bing and it has indexed each root URL once, as it should.
Situation now:
- The site no longer uses the ?ref= parameter, although of course there still exist some external backlinks that use it. This was intentional and happened when we migrated.
- I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today is at over 1,000 (another wtf moment)
- I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and the HTML site-map page.
The ?ref= URLs in the index have the disadvantage of actually working, given that we transferred the URL structure and of course the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist, and won't drop them from the index but will instead apply a dupe content penalty. Or maybe call us a spam farm. Who knows. Options that occurred to me (other than maybe making our canonical tags bold or locating a Google bug submission form 😄) include:
- A) robots.txt-ing the ?ref= URLs, but to me this says "you can't see these pages", not "these pages don't exist", so isn't correct
- B) Hand-removing the URLs from the index through a page removal request per indexed URL
- C) Applying a 301 to each indexed URL (hello Bing dirty sitemap penalty)
- D) Posting on SEOmoz because I genuinely can't understand this.
Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use, because the verification only went missing when we relaunched the site without this tracking. Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting. I have no idea why and can't think of the best way to correct the situation. Do you? 🙂
Edited to add: As of this morning the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There are no messages explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
Technical SEO | Tinhat
-
Webmaster Tools 404 Errors Pages Never Created
Recently, 196 404 errors appeared in my WMT account for pages that were never created on my site.
- Question: Any thoughts on how they got there (i.e. WMT bug, tactic by competitor)?
- Question: Thoughts on impact, if any?
- Question: Thoughts on resolution?
Technical SEO | Gyi