The system shows duplicate content for the same page (the main domain and index.html). Is this a SEOmoz error?
-
Should I be worried that this will affect SEO? Most sites redirect to the index.html page, right?
[edited by staff to remove toolbar data]
-
Awesome! Thank you so much for the advice, Ryan!
-
Thank you for sharing the link.
Your site definitely has a duplication problem. Presently your entire site is available via the root domain and the www subdomain.
http://www.physicaltherapyspecialists.com/
http://physicaltherapyspecialists.com/
Both of the above links work, and both serve the site at the listed URL with a 200 status code. That means your entire site is duplicated. This is an SEO issue which needs to be addressed.
The first step towards resolution is deciding how you wish your site to be seen: at the www or the non-www URL. Next, you need to be absolutely consistent with your selected URL version. If you have managed hosting, contact your host and request that your site be 301 redirected to the chosen URL version.
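If your host can't do it for you and your site runs on Apache, this is typically handled with a few lines in the site's .htaccess file. A minimal sketch, assuming mod_rewrite is enabled and www is the chosen version (swap in your own domain):

```apache
# .htaccess at the site root — 301-redirect every non-www request
# to the same path on the www version. Replace the domain with your own.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^physicaltherapyspecialists\.com$ [NC]
RewriteRule ^(.*)$ http://www.physicaltherapyspecialists.com/$1 [R=301,L]
```

The 301 (permanent) status is what tells search engines to consolidate the two versions rather than treat them as separate sites.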
Once this issue is resolved, I recommend you take a look at the SEO Beginner's Guide. It contains a lot of truly helpful information. You can learn any particular element of SEO such as canonical tags fairly quickly. Understanding the optimal times to use canonical tags vs 301s takes experience, and even experts will disagree at times. Your issues can be handled by yourself if you are willing to learn and invest time in your site.
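For reference, a canonical tag is a single link element placed in the head of each duplicate page, pointing at the one URL you want indexed. A sketch using the homepage as an example:

```html
<!-- In the <head> of any duplicate/alternate version of the page. -->
<!-- The href is the single URL search engines should index. -->
<link rel="canonical" href="http://www.physicaltherapyspecialists.com/" />
```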
Good luck.
-
Hi Ryan,
The URL is physicaltherapyspecialists.com. I just took out the redirect and canonical URL code. I will wait it out. But I am not sure what the duplicate pages even are. SEOmoz just shows the pages that are duplicates, but not the duplicate URLs.
Aaron
-
Based on your results and how you worded the reply, I am concerned you have not implemented the tag correctly. Can you provide your URL so I can take a look?
Please also share a couple examples of duplicate pages and the URL which should be the primary page for the selected content.
-
Hi Ryan,
I added a url="canonical" tag for the homepage, and now I have 38 pages of duplicate content (supposedly) according to my SEOmoz dashboard. I called my host over the weekend, and they have no idea how to redirect for SEO. I would have been much happier leaving everything alone and going back to my 8 duplicate pages... Any further suggestions?
Thanks,
Aaron
-
Aaron, your SEOquake extension is adding data to each of your replies which makes them very difficult to read.
I have viewed your responses in both Firefox and Chrome. Please remove the extra content from the replies.
-
I just added a 301 redirect for the entire old site domain to the new site with the "/". I hope that this works.
-
Hi Ryan,
I looked into this, and I currently have the redirect of my old domain going to /. Is there anything else you can think of here? I will go in and see if I have an .htaccess file with proper 301 language.
Aaron
[edited by staff to remove toolbar data]
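For reference, the whole-domain 301 mentioned above is typically a single mod_alias directive in the old domain's .htaccess. A sketch, with a placeholder for the new site's domain:

```apache
# .htaccess on the OLD domain — mod_alias sends every path to the
# same path on the new site with a permanent (301) redirect.
# "www.new-site.example" is a placeholder; use your real new domain.
Redirect 301 / http://www.new-site.example/
```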
-
Aaron,
Change any links pointing to index.html so they point to the root domain instead, i.e. /
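For example, in the site's HTML:

```html
<!-- Before: exposes the duplicate index.html URL -->
<a href="/index.html">Home</a>

<!-- After: points at the root instead -->
<a href="/">Home</a>
```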
-
Thanks for the explanation, Ryan! I have tried so many times to get this right, but the host for one of my client's sites keeps screwing it up. I will try to rectify it ASAP. Thank you so much, Aaron.
[edited by staff to remove toolbar data]
-
If you are seeing this error, the most likely cause is that your site isn't redirecting the index.html page to /. If a site offers two URLs that serve the same content, search engines will treat it as duplicate content. The best approach is to properly 301 redirect to the preferred URL.
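If the site runs on Apache, this redirect is commonly done with a mod_rewrite rule in .htaccess. A minimal sketch, assuming mod_rewrite is enabled:

```apache
# .htaccess — 301-redirect any request for index.html (in any directory)
# to the bare directory URL. Matching on THE_REQUEST (the raw request line)
# avoids a loop with Apache's internal DirectoryIndex rewrite.
RewriteEngine On
RewriteCond %{THE_REQUEST} /index\.html [NC]
RewriteRule ^(.*/)?index\.html$ /$1 [R=301,L]
```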