Duplicate Homepage - How to fix?
-
Hi Everyone,
I've tried using BeamUsUp SEO Crawler and have found one warning and two errors on our site.
The warning is for a duplicate meta description, and the errors are a duplicate page and a duplicate title.
For each problem it's showing the same two pages as the source of the error, but one has a slash at the end and one doesn't. They're both for the homepage.
Has anyone seen this before? Does anyone know if this is anything we should worry about?
-
Moz was warning me about thin content, and I got the idea that it was because of "/", based on a checker report that said zero unique phrases.
Maybe it's because I have "Home" and "Home_master", although "Home_master" has the eye icon crossed out (so it shouldn't be visible to Google).
-
My answer above applies specifically to your situation too, @Elchanan. You are not being penalised by Google for duplicate content in your example.
As I mentioned, the root of a domain is a special case. The version with the slash at the end is considered the same URL as the one without the slash by both browsers and search engines. So much so that it is impossible to redirect from one to the other because that creates a redirect loop - the page is redirecting to itself.
If that SEO tool has flagged this as an issue, it's because the tool isn't programmed to handle this special case correctly.
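To illustrate the root-URL special case (just a generic sketch, nothing specific to any particular SEO tool): when a browser requests the bare root URL, the empty path is normalized to "/" in the HTTP request line, so both forms produce literally the same request.

```python
from urllib.parse import urlsplit

def request_path(url):
    # The path actually sent on the wire in the HTTP request line.
    # An empty path at the root is normalized to "/", so
    # "https://example.com" and "https://example.com/" both produce
    # the identical request: GET / HTTP/1.1
    path = urlsplit(url).path
    return path or "/"

print(request_path("https://example.com"))   # -> /
print(request_path("https://example.com/"))  # -> /
```

That's why a crawler reporting these as two duplicate pages is really reporting one page twice.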
Hope that helps.
Paul
-
I'm having a similar problem with my Wix site. I am being penalised by Google for duplicate content between "my-site.com/" and "my-site.com".
I tried a 301 redirect in Wix, but it doesn't seem to help according to https://datayze.com/thin-content-checker.php.
Could someone help me?
-
Thanks for the great answer, Paul, that's very helpful.
-
This is an incorrect implementation in the BeamUsUp tool. The hostname (the basic root URL) is a special case. Both the version with the ending slash and without the ending slash are considered by browsers and search engines to be exactly the same.
In fact, you cannot redirect one to the other. Because the browser is programmed to consider them the same, you'll create an infinite loop. So not only is there nothing you should do, there's nothing you can do.
This is the only case where this is true, though! For all other internal URLs, the version with the slash is considered a completely different URL from the one without the slash. So unless you redirect one version to the other for internal pages, you'll have duplicate content issues.
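That rule can be sketched roughly like this (a hypothetical helper, assuming the site's convention is no trailing slash on internal pages; the opposite convention works just as well if applied consistently):

```python
from urllib.parse import urlsplit

def redirect_target(url):
    # Returns the URL to 301-redirect to, or None if already canonical.
    parts = urlsplit(url)
    # Special case: "" and "/" at the root are the same URL, so there
    # is nothing to redirect (doing so would create a loop).
    if parts.path in ("", "/"):
        return None
    # Internal pages: strip the trailing slash to match the site's
    # chosen convention.
    if parts.path.endswith("/"):
        return parts._replace(path=parts.path.rstrip("/")).geturl()
    return None
```

For example, `redirect_target("https://example.com/blog/")` returns `"https://example.com/blog"`, while the root URL in either form returns `None`.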
Hope that helps.
Paul
-
Hi there,
If you 301 redirect one version of the URL to the other (whichever version matches the rest of your URLs), it should solve this duplicate content issue.
Thanks!