A crawl revealed two home pages
-
After doing a site crawl using the Moz tool, I found two home pages: www.domain.com/ and www.domain.com. Both URLs have exactly the same metrics, and I have set a preferred domain in Google. Will this hurt SEO? Should I claim www.domain.com/ as well as www.domain.com and domain.com in Search Console?
Thanks
-
You guys are awesome! Thank you.
-
Just double-check in a clean browser (history cleared, then F5) or in incognito mode to confirm the default.
Sounds good Tom!
-
Thanks Nigel. After doing a little investigating, I believe Google Search Console may have added the trailing slash for formatting reasons. It appears with a trailing slash in the home view, where you can see domains; however, when viewing the preferred domain, it does not appear with a trailing slash. To test this, I used a practice site and added it without a trailing slash; following my submission, Google added a trailing slash under the domain view.
So I should be set?
Thanks!
-
Hi Tom
It will still be there but will slowly decline as the new-format property takes over. You won't lose anything; GSC just tracks. You will see the non-trailing-slash data begin to populate over the next few weeks.
Regards
Nigel
-
Thanks Nigel. What will happen to the existing data under the current preferred domain (with the trailing slash) if I switch the preferred domain to the version without it? I worry that the existing data will be erased or not transferred.
-
Hi Tom
If it redirects to www.domain.com, then that must also be set up in GSC, as that is now the preferred domain format. It also looks better without the trailing slash.
Regards
Nigel
-
Thank you for the fast responses.
Currently, "www.domain.com/" has been claimed and set as preferred; all Search Console data appears on this account (www, with trailing slash).
"domain.com/" has also been claimed, with no data on this view (non-www).
However, as stated, "www.domain.com/" (preferred, with trailing slash) redirects to www.domain.com. So, per your suggestions, I should add "www.domain.com". Should this now be my preferred domain?
Thanks guys!
-
Hi Tom
Moz will not reveal a 301 unless there is a nasty redirect chain. If you use Screaming Frog it will reveal all the directives for every page.
There must be a redirect but it might be worth checking if it's a 301 (permanent) or 302 (temporary) - it should be 301.
The good news is that it is redirecting.
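To make the 301-versus-302 check concrete: fetch just the headers (for example with `curl -I http://www.domain.com/`) and read the status line of the first response. A minimal sketch of classifying that status line (the sample line below is illustrative, not taken from the actual site):

```python
def redirect_type(status_line: str) -> str:
    """Classify the first line of an HTTP response by its redirect status code."""
    code = int(status_line.split()[1])
    if code == 301:
        return "301: permanent redirect (what you want for canonicalization)"
    if code in (302, 303, 307):
        return f"{code}: temporary redirect (consider changing it to a 301)"
    return f"{code}: not a redirect"

# Example status line, as it would appear in `curl -I` output:
print(redirect_type("HTTP/1.1 301 Moved Permanently"))
# -> 301: permanent redirect (what you want for canonicalization)
```

Only a 301 tells search engines the move is permanent and consolidates signals onto the target URL.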
As Martijn suggests you should add the preferred one to Search Console. It doesn't 'do' anything but you will be able to see both versions.
Regards Nigel
-
We are currently on HTTP; however, the page domain.com/ seems to redirect to domain.com, as I cannot access domain.com/ without it taking me to domain.com (sorry for the redundancy). However, the Moz crawl did not reveal a 301. Does this resolve the duplicate content issue? Thanks for the fast answers.
So far, only the www and non-www versions have been claimed.
-
In addition to what Nigel is suggesting, I would also recommend claiming www.example.com and example.com. If you used to have an HTTP site and have moved to HTTPS in recent years, I would recommend verifying that version as well. All of this gives you the best insight.
This is worth fixing only because it's usually an easy change; right now it won't hurt you, as there are many other issues that carry far more weight for a search engine. This particular one affects millions of sites.
-
Hi Profitect
These are two separate home pages that duplicate each other, so they have the potential to undermine all of your SEO efforts, as Google sees them as separate pages.
You will need to put a directive in the .htaccess file to move all traffic to one or the other. It's a two-minute job for a developer.
This would move all URLs to a trailing-slash format (assuming the site is HTTPS):
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteBase /
  RewriteCond %{REQUEST_FILENAME} !-f
  RewriteCond %{REQUEST_URI} !(.*)/$
  # Force trailing slash
  RewriteRule ^((.*)[^/])$ $1/ [L,R=301]
</IfModule>
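For the www / non-www half of the canonicalization discussed in this thread, a companion pair of directives can sit inside the same mod_rewrite block. This is only a sketch: domain.com is a placeholder for the real host, and it assumes the www version is the one being kept.

```apache
# Force non-www to www (domain.com is a placeholder; adjust to the real host)
RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [L,R=301]
```

As with the trailing-slash rule, the R=301 flag is what makes the redirect permanent, so search engines consolidate signals onto the target version.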
Regards
Nigel