Redundant Hostnames Issue in GA
-
I noticed another post on this, but I have another question. I am getting this message from Analytics:
Property http://www.example.com is receiving data from redundant hostnames. Consider setting up a 301 redirect on your website, or make a search and replace filter that strips "www." from hostnames. Examples of redundant hostnames: example.com, www.example.com.
We don't have a 301 in place that manages this and I am quite concerned about handling that the right way. We do have a canonical on our homepage that says:
<link rel="canonical" href="http://www.example.com/" />
I asked on another site how to safely set up our 301 and I got this response:
RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
RewriteRule ^ http://www.example.com%{REQUEST_URI} [R=301,L,NE]
Is this the best way of handling it? Are there situations where this would not be the best way?
We do have a few subdomains like beta.example.com in use and have a rather large site, so I just want to make sure I get it right.
Thanks for your help!
Craig
-
Rewrites are generally regarded as the best way to handle this kind of redirect, but your DNS or hosting provider likely offers a URL-forwarding feature that achieves the same result without modifying your site's .htaccess. Same result, different technique.
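One caveat with the rule quoted in the question: the negated condition matches every hostname that is not www.example.com, so requests to subdomains such as beta.example.com would also be redirected. A narrower sketch (assuming Apache with mod_rewrite, placed in the main site's .htaccess) that only catches the bare domain:

```apache
RewriteEngine On
# Match only the bare domain; subdomains such as beta.example.com pass through
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^ http://www.example.com%{REQUEST_URI} [R=301,L,NE]
```

With this variant, the subdomains mentioned in the question are left alone rather than being forced onto www.example.com.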
I use a "www" record at my DNS host to forward to my non-www domain because WordPress sometimes has issues with RewriteRule; it seems to break permalinks. I set it once with my DNS host and never think about it again.
Related Questions
-
Breadcrumb issue
The site has two main categories for scooters: a Type of Scooter menu item with nested types, and a Manufacturer menu item with nested makes, so every scooter can be found under either category depending on how you search. The Manufacturer category is mostly thin content and is set to noindex, as are its nested make categories. However, when searching for products, Google invariably uses the breadcrumb from the Manufacturer category rather than from the Type of Scooter category, which is indexed. Should it be a concern that Google is using breadcrumbs from non-indexed URLs, even though they are followed and the site is therefore navigable?
Technical SEO | MickEdwards
-
Index bloating issue
Hello, in the last month I noticed a huge spike in the number of pages indexed on my site, which I think is hurting my SEO quality. While I only have about 90 pages in my sitemap, the number of indexed pages jumped to 446, with about 536 pages being blocked by robots. At first we thought this might be due to duplicate product pages showing up in different categories on the site, so we added rules to our robots.txt file to keep those pages out of the index, but the number has not gone down. I've tried to consult with our hosting vendor, but no one seems concerned or has any idea why there was such a big jump in the last month. Any insights or pointers would be greatly appreciated, so that I can fix/improve my SEO as quickly as possible. Thanks!
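One mechanical detail worth knowing here: a robots.txt Disallow only blocks crawling, not indexing, so URLs blocked there can remain in (or even enter) the index if they are linked elsewhere. A sketch of the distinction, using a hypothetical path:

```text
# robots.txt (sketch): blocks crawling of matching URLs, but does NOT
# remove already-indexed ones from the index
User-agent: *
Disallow: /duplicate-products/
```

To actually drop duplicates from the index, the usual tools are a meta robots noindex tag or a rel="canonical" link on the duplicate pages, both of which require those pages to remain crawlable so the directive can be seen.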
Technical SEO | Saison
-
Wrapping my head around an e-commerce anchor filter issue, need help
I am having a hard time understanding how Google will deal with this scenario; I would love to hear what you guys think or suggest. A category page on the site in question looks like this: http://makeupaddict.me/6-skin-care. All fine and well. But paginated or filtered category pages look like these: http://makeupaddict.me/6-skin-care#/page-2 and http://makeupaddict.me/6-skin-care#/price-391-1217. From my understanding, Google does not index a URL fragment without a hashbang (#!), but that doesn't mean they do not still crawl those pages, correct? That is where the issue comes in: since fragments are not indexed and are dropped from the URLs, when Google crawls a filtered or paginated page it is getting different results. As far as I understand (someone can correct me if I am wrong), a fragment is not sent to the server the way a query string is. So if I am using PHP and land on http://makeupaddict.me/6-skin-care or http://makeupaddict.me/6-skin-care#/price-391-1217 and use something like $_SERVER['REQUEST_URI'] to get the URL, both pages will return http://makeupaddict.me/6-skin-care, since the fragment is handled client-side. That being the case, does Google follow that standard, or do they have a custom function that grabs the whole URL, fragment and all? And if they are crawling the page with the fragment but seeing it fragment-less, how are they handling the changing content?
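The client-side claim above is easy to demonstrate: the fragment (everything after #) is never sent to the server, and Python's standard library splits it off the same way a browser does. A small illustration of the URL mechanics (not Google's actual behavior):

```python
from urllib.parse import urldefrag

# The fragment is purely client-side: a browser requesting either URL
# sends the server the same request for /6-skin-care.
base, fragment = urldefrag("http://makeupaddict.me/6-skin-care#/price-391-1217")
print(base)      # http://makeupaddict.me/6-skin-care
print(fragment)  # /price-391-1217
```

This is why server-side code sees identical requests for the plain and the filtered page: the filter state only ever exists in the browser.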
Technical SEO | LesleyPaone
-
Sitemap issue - Tons of 404 errors
We've recreated a client site in a subdirectory (mysite.com/newsite) of his domain, and when it was ready to go live, we added code to the .htaccess file in order to display the revamped website on the main URL. These are the directions that were followed: http://codex.wordpress.org/Giving_WordPress_Its_Own_Directory and http://codex.wordpress.org/Moving_WordPress#When_Your_Domain_Name_or_URLs_Change. This has worked perfectly, except that we are now receiving a lot of 404 errors and I'm wondering if this isn't the root of our evil. This is a self-hosted WordPress website, and we are actively using the WordPress SEO plugin, which creates multiple sitemap files with only 50 links in each. The sitemap_index.xml file tests well in Google Webmaster Tools but is pulling a number of links from the subdirectory folder. I'm wondering if the way we made the site live really is our issue, or if there is another problem that I cannot see yet. What is the best way to attack this issue? Any clues? The site in question is www.atozqualityfencing.com. https://wordpress.org/plugins/wordpress-seo/
Technical SEO | JanetJ
-
Http and https issue in Google SERP
Hi, I've noticed that Google is indexing some of my pages as regular http, like this: http://www.example.com/accounts/, and some pages as https, like this: https://www.example.com/platforms/. When I performed a site audit in various SEO tools, I got around 450 duplicated pages, shown as pairs of the same URL, one with http and one with https. Our site lets people register and open an account, and later log in to the website with their details. I'm not the one responsible for the site's maintenance in our company, and I would like to know whether this is an issue; if it is, I'd like to know what is causing it and how to fix it, so I can forward the solution to the person in charge. I would also like to know, in general, what the real purpose of https vs. http is, and which method our website should prefer. Currently, when URLs are typed manually into the address bar, all of them load fine, with or without https at the start. I'm not allowed to expose our site's name, which is why I wrote example.com instead; I hope you can understand that. Thank you so much for your help, and I'm looking forward to reading your answers.
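For what it's worth, if the site standardizes on https, a common approach is a blanket 301 from http to the https equivalent. A sketch, assuming Apache with mod_rewrite (in the same spirit as the .htaccess rules discussed elsewhere on this page):

```apache
RewriteEngine On
# Redirect any plain-http request to its https equivalent
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L,NE]
```

Pairing the redirect with canonical tags that point at the https versions helps consolidate the duplicate pairs the audit tools are flagging.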
Technical SEO | JonsonSwartz
-
Duplicate Content Issues
We have a "?src=" parameter in some URLs which is treated as duplicate content in the crawl diagnostics errors. For example, xyz.com?src=abc and xyz.com?src=def are considered duplicate-content URLs. My objective is to make my campaign free of these crawl errors. First of all, I would like to know why these URLs are considered to have duplicate content. And what's the best solution to get rid of this?
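A common remedy (a sketch using the hypothetical xyz.com URLs from the question) is a canonical link element on the parameterized variants pointing at the clean URL, so the ?src= versions consolidate rather than compete:

```html
<!-- served identically on xyz.com/?src=abc and xyz.com/?src=def -->
<link rel="canonical" href="http://xyz.com/" />
```

Since the ?src= parameter only identifies a traffic source and does not change the page content, the canonical tells crawlers to treat every variant as the one clean URL.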
Technical SEO | RodrigoVaca
-
How unique does a page need to be to avoid "duplicate content" issues?
We sell products that can be very similar to one another. Product example: Power Drill A and Power Drill A1. With these two hypothetical products, the only real difference between the two pages would be a slight change in the URL and a slight modification of the H1/title tag. Are these two slight modifications significant enough to avoid a "duplicate content" flagging? Please advise, and thanks in advance!
Technical SEO | WhiteCap
-
Index Issues with Iframes
I have pages that are being scraped and displayed in iframes, and I wanted to see if anyone could tell me how I could get these pages to be indexed. Here is the URL of one of the pages: http://coggno.com/onlinetraining/safety-/other/lab-safety-1INde
Technical SEO | PageOnePowerGang