WWW and non-WWW: how to check it... for sure. No, really, for absolutely sure!!
-
OK, I know it has been asked, answered, and re-asked, but I am going to ask for a specific reason. As you know, anyone who is a graphic designer or web developer is also an expert in SEO... right???
I am dealing with a client who is clinging to a developer but wants us to do the SEO on a myriad of sites. All connect to his main site via links, etc. The main site was just redeveloped by a developer who claims extensive SEO knowledge. The client who referred me to them is getting over twenty times the organic clients they are, and is in a JV with the new client. Soooo, I want to show them once and for all that they are wrong on the www versus non-www question. When I do a site:NewClient.com search in Google I get a total of 13 www.newclient.com URLs and 20 newclient.com URLs without the www. Oddly, none are dupes of the other. So where www.NewClient.com/toy-boat/ is there, the other might be the non-www NewClient.com/toy-boat/sailing-green/.
Even the contact page is at www.NewClient.com/contact versus the non-www NewClient.com/Contact-us/.
But both pages seem to resolve to the non-www version. (A note here: I originally instructed the designer to redirect non-www to www because the page authority was on www.NewClient.com, and he did the opposite.)
With pages that are actually PDF files, if you try to use www.NewClient.com/CoolGuy.pdf it comes up 404.
When I check our sites using site:We-Build-Better.com, ours return all www.We-Build-Better.com/ URLs.
So, any other advice on how to verify whether these are correct or incorrect? Oddly, we have discovered that sometimes in OSE, even with a correct canonical redirect, it shows one without authority and the other with... we have contacted support.
Come on mozzers, hook a brother up!
-
Hi again Robert,
The God of All Things Code is away from the office for a while today, so we will need to wait a little longer for his input.
A couple of things have happened since my last post though:
Those twitching antennae just wouldn't stop nudging me to look a little further, as everything I see with this site is saying "template" to me. Add to that the URL rewrites which hide the actual URLs and the broken PDF files... so I went digging a little further and... Aha!
Not a template, but a "Theme". The entire site is built in WordPress!
Now, I am pretty sure that the broken PDFs are the result of the WordPress URL rewrites changing the directory name in combination with the hard-coded links. If this is the case, then it ought to be just a matter of adding a rule to the .htaccess file to deal specifically with the PDFs. The order in which the rules appear will determine whether the issue is resolved or not.
I'll let you know as soon as I've confirmed the specifics with my Boss.
Hope that helps,
Sha
-
Well done, good point on the preferred domain setting in WMT. Thanks,
-
OK Ryan, you don't sleep and that was funny ;).
-
OK Robert,
First, I'm going to tip my hat to Ryan, who has perfectly explained the fact that some of what you see in your site: search can be because the 301s have not yet been recognized by the search engine.
Second, an apology to Alan, as I went right to the LAMP solution because of prior knowledge from a previous thread or two that you were going to be talking about .htaccess.
Now...I will spell out a couple of things because I have a feeling that you are likely to come across them again in the future and quick recognition can often mean a lot of time saved.
So here goes.
When I first read your question, my little web developer antennae suddenly started twitching! When I hear that there are multiple versions of a file with different file names deployed on a server I generally suspect one of two things:
- The site has been developed from a standard Template package, or
- There has just been a little "untidiness" taking place in the development process.
In your example, the /contact.php was the original file deployed live to the server, then the /contact-us.php file was created to replace it (presumably for SEO purposes - debatable, but that is a whole other conversation). As I'm sure you can imagine, /contact is pretty common in template packages, although the biggest template producer out there is much easier to spot, as the pages in their templates are always in the format /index-1.htm etc. It may just be that the developer creates their own standard template from an original design and rather than pre-planning and creating the file names to maximize SEO, they create standard page names and change them later.
While there is nothing really wrong with either of these things (unless you are charging the client for an original design and buying a pre-designed template at a fraction of the cost), both methods do open up the way for mistakes and errors to occur. As a result, there are a few things to keep in mind if you are working this way -
- It is a much better idea to build on a development server so that none of the files that will become obsolete during the process will be indexed by search engines in the meantime. Tidy architecture, remove the obsolete files, test, then push to production.
- When changing file names it is ALWAYS better to re-name the existing file and do a global update of links rather than create a duplicate with a different name. As soon as you create two files, you open up the possibility of accidentally linking both files within the site. You could have /contact.php linked from the home page and contact-us.php linked from the footer for example. There is a danger here that should you decide to delete the unwanted file, you create broken links without knowing it, or you have duplicate content. Either way, you have to recognize the problem and either fix it, or put a 301 in place to catch it.
- NEVER hard code your links, because as soon as you change the name of the directory you placed your files in, you create a broken link! If you use relative links, the change of directory name will not matter.
I can see from Screaming Frog that some of the URLs for the PDF files have 301s in place, but it appears that the redirect URL may also be hard-coded to the /pdfs directory. The fact that they all return a 404 when the directory name is changed to match that section makes it purely a guess as to what is happening here. It seems both www and non-www PDFs are returning 404s in the browser.
The picture is muddied a little by the fact that there appear to be internal URL rewrites in the mix as well (to produce those pretty URLs with trailing slashes). So, there are a few options as to why the PDFs are not accessible:
- They are not actually on the server at all (unlikely)
- The names of the PDFs themselves have been changed, so even if the URL rewrite is sending the request to the new directory, the file requested does not exist.
- The /pdfs directory has been named something completely different and the hard coding is the problem
- The /pdfs directory has been moved to another location within the site architecture
I tried guessing a couple dozen of the obvious options, but no luck, I'm afraid.
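If it helps, this kind of guessing can be scripted. Below is only a rough sketch in Python using the requests library; CoolGuy.pdf is the file from your example, but the hostnames and candidate directory names are pure assumptions on my part, so swap in whatever seems plausible for this site:

# Rough sketch: test a handful of candidate locations for a "missing" PDF
# and report which ones respond. All hosts and directory names below are
# placeholders - substitute the real domain and any directories worth trying.
import requests

FILENAME = "CoolGuy.pdf"  # the PDF that currently 404s
HOSTS = ["http://newclient.com", "http://www.newclient.com"]
CANDIDATE_DIRS = ["", "pdfs", "pdf", "files", "documents", "wp-content/uploads"]

for host in HOSTS:
    for directory in CANDIDATE_DIRS:
        path = ("/" + directory + "/" + FILENAME).replace("//", "/")
        url = host + path
        try:
            # HEAD is enough here; we only care about the status code
            response = requests.head(url, allow_redirects=False, timeout=10)
            print(response.status_code, url)
        except requests.RequestException as exc:
            print("ERR", url, exc)

Anything that comes back 200 (or a 301 pointing somewhere sensible) tells you where the files actually live, which in turn tells you which rewrite rule is missing.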
There is one other possibility, in that the internal URL rewrites and 301 redirects could be creating a problem for each other. I am not clever enough to identify whether this is the case without a hint from the code, but will ask the God of All Things Code (my Boss) if he can answer that for me when daytime arrives 8D
OK....this is now so long that I really need to read the whole thread back to see if I have forgotten anything! If I find something I have missed, or can find anything else when help arrives, I'll be back!
Hope it makes some sort of sense and ultimately helps,
Sha
-
This info is really not browser dependent, just displayed differently.
But as I stated elsewhere, if you PM me the URL I can give you a site-wide report that will show you any canonical problems, or any problems for that matter.
-
Thanks for this Alan, I use Linux / Apache but having the IE info is a big help. Usually have Chrome or Firefox up, but some real estate sites here only use IE.
-
"I want a sure way to know this ...person... did what they are telling their client they did."
Perhaps someone has more creativity than I do, but I do not know of any means by which you can be 100% certain a sitewide 301 is implemented without seeing the file on the server. The "file" varies based on the server type. As you know, for Apache servers the .htaccess file is the right one.
Even if you saw the .htaccess file, it is possible for another file to override the command. The way I have always verified is by looking at the site itself. Check the home page and a few other pages. If they are all 301'd properly, then I presume the developer performed their job correctly. It would actually be a lot more work for the developer to attempt to fool you by 301'ing part of the site but not all of it.
I also suggest ensuring your site's www or non-www standard appears correctly in your crawl report.
"Is my assumption correct that if a 301 was done in .htaccess, there should be no www showing in a Google site: search?"
That is not necessarily true. If you have a site which shows mixed URL results, then over time the results from a site: search will be standardized, but it will take time, as Google needs to crawl each and every page of the site and see the 301. Also, if any page is blocked by robots.txt, for example, then Google may not see the 301 for that page and will still list the old URL.
If you changed the Google WMT preferred domain setting, then it is true that you will only see one version of the URL. I would specifically advise you NOT to change that setting in this case, as it may cover up the developer's issue which you are trying to locate. For now, you can wait 30 days and perform a site: search. Investigate any bad URLs you find.
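If checking pages one at a time in the browser gets tedious, the same spot check can be scripted. This is only a sketch in Python with the requests library, and it assumes you are standardizing on the www version; newclient.com and the paths are placeholders, so paste in the real non-www URLs from your site: search or crawl report:

# Sketch: spot-check that non-www URLs return a 301 to their www equivalents.
# The URL list is a placeholder - use the URLs you found in the site: search.
import requests

urls_to_check = [
    "http://newclient.com/",
    "http://newclient.com/toy-boat/",
    "http://newclient.com/contact-us/",
]

for url in urls_to_check:
    response = requests.get(url, allow_redirects=False, timeout=10)
    location = response.headers.get("Location", "")
    expected = url.replace("http://", "http://www.", 1)
    ok = response.status_code == 301 and location.rstrip("/") == expected.rstrip("/")
    print("OK   " if ok else "CHECK", response.status_code, url, "->", location)

A 302, a 200 on the non-www version, or a Location header pointing somewhere unexpected is exactly the kind of thing you are trying to catch.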
-
If you want, Robert, PM me the URL and I will give you a site-wide check.
-
I shot you a PM. Just don't want the other guy's info out. If it was my site and I had full control I would tell all. Sha got one too. Thanks
-
We are Linux on all of them though. So the .htaccess file is the bomb with a 301, and we follow up with setting the preference in Google Webmaster Tools.
-
Private Message. OK, should have been obvious.
-
Well, if Robert private messages Sha, then you would be missing that message.
-
Sha, what does PM stand for? Am I missing something?
-
Just a point: you don't need to do a 301 in the .htaccess file.
I work with Microsoft technologies, and we don't use them; .htaccess is a Linux/Apache thing.
-
Unfortunately, the other developer controls all of it. We develop sets of sites that are essentially microsites that advertise particular facets of our clients' professional practice. With our sites, when we have the main site and the microsites, we make the 301 change in the .htaccess and then set the preference with Google per Webmaster Tools. We look first to see where the page authority lies and redirect from weak to strong, if it is just www/non-www. With a new TLD, obviously, it is from old to new.
I want a sure way to know this ...person... did what they are telling their client they did. It does not appear so. With ours, when we do a site:OurSite search we get what we assumed on every page of Google search results. With this one it is four pages with the 13 www and 20 non-www URLs. Some www URLs resolve to the non-www and some do not. When I look in OSE, I see mention of a redirect from www to non-www, and the non-www URLs all have a PA of 1 and a DA of 15. With www, the PA is 25 for the home page.
Is my assumption correct that if a 301 was done in .htaccess, there should be no www showing in a Google site: search?
Thanks
-
No .htaccess file access... it is hard to even get to site pages to place links to the microsites.
I will PM the URL. Thanks Sha
You are correct, I want to know whether this other developer really did a 301 in the .htaccess file that will allow all weight to inure to one or the other URL.
-
Hi Robert.
Once you determine which version of a URL you would like to represent your site, the best method to enforce that decision is to use a 301 redirect. For example, direct all non-www traffic to the www version of the URL, the same way SEOmoz URLs appear. With this approach, 100% of your URLs will appear as the "www" version in SERPs and there will never be any confusion or conflict.
I've heard people talk about using canonicals or setting the preferred domain in WMT. Neither step is necessary as long as the 301 is in place. The reason I still do both is that I like to account for failures in a process. You never know when someone will make an error, modify an .htaccess file incorrectly, and wipe out your redirect.
If you have the redirect in place, OSE and similar tools should clearly see the redirect and act appropriately every time. If the tool does not work correctly, I would examine the HTTP response headers of the page to ensure the 301 is working properly. If it is, then I would perform the same action you did and report the bug.
If you do not take the proper steps to enforce a "www" or "non-www" structure, you will see the results which you described. Some users will visit and link to each version of the page, which will lead to both versions of the URLs being indexed. Google will index a version based on which was discovered first or which version it deems more important based on links and other factors. When you perform searches for a site, some URLs will appear with the "www" and some without it. The backlinks will be divided and, as you know, that is bad for SEO. The duplicate content issue will set off alarms for the SEOmoz crawler and similar tools, but Google will still index one version of the page.
I am not sure if this completely answers your question Robert. If I missed anything, feel free to ask.
-
The SEO Toolkit sees the same problems that Bing sees. You need Windows, and you need to install IIS (add features) first:
http://www.iis.net/download/SEOToolkit
-
What is the SEO Toolkit that runs on Windows?
Best,
Christopher
-
Hi Robert,
OK, just to clarify...
- You want to check for sure that newclient.com is 301 redirected to www.newclient.com?
- You want to check for sure that ALL URLs which have been individually 301'd are redirecting to www.newclient.com/filename?
- You want to understand why the non-www version of the PDF files works and the other doesn't?
Right off the top, the definitive way to check whether there is a properly functioning redirect in place is to type the URL into a browser and see whether it resolves to the redirect target :). You can also run Screaming Frog and see what status the pages return, but be aware that this does not always reflect the real situation in the browser (pages can return a status that does not match what you see).
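If you would rather not rely on the browser or the crawler alone, a few lines of Python (using the requests library; the URL below is only an example) will show you both the raw status the server returns for the exact URL you request and where you actually end up once redirects are followed:

# Sketch: compare the raw response for a URL with where it finally resolves.
# The URL below is only an example - use the page you want to verify.
import requests

url = "http://newclient.com/contact-us/"

# What the server says for this exact URL (no redirects followed)
raw = requests.get(url, allow_redirects=False, timeout=10)
print("Raw status:  ", raw.status_code)
print("Location:    ", raw.headers.get("Location", "(none)"))

# Where a browser would actually end up
followed = requests.get(url, timeout=10)
print("Final URL:   ", followed.url)
print("Final status:", followed.status_code)

If the raw status is a 301 and the final URL is the version you want to rank, the redirect is doing its job; a 200 on both versions means there are two live copies of the page.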
On the other questions, I think perhaps what you really want is to first determine what is happening and then, WHY?
So, first things first:
- do you have access to the .htaccess file?
- Can you provide the URL (and .htaccess if you have it)? You can PM this info if you don't want to share it publicly.
Sha
-
Not quite sure I understand what you want to check, but as long as one 301's to the other it does not really matter. It may take some time for the search engines to catch up.
Are you saying you want to check if it is resolving correctly?
In IE, press F12, then select Network and start capturing; you will see if it is using a 301 or a useless 302. If you want to prove to your client that the developer is not on the ball, do a scan with the SEO Toolkit and show them the results. If you don't have Windows to install it on, I will do one for you.
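For anyone without Windows handy, the same 301-versus-302 check can be scripted. Here is a rough sketch in Python using the requests library; the URL is a placeholder:

# Sketch: print the full redirect chain for a URL and flag anything that
# is not a 301. The URL is a placeholder - use the page you want to test.
import requests

url = "http://www.newclient.com/"

response = requests.get(url, timeout=10)

# response.history holds each redirect hop, in order
for hop in response.history:
    note = "" if hop.status_code == 301 else "  <-- not a permanent (301) redirect"
    print(hop.status_code, hop.url, note)

print(response.status_code, response.url, "(final)")

A 302 anywhere in that chain is the "useless" case above: visitors still arrive, but it is not the permanent signal the search engines need.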