Should I be running my crawl on our www address or our non-www address?
-
I currently run our crawl on oursitename.com, but am wondering if it should be run on www.oursitename.com instead.
-
It does make sense. Based on what you wrote, ours is not set up correctly. But now I have the language to better work with our provider.
You are very helpful!! Thank you!
-
There's really no way to avoid the existence of the two different addresses, THMCC - they're a byproduct of the way web servers and domain names work. They both exist as soon as a website is created. And visitors tend to use them interchangeably.
They aren't "actually" two different sites, they are just two different addresses that refer to the same pages, causing the search engines to see them as duplicate content. Kind of like how your house's location can be described by the lot number on the city plan, or by the postal address. Same house - different ways of showing where it's located.
If the 301 redirect is done correctly, the search engines will understand that everything should be considered to live at the one primary address, and they'll pass the other version's authority along to the primary version automatically - so there's no second version left to compete with.
You can easily tell if the redirect is working properly. Let's assume, for example, that you decide the www.thmedicalcallcenter.com version is the primary and redirect the non-www version to it. When you type the non-www address into the browser's address bar and hit Enter, you should actually see the URL in the address bar change to the www version of the address.
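If you'd rather check from a script than from a browser, a minimal sketch like the one below shows the raw answer the non-www address gives back (this assumes www.thmedicalcallcenter.com is the chosen primary; Python's http.client never follows redirects, so you can see the status code and Location header directly):

```python
import http.client

# Ask the non-www host for the home page and inspect the response
# without following any redirect.
conn = http.client.HTTPConnection("thmedicalcallcenter.com")
conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status, resp.getheader("Location"))
conn.close()

# A correctly configured setup should print something like:
# 301 http://www.thmedicalcallcenter.com/
```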
And yes, you absolutely must have both addresses taken care of. You have no way of controlling whether someone types the address with or without the www., and you want either one to get forwarded on to the primary address.
Does that all make sense?
Paul
-
Thanks so much, Paul!!
It was the SEO Moz helpdesk that told me I needed to have oursitename.com in addition to www.oursitename.com. Is this true? Setting up oursitename.com and the redirect seems to have caused a lot of errors, mostly duplicates, probably from the redirect not being done correctly. But overall, was/is it in our best interest to have both www.oursitename.com and oursitename.com?
-
Thanks again!! One more question, if you don't mind: if the 301 redirect is done correctly, will this fix the competition issue in all the search engines? If not, should I get rid of oursitename.com?
-
As far as the search engines are concerned, thmedicalcallcenter.com and www.thmedicalcallcenter.com are two separate sites that will compete against each other and dilute each other's authority and ranking unless one is 301-redirected to the other.
As SEO5 indicates, it's best to assess which version already has the most incoming links and use that version as the primary, redirecting the other one to it.
This is best done with a 301 redirect written into your site's .htaccess file. In addition, there is a spot in Google Webmaster Tools (GWT or WMT) where you can also hint to Google which version of the site you want to be the primary.
It's not enough to use only the hint in GWT, though, as that applies solely to Google; it will do nothing to correct the problem in the other search engines.
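For reference, a commonly used .htaccess rule for this looks something like the sketch below. Treat it as a starting point rather than a drop-in fix - the exact rule depends on how your hosting and any existing rewrites are set up (this version assumes the www address is the primary):

```apache
# Send every request for thmedicalcallcenter.com to the same path on
# www.thmedicalcallcenter.com with a permanent (301) redirect.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^thmedicalcallcenter\.com$ [NC]
RewriteRule ^(.*)$ http://www.thmedicalcallcenter.com/$1 [R=301,L]
```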
Paul
-
I thought they were the same... with a 301 redirect from oursitename.com to www.oursitename.com, so I am confused as to why one would have more inbound links than the other. Also, what is WMT?
Thanks!!!
-
Pick the one that has more inbound links associated with it and already has higher-ranking pages in the index. Make sure you set the preference in WMT to either the www. or the non-www version.
Related Questions
-
Lag time between MOZ crawl and report notification?
I did a lot of work to one of my sites last week and eagerly awaited this week's MOZ report to confirm that I had achieved what I was trying to do, but alas I still see the same errors and warnings in the latest report. This was supposedly generated five days AFTER I made the changes, so why are they not apparent in the new report? I am mainly referring to missing metadata, long page titles, duplicate content and duplicate title errors (due to crawl and URL issues). Why would the new crawl not have picked up that these have been corrected? Does it rely on some other crawl having updated (e.g. Google or Bing)?
Moz Pro | Gavin.Atkinson
-
Crawl Diagnostics Summary Problem
We added a robots.txt file to our website and there are pages blocked by robots.txt. The Crawl Diagnostics Summary page shows there are no pages blocked by robots.txt. Why?
Moz Pro | iskq
-
1 page crawled - again
Just had to let you know that it happened again. So right now we are at 2 out of the last 4 crawls. Uptime here is 99.8% for the last 30 days, with a small downtime due to an update process on 18/5 from around 2:30 to 4:30 GMT. In relation to: http://moz.com/community/q/1-page-crawled-and-other-errors
Moz Pro | alsvik
-
Pages Crawled: 1 Why?
I have some campaigns which have only 1 page crawled, while some other campaigns, with completely similar URLs (subdomains) and numbers of keywords and pages, have all pages crawled... Why is that so? I have also waited a while and so far there has been no change...
Moz Pro | BritishCouncil
-
A question about Mozbot and a recent crawl on our website.
Hi All, Rogerbot has been reporting errors on our websites for over a year now, and we correct the issues as soon as they are reported. However, I have 2 questions regarding the recent crawl report we got on the 8th. 1.) Pages with a "no-index" tag are being crawled by Roger and are being reported as duplicate page content errors. I can ignore these, as Google doesn't see these pages, but surely Roger should ignore pages with "no-index" instructions as well? Also, these errors won't go away in our campaign until Roger ignores the URLs. 2.) What bugs me most is that resource pages that have been around for about 6 months have only just been reported as duplicate content. Our weekly crawls have never picked up these resource pages as being a problem, so why now all of a sudden? (Makes me wonder how extensive each crawl is?) Anyone else had a similar problem? Regards GREG
Moz Pro | AndreVanKets
-
Our Duplicate Content Crawled by SEOMoz Roger, but Not in Google Webmaster Tools
Hi Guys, We're new here and I couldn't find the answer to my question. Here it goes: We had SEOMoz's Roger crawl all of our pages and he came up with quite a few errors (Duplicate Content, Duplicate Page Titles, Long URLs). Per our CTO and using our Google Webmaster Tools, we informed Google not to index those Duplicate Content pages. Our Long URL errors are redirected to SEF URLs. What we would like to know is whether Roger is able to tell that we have instructed Google not to index these pages. My concerns are: Should we still be concerned if Roger is still crawling those pages while the errors are not showing up in our Webmaster Tools? Is there a way we can let Roger know so they don't come up as errors in our SEOMoz tools? Thanks so much, e
Moz Pro | RichSteel
-
Can I force another crawl on my site to see if it recognizes my changes?
I had a problem with duplicate content and titles on my site. I fixed them immediately and I'm wondering if I can run another crawl on my site to see if my changes were recognized. Thanks, Shaun
Moz Pro | daugherty
-
Crawl reports URLs with duplicate content but it's not the case
Hi guys! Some hours ago I received my crawl report. I noticed several records of URLs with duplicate content, so I went to open those URLs one by one. Not one of those URLs really had duplicate content, but I have a concern, because the website is a product showcase and many articles are just images with an href behind them. Many of those articles use the same images, so maybe that's why the SEOmoz crawler's duplicate content flag is raised. I wonder if Google has a problem with that too. See for yourself how it looks: http://by.vg/NJ97y and http://by.vg/BQypE - those two URLs are flagged as duplicates. Please mind the language (Greek) and try to focus on the URLs and content. ps: my example is simplified just for the purpose of my question.
Moz Pro | MakMour