Two "Twin" Domains Responding to Web Requests
-
I do not understand this point in my Campaign Set-Up.
They are the same site as far as I understand. Can anyone help, please?
Quote from SEOMOZ
"We have detected that the domain www.neuronlearning.eu and the domain neuronlearning.eu both respond to web requests and do not redirect. Having two "twin" domains that both resolve forces them to battle for SERP positions, making your SEO efforts less effective. We suggest redirecting one, then entering the other here."
thanks
John
-
They're the same site, but not the same URL. Notice that one of those URLs begins with www and the other does not. It's just a quirk of how web servers are set up, and the question of which of the two should be the one address for your site is called a canonicalization issue.
Most webmasters choose to use the www version and redirect the non-www version to it via settings on the web host. Here's some more reading on canonicalization.
-
That is the problem: they are the same site.
That means Google can index both versions, and visitors and other sites can create backlinks to both versions. That's not good, because it splits your backlinks between two URLs instead of consolidating them on one.
You need to set up a 301 redirect from one version to the other, as well as set a preferred domain in Google Webmaster Tools.
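If the site runs on Apache, a minimal .htaccess sketch of that redirect might look like the lines below. This is an assumption about your hosting: it presumes mod_rewrite is enabled and that you want the www version to be canonical (swap the two hostnames to prefer the non-www version instead):

# Permanently (301) redirect neuronlearning.eu to www.neuronlearning.eu
RewriteEngine On
RewriteCond %{HTTP_HOST} ^neuronlearning\.eu$ [NC]
RewriteRule ^(.*)$ http://www.neuronlearning.eu/$1 [R=301,L]

On nginx, IIS, or a shared host without .htaccess access, the same permanent redirect is usually available through the server config or the hosting control panel.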
Hope this helps.
Mike
-
Related Questions
-
Google Webmaster Tools is saying "Sitemap contains urls which are blocked by robots.txt" after HTTPS move...
Hi Everyone, I really don't see anything wrong with our robots.txt file after our https move that just happened, but Google says all URLs are blocked. The only change I know we need to make is changing the sitemap url to https. Anything you all see wrong with this robots.txt file?

# This file is to prevent the crawling and indexing of certain parts
# of your site by web crawlers and spiders run by sites like Yahoo!
# and Google. By telling these "robots" where not to go on your site,
# you save bandwidth and server resources.
#
# This file will be ignored unless it is at the root of your host:
# Used:    http://example.com/robots.txt
# Ignored: http://example.com/site/robots.txt
#
# For more information about the robots.txt standard, see:
# http://www.robotstxt.org/wc/robots.html
# For syntax checking, see:
# http://www.sxw.org.uk/computing/robots/check.html

# Website Sitemap
Sitemap: http://www.bestpricenutrition.com/sitemap.xml

# Crawlers Setup
User-agent: *

# Allowable Index
Allow: /*?p=
Allow: /index.php/blog/
Allow: /catalog/seo_sitemap/category/

# Directories
Disallow: /404/
Disallow: /app/
Disallow: /cgi-bin/
Disallow: /downloader/
Disallow: /includes/
Disallow: /lib/
Disallow: /magento/
Disallow: /pkginfo/
Disallow: /report/
Disallow: /stats/
Disallow: /var/

# Paths (clean URLs)
Disallow: /index.php/
Disallow: /catalog/product_compare/
Disallow: /catalog/category/view/
Disallow: /catalog/product/view/
Disallow: /catalogsearch/
Disallow: /checkout/
Disallow: /control/
Disallow: /contacts/
Disallow: /customer/
Disallow: /customize/
Disallow: /newsletter/
Disallow: /poll/
Disallow: /review/
Disallow: /sendfriend/
Disallow: /tag/
Disallow: /wishlist/
Disallow: /aitmanufacturers/index/view/
Disallow: /blog/tag/
Disallow: /advancedreviews/abuse/reportajax/
Disallow: /advancedreviews/ajaxproduct/
Disallow: /advancedreviews/proscons/checkbyproscons/
Disallow: /catalog/product/gallery/
Disallow: /productquestions/index/ajaxform/

# Files
Disallow: /cron.php
Disallow: /cron.sh
Disallow: /error_log
Disallow: /install.php
Disallow: /LICENSE.html
Disallow: /LICENSE.txt
Disallow: /LICENSE_AFL.txt
Disallow: /STATUS.txt

# Paths (no clean URLs)
Disallow: /.php$
Disallow: /?SID=
disallow: /?cat=
disallow: /?price=
disallow: /?flavor=
disallow: /?dir=
disallow: /?mode=
disallow: /?list=
disallow: /?limit=5
disallow: /?limit=10
disallow: /?limit=15
disallow: /?limit=20
disallow: /*?limit=250

Technical SEO | vetofunk
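For reference, the sitemap change the poster mentions would just swap the protocol in the Sitemap directive. Assuming the sitemap keeps the same path after the move, the line would become:

Sitemap: https://www.bestpricenutrition.com/sitemap.xml
-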
How do I "undo" or remove a Google Search Console change of address?
I have a client that set a change of address in Google Search Console where they informed Google that their preferred domain was a subdomain, and now they want Google to also consider their base domain (without the change of address). How do I get the change of address in Google Search Console removed?
Technical SEO | KatherineWatierOng
-
How Does Google's "index" find the location of pages in the "page directory" to return?
This is my understanding of how Google's search works, and I am unsure about one thing in particular:

1. Google continuously crawls websites and stores each page it finds (let's call it the "page directory").
2. Google's "page directory" is a cache, so it isn't the "live" version of the page.
3. Google has separate storage called "the index", which contains all the keywords searched. These keywords in "the index" point to the pages in the "page directory" that contain the same keywords.
4. When someone searches a keyword, that keyword is accessed in the "index" and returns all relevant pages in the "page directory".
5. These returned pages are given ranks based on the algorithm.

The one part I'm unsure of is how Google's "index" knows the location of relevant pages in the "page directory". The keyword entries in the "index" point to the "page directory" somehow. I'm thinking each page has a URL in the "page directory", and the entries in the "index" contain these URLs. Since Google's "page directory" is a cache, would the URLs be the same as on the live website (and would the keywords in the "index" point to these URLs)? For example, if a webpage is found at www.website.com/page1, would the "page directory" store this page under that URL in Google's cache? The reason I want to discuss this is to understand the effects of changing a page's URL by understanding the search process better.
Technical SEO | reidsteven75
-
Objects behind "hidden" elements
If you take a look at this page: http://www.americanmuscle.com/2010-mustang-body-kits.html you will notice we have a little "Read More" script set up. I have used Google Data Validator to test the structured data located behind this "Read More" and it checks out OK, but I was wondering if anyone has insight into whether or not the spiders are even seeing the links, etc. behind the "Read More" script.
Technical SEO | andrewv
-
Instance IDs on "Events" in WordPress causing duplicate content
Hi all, I use Yoast SEO on WordPress, which does a pretty good job of inserting rel=canonical into the header of pages where appropriate, including on my event pages. However, my crawl diagnostics have highlighted these event pages as duplicate content and duplicate titles because of the instance_id parameter being added to the URL. When I look at the page's head, I see that rel=canonical is as it should be. Please see here for an example: http://solvencyiiwire.com/ai1ec_event/unintended-consequences-basel-ii-and-solvency-ii?instance_id= My question is how come SEOMoz is highlighting these pages as duplicate content, and what can I do to remedy this? Is it because ?instance_id= is part of the string on the canonical link? How do I remove this? My client uses the following plugins: "All-in-One Event Calendar by Timely" and "Google Calendar Events". Many thanks!
Technical SEO | wellsgp
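For illustration, and as an assumption about what Yoast outputs based on the example URL above, the canonical tag in the head of the parameterized page would be expected to point at the clean URL, something like:

<link rel="canonical" href="http://solvencyiiwire.com/ai1ec_event/unintended-consequences-basel-ii-and-solvency-ii" />

If that is what the page emits, the crawler may simply be flagging the ?instance_id= URLs as duplicates before taking the canonical into account.
-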
Migrating to a subdirectory in the same domain
Hi! I have a new version of my website, running on a different CMS (Joomla). In order to install the new CMS without losing all my content and links, I was forced to install the new site in a subdirectory. So the old website was http://www.mydomain.com and the new one is http://www.mydomain.com/subdirectory. I have redirected http://www.mydomain.com to http://www.mydomain.com/subdirectory, but I am not sure if that is correct, or if it will generate SEO problems. I named the subdirectory with a keyword, at least to get some advantage out of something that, to my limited knowledge, looks bad... What do you think? Another question: I understand that it is a good SEO rule to optimize each page for a different keyword. Is it a problem if http://www.mydomain.com is not optimized for anything? Thanks!
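For what it's worth, a common way to set up this kind of redirect on Apache is to send only the bare root URL to the subdirectory, so the rule can't loop or catch the subdirectory's own pages. A minimal .htaccess sketch, assuming Apache and using /subdirectory/ as the placeholder from the question:

# 301-redirect only the bare root URL to the new site's location
RedirectMatch 301 ^/$ /subdirectory/

Redirecting every old URL wholesale to the subdirectory root, by contrast, would throw away the page-to-page link equity of the old URLs.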
Technical SEO | ociosu
-
Research for "love quotes"
I'm doing some research on the term "love quotes", and I'm trying to understand why the following URL is ranking so high: quote-monster.com/category/love-quotes/ It only has one link? Any advice would be appreciated. Rgds, Mark
Technical SEO | relientmark
-
New Sub-domains or New Directories for 10+ Year Domain?
We've got a one-page, 10+ year old domain that has a 65/100 domain authority and gets about 10k page views a day (I'm happy to share the URL but didn't know if that's permitted). The content changes daily (it's a daily Bible verse), so most of this question is focused on domain authority, not the content. We're getting ready to provide translations of that daily content in 4 languages. Would it be better to create sub-domains for those translations (same content, different language) or sub-folders? Example:

http://cn.example.com
http://es.example.com
http://ru.example.com

or

http://example.com/cn
http://example.com/es
http://example.com/ru

We're able to do either, but want to pick the one that would give the translated version the most authority both now and moving forward. (We definitely don't want to penalize the root domain.) Thanks in advance for your input.
Technical SEO | ipllc