Google Indexed a version of my site w/ MX record subdomain
-
We're doing a site audit and found "internal" links in Search Console that appear to come from a subdomain of our site based on our MX record. We use Google Mail internally. The links ultimately redirect to our correct preferred subdomain "www", but I am concerned as to why this is happening and whether it can have any negative SEO implications.
Example of one of the links:
aspmx3.googlemail.com.sullivansolarpower.com/about/solar-power-blog/daniel-sullivan/renewable-energy-and-electric-cars-are-not-political-footballs
I did a site operator search, site:aspmx3.googlemail.com.sullivansolarpower.com, on Google and it returns several results.
-
You appear to have the MX sub-domain also set up as an A record.
If you have a Mac or Linux machine, you can run the command: host aspmx3.googlemail.com.sullivansolarpower.com
You currently get the result "aspmx3.googlemail.com.sullivansolarpower.com has address 72.10.48.198", where you should get "not found".
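If you'd rather script that check, here is a minimal Python sketch that does the same lookup using only the standard library (the hostname is the one from the question; the "not found" branch is what you should see once the stray record is gone):

import socket

hostname = "aspmx3.googlemail.com.sullivansolarpower.com"

try:
    # A successful lookup means an A (or wildcard) record still answers for this name
    print(hostname, "has address", socket.gethostbyname(hostname))
except socket.gaierror:
    # This is the result you want after the stray record is removed
    print(hostname, "not found")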
I think you want to delete the A record (though check your email provider's documentation first). You should only need the MX records; the extra A record shouldn't be necessary.
You've done the right thing by setting up the redirect - which should mean that the pages drop out of the index and those links disappear. (Note that there is also an https error on the aspmx3 sub-domain - but given that you don't actually want it, I don't suppose that matters that much).
Hope that helps.
-
I did not explain the problem thoroughly. The problem is that the link does not actually exist anywhere. To make a very long story short: there was an issue with our server configuration for a couple of months. During that time, an unknown number of non-existent subdomains got indexed. Basically, if anyone made a typo in the subdomain when accessing our site, it would get cached, and if Google crawled our site before we cleared the cache, the typo subdomain would get indexed. Over those couple of months, many bad subdomains were accidentally created and indexed by Google, and we have no way of finding a comprehensive list of all of them. The configuration problem has been resolved, so no new bad subdomains are being created and indexed, but the damage has been done.
The way our site is set up currently, any attempt to reach it on any subdomain other than "www" gets redirected to "www.sullivan...". Also, any non-secure request gets redirected to https://.
The actual problem, simply put, is this: Google's index includes some number of unknown, non-existent subdomains. We need to get rid of them and cannot figure out how.
Example: copy and paste the following into Google and search it:
site:aspmx3.googlemail.com.sullivansolarpower.com
Google will return two results. If you click on either, it resolves to the https://www. version of the page.
I know it is confusing, but does that make sense? I have searched everywhere, but this happened because of a perfect storm of server configuration issues, and I cannot find anyone else who has had the same problem.
If it were one or two bad subdomains, we would just add them to Search Console and use "Remove URL" for each entire subdomain. But it is not one or two; it is at least ten that I know of, and it could be hundreds for all I know.
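To illustrate what we can check ourselves: if we assembled even a partial candidate list of suspect hostnames (from server logs, site: searches, and so on), something like this minimal Python sketch could at least confirm that each one 301s to the https://www. version, which is the condition under which those index entries should eventually die off. The candidates.txt file and its contents are hypothetical:

import http.client

# Hypothetical hand-built list: one suspect hostname per line,
# gathered from server logs, site: searches, etc.
with open("candidates.txt") as f:
    candidates = [line.strip() for line in f if line.strip()]

for hostname in candidates:
    try:
        conn = http.client.HTTPConnection(hostname, timeout=10)
        conn.request("HEAD", "/")
        resp = conn.getresponse()
        location = resp.getheader("Location") or ""
        # A 301 to the preferred https://www. host is the desired behaviour
        ok = resp.status == 301 and location.startswith("https://www.")
        print(hostname, resp.status, location, "OK" if ok else "CHECK")
        conn.close()
    except OSError as exc:
        print(hostname, "error:", exc)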
Does anyone have any ideas? Any and all would be welcome.
Thank you.
-
You should find the locations of those links and correct them to point to the proper URL. I find Screaming Frog's crawl easiest for this: you can find every link and see where it is located. (If you prefer to script it, see the sketch below.)
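For a quick check without a tool, here is a minimal Python sketch (standard library only; the starting URL and function name are illustrative) that fetches a single page and flags any anchors pointing at a sullivansolarpower.com hostname other than the preferred www one:

import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    # Collects every href found in <a> tags
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def report_bad_subdomain_links(page_url, good_host="www.sullivansolarpower.com"):
    # Fetch one page and print links whose host is a sullivansolarpower.com
    # hostname other than the preferred "www" host
    html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")
    parser = LinkCollector()
    parser.feed(html)
    for href in parser.links:
        host = urlparse(urljoin(page_url, href)).hostname or ""
        on_domain = host == "sullivansolarpower.com" or host.endswith(".sullivansolarpower.com")
        if on_domain and host != good_host:
            print(page_url, "->", href)

report_bad_subdomain_links("https://www.sullivansolarpower.com/")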
Related Questions
-
How can I get a photo album indexed by Google?
We have a lot of photos on our website. Unfortunately, most of them don't seem to be indexed by Google. We run a party website. One of the things we do is take pictures at events and put them on the site. An event page with a photo album can have anywhere between 100 and 750 photos. For each photo there is a thumbnail on the page. The thumbnails are lazy-loaded by showing a placeholder and loading the picture right before it comes on screen. There is no pagination or infinite scrolling. Thumbnails don't have alt text. Each thumbnail links to a picture page. This page only shows the base HTML structure (menu, etc.), the image, and a close button. The image has a src attribute with the full-size image, a srcset with several sizes for responsive design, and an alt text. There is no real textual content on an image page. (Note that when a user clicks on the thumbnail, the large image is loaded using JavaScript and we mimic the page change. I think it doesn't matter, but am unsure.) I'd like the full-size images to be indexed by Google and found with Google image search. Thumbnails should not be indexed (or should be ignored). Unfortunately, most pictures aren't found, or only their thumbnail is shown. Moz is telling me that all the picture pages are duplicate content (19,521 issues), as they are all the same with the exception of the image. The page title isn't the same, but it is similar for all images of an album. Example: on the "A day at the park" event page, we have 136 pictures. A site search on "a day at the park" foto only reveals two photos of the album.
Technical SEO | jasny
-
How to stop Google from indexing specific sections of a page?
I'm currently trying to find a way to stop Googlebot from indexing specific areas of a page. Long ago, Yahoo Search created the class="robots-nocontent" tag, and I'm trying to see if there is a similar mechanism for Google, or if they have adopted the same tag. Any help would be much appreciated.
Technical SEO | Iamfaramon
-
Staging site and "live" site have both been indexed by Google
While building a site, we forgot to password-protect the staging environment. Now that the site has been moved to the new domain, it has come to my attention that both the staging site (site.staging.com) and the "live" site (site.com) are being indexed. What is the best way to solve this problem? I was thinking about adding a 301 redirect from the staging site to the live site via .htaccess. Any recommendations?
Technical SEO | melen
-
Correct linking to the /index of a site and subfolders: what's the best practice? link to: domain.com/ or domain.com/index.html ?
Dear all, starting with my .htaccess file:

RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.inlinear\.com$ [NC]
RewriteRule ^(.*)$ http://inlinear.com/$1 [R=301,L]

RewriteCond %{THE_REQUEST} ^.*/index\.html
RewriteRule ^(.*)index\.html$ http://inlinear.com/ [R=301,L]

1. I redirect all URL requests with www. to the non-www version.
2. All requests for "index.html" are redirected to "domain.com/".

My questions are: A) When linking from a page to my front page (home), is the best practice to link to "http://domain.com/" and NOT to "http://domain.com/index.php"? B) When linking to the index of a subfolder, should I also link to "http://domain.com/products/" and not include the index.php, i.e. not "http://domain.com/products/index.php"? C) When I define the canonical URL, should I also define it as just "http://domain.com/products/", or should I point to the actual file "http://domain.com/products/index.php"? Are A) and B) best practice? And C)? Thanks for all replies! 🙂
Holger
Technical SEO | inlinear
-
How does Google Crawl Multi-Regional Sites?
I've been reading up on this in Webmaster Tools but just wanted to see if anyone could explain it a bit better. I have a website going live soon that will be set up to redirect to a localised URL based on the IP address, i.e. NZ IP ranges will go to .co.nz, Australian IP addresses to .com.au, and US or other non-specified IP addresses to the .com address. There is a single CMS installation for the website. Does this impact the way in which Google is able to crawl the site? Will all domains be crawled, or just one? Any help would be great - thanks!
Technical SEO | lemonz
-
Google is indexing my directories
I'm sure this has been asked before, but I was looking at all of Google's results for my site and found dozens of results for directories such as:
Index of /scouting/blog/wp-includes/js/swfupload/plugins
Obviously I don't want those indexed. How do I prevent Google from indexing them? Also, it only seems to be happening with WordPress, not any of the directories on my main site. (We have a WordPress blog, which is only a portion of the site.)
Technical SEO | UnderRugSwept
-
NoIndex/NoFollow pages showing up when doing a Google search using "Site:" parameter
We recently launched a beta version of our new website on a subdomain of our existing site. The existing site is www.fonts.com, with the beta living at new.fonts.com. We do not want Google to crawl the new site until it is out of beta, so we have added the following on all pages: <meta name="robots" content="noindex, nofollow" /> However, one of our team members noticed that Google is displaying results from new.fonts.com when doing a "site:new.fonts.com" search. Is it possible that Google is indexing the content despite the noindex, nofollow tags? We have double-checked the syntax and it seems correct except for the trailing "/". I know Google still crawls noindexed pages; however, the fact that they're showing up in search results using the site search syntax is unsettling. Any thoughts would be appreciated!
Technical SEO | ChrisRoberts-MTI
-
Does Google use the Wayback Machine to determine the age of a site?
I have a site that I had removed from the Wayback Machine because I didn't want old versions to show. However, I noticed that many SEO tools now show a domain age of zero instead of the six years since I registered it. My question is: what do the actual search engines use to determine age when they factor it into the ranking algorithm? By having the site removed from the Wayback Machine, does that make the search engines think it is brand new? Thanks
Technical SEO | FastLearner