Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Carson-Ward
@Carson-Ward
Job Title: CEO
Company: Ward Enterprises
Website Description
In case you're wondering why I'm inactive on social media and not sharing my websites with the world.
Carson has worked as a marketing manager at Clearlink and as a consultant with Distilled. In 2017 he founded a small company specializing in affiliate marketing. Carson is a long-time friend of Moz and frequently returns to his favorite city of Seattle.
Favorite Thing about SEO
I'm a builder. I love building things that are useful for the user while being the best and highest quality search result. SEO helps me to do that in a way that results in fast growth and successful websites.
Latest posts made by Carson-Ward
-
RE: Backlinks from subdomain, can it hurt ranking? posted in Link Building
So there were no external links pointing to the subdomain admin.site.com? If that's the case, you could probably just noindex/nofollow the thing or let it 404. You could write an .htaccess rule to rewrite the domain name, but it's probably not worth it now that I think about it. The exception, of course, is if the subdomain had external links pointing to it.
-
RE: Backlinks from subdomain, can it hurt ranking? posted in Link Building
Okay, so the situation here is a little unclear, but the solution should be pretty straightforward.
If admin.site.com is separate from the original site's domain, simply noindex/nofollow all of the pages on that subdomain. I recommend this over a robots.txt rule because it will actually remove them from the index. You can add a disallow-all rule in robots.txt later, once the site is completely out of the index.
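A rough sketch of what that looks like, just to illustrate (the exact setup depends on how admin.site.com serves its pages):

```
<!-- In the <head> of every page on admin.site.com, left in place until the pages drop out of the index -->
<meta name="robots" content="noindex, nofollow">
```

Then, only after everything has dropped out of the index, the robots.txt at admin.site.com/robots.txt can block crawling entirely:

```
User-agent: *
Disallow: /
```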
If admin.site.com was on the same domain, I'd recommend redirecting all of those pages to the new URLs and then, if a reference copy really needs to exist, relaunching it as a noindex/nofollow version blocked with robots.txt - though I'm not sure why it needs to exist for reference. If the subdomain was separate from the old site, you could also probably just noindex/nofollow all of it without the redirect. It's not best practice, but it's not that big a deal.
Hope this helps.
-
RE: Referring domain issues posted in Intermediate & Advanced SEO
Good answers here - did you get this taken care of? I'd say choose one domain and redirect or forward the others that have the same content. To explain it to my boss, I'd say:
- It confuses customers to have the same content on two domains. They might not know which company they're dealing with.
- You probably don't want half the traffic going to one site and half going to the other, especially if their content and user intent are similar. Every live domain is another analytics profile I have to check on and watch for issues.
- Don't expect any ranking bonus from multiple domains, because when content is duplicate Google will just choose 1 page to rank.
- Maintaining multiple sites is more work than it's worth. We can get more done by focusing on our core domain unless there's a strong case for creating a new brand. (I wouldn't create a new site unless it was for a distinct brand).
Again, probably want to 301 redirect to the primary/canonical domain. If the domains have no links and no traffic (as I'd expect) forwarding through the registrar is fine, too.
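If the secondary domains sit on an Apache server you control, the 301 can be done with a few lines of .htaccess - something like this sketch, with placeholder domain names:

```
# .htaccess on the secondary domain: send every URL to the same path on the primary domain
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?secondary-example\.com$ [NC]
RewriteRule ^(.*)$ https://www.primary-example.com/$1 [R=301,L]
```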
-
RE: No-indexed pages are still showing up as landing pages in Google Analytics posted in Reporting & Analytics
I generally recommend meta noindex without blocking via robots.txt for reasons like this.
First, click secondary dimension and check out source / medium. Are the visits coming from Google? If all you did was use the removal tool in Google Webmaster Tools, Bing and Yahoo won't have gotten the message yet.
Search engines can't see a noindex tag if they're blocked from crawling the page, so a blocked URL can sit around in the index despite not having been crawled for a while. (Though it's usually removed eventually.)
Also keep in mind that you'll see some landing page traffic from instances where GA fails to fire on the first pageview. It's usually a VERY tiny percentage, but I often see traffic to some (virtual pageview) popups that can't even load without entering info on our site (i.e. it's not even a possible landing page).
Might I also ask why you removed the job listings from the index? This might be a good case for telling search engines when a listing expires rather than blocking crawlers outright. That assumes, of course, that you keep each listing up for a fixed time. If you know when a job listing is going to expire, you can just tell the search engine, and they might even send some traffic to your individual listings while they're live.
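If you can edit the listing pages, one way to tell them is the unavailable_after rule in the robots meta tag, which Google supports. A sketch, with a placeholder date (check Google's documentation for the date formats they accept):

```
<!-- Asks Google to stop showing this listing in results after the closing date -->
<meta name="robots" content="unavailable_after: 2015-12-31">
```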
-
RE: I added a WP Customer Reviews plugin but nothing seems to appear on Google search posted in Technical SEO
Unfortunately, Google has been scaling back reviews quite a bit. We have several sites where we used to show up with star ratings on every page. Now we're lucky to get them to show up at all.
I have noticed it relies a lot on the query. If you are trying to rank for local queries where Google pulls in a local pack, for example, they're really not likely to show your stars. Maybe this is to prevent confusion. Maybe it's to help themselves on Google Maps.
Try doing a site: search with the exact URL. In my experience it's most likely to show up in low-competition searches, and nothing is less competitive than a site: search with your own site. For example https://www.google.com/search?q=site%3Ahttp%3A%2F%2Fwww.shopperapproved.com does seem to pull in stars.
The good news: what you have is working.
The bad news: you're at Google's mercy as to when your snippets will pull in. Page and domain authority (in Google's eyes) do seem to play a part, and the query will change things up. My advice is always to implement it, continue marketing, and wait for Google to realize you're really an awesome company/site with reviews worth highlighting.
Unfortunately, there's no magic number of reviews or count of links. It just depends on competition in the SERP.
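If you want to double-check that the markup is actually on the page, star ratings generally come from review or aggregate-rating structured data. I can't say exactly what WP Customer Reviews outputs, but the general shape is something like this (all names and numbers here are placeholders):

```
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Product",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.7",
    "reviewCount": "89"
  }
}
</script>
```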
-
RE: Layered navigation and hiding nav from user agent posted in Intermediate & Advanced SEO
If you're really worried about indexation I think that's a fine solution. It's definitely easier to manage, and it'll also be easier to track pageviews in most analytics platforms. The only downside is that if someone emails or links to a category page with filters applied the recipient won't see it. But generally people share products and not category pages, so it's not a big deal. I'd probably go that route.
Also make sure that your category pages still update the URL when you go to page 2, or that page 2 can somehow still be crawled and indexed. You don't want products going unindexed because deeper category pages can't be crawled.
-
RE: Layered navigation and hiding nav from user agent posted in Intermediate & Advanced SEO
Yes, the bots will crawl the pages, but they will not INDEX them.
There is a concern there, but mostly if the bots get caught in some kind of crawl trap - where they're trying out a near-infinite set of variables and getting stuck in a loop. Otherwise the spiders should understand the variables. You can actually check it in Webmaster tools to make sure Google understands. Instructions for that here:
https://support.google.com/webmasters/answer/6080550?hl=en
Ultimately Google will definitely not penalize you for having lots of duplicate content on URLs through variables, but it might be an issue with Googlebot not finding all your pages. You can make sure that doesn't happen by checking the indexation of your sitemap.
You could also try to block any URLs with the URL parameter in robots.txt. Make sure you get some help with the pattern matching if you plan to do this (robots.txt uses wildcards, not full regular expressions). My advice is that blocking the variables in robots.txt is not worth it, as Google should have no problems with the variables - especially if the canonical tags are working.
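If you did want to go that route, a robots.txt pattern for a single parameter looks something like this (the parameter name here is just a placeholder):

```
User-agent: *
# Block any URL containing a "color" filter parameter
Disallow: /*color=
```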
Googlebot at least is smart enough these days to know when to stop crawling variable pages, so I think there are more important on-site things to worry about. Make sure your categories are linked to and optimized, for example.
-
RE: SEO dealing with a CDN on a site. posted in Reporting & Analytics
Very odd, then, that they're being removed from the index. Do you think it's possible that the images have different URLs depending on which server they're cached on? That could definitely do it. I'd have a friend across the country pull them up and see if the image URL changes.
I'm assuming that the image has some dynamic characters on it, which is pretty common with CDNs under certain configurations. Unfortunately, I've never used MaxCDN. If the image is just cdn.site.com/image.png - I'm afraid I have absolutely no idea why they wouldn't be re-indexed. I have similar CDN images that pull in fine.
-
RE: Layered navigation and hiding nav from user agent posted in Intermediate & Advanced SEO
If I'm not mistaken Magento has canonical tags on category pages by default, so you might be trying to solve an issue that doesn't exist. Take a look at the source code on faceted navigation to confirm. Or you can send me the site and I'll look over it.
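For reference, the tag to look for in the source of a filtered category URL is just a canonical pointing back at the clean category page - something like this, with placeholder URLs:

```
<!-- On a filtered URL such as /shoes.html?color=red -->
<link rel="canonical" href="https://www.example.com/shoes.html" />
```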
-
RE: SEO dealing with a CDN on a site. posted in Reporting & Analytics
Hi there,
Could you tell me whether the URLs on your images are static on the CDN sub-domain? Or do they change regularly?
Best posts made by Carson-Ward
-
RE: Is it safe to 301 redirect old domain to new domain after a manual unnatural links penalty? posted in Intermediate & Advanced SEO
Hi Ewan,
This is a question that probably deserves a blog post at some point, along with a number of related questions about link-based penalties. I've been gathering info for some time, and have seen many instances of penalized sites being redirected. I wish I could collect data on penalized sites more scientifically, but we work with what's available.
Manual Penalties
Manual penalties appear to carry through to new domains almost instantly when redirected to pages housing the same content. Google appears to use a number of signals to make sure that the redirect is to the same site and not to a competitor.
Some Googlers have claimed that if you received an "unnatural link" warning in WMT, it's manual. I'm not entirely convinced of this, but it's now harder than ever to differentiate between manual link-based penalties and Penguin algorithmic adjustments. That brings us to...
Refreshing Adjustments (Penalties)
Panda, Penguin, and a few other updates are a little different. We've seen instances where a user makes a big change (a complete redesign for Panda, or redirecting the entire site); the trend seems to be a brief recovery followed by a drop once the algorithm refreshes.
The obvious upside here is that if you were going to recover from the penalty anyway, you may start to recover a bit sooner and won't have to wait for the next refresh. The downside is that it's a lot of work to do correctly, and it might be a very short-lived change.
--
Generally, I'd say it's best to clean up the site and keep going on the same domain. If you have a lot of bad links pointing to a specific page, you may want to 410 that page and start a new one, then mention this in your reconsideration request. Otherwise, it's the old process of removal (keeping notes) and using the disavow tool if reconsideration and cleanup prove insufficient.
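If you do go the 410 route on a specific page, it's a one-line rule on Apache - a sketch with a placeholder path:

```
# .htaccess: return 410 Gone for the page with the bad link profile
Redirect gone /old-penalized-page.html
```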
-
RE: Someone is building anchor text links with child porn keywords..help :-( posted in Technical SEO
First off, make sure your site has not been compromised. Are there sections on your site where users generate their own content? Are the links pointing to spam comments or pages you didn't create? Spammers whose sites Google has already banned will often hack or otherwise compromise a site and use it as a gateway to their site. More on that from Google here:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=163634
Next, if you're not dealing in anything like the anchor text indicates, don't worry about it. Spam attacks are generally ineffective, and this is especially true when they're completely different from the terms on the page or the terms you seem to want to rank for.
Really, this is probably not worth worrying about. I wouldn't disavow unless you think it's negatively impacting your site, or unless you have had a warning from Google Webmaster Tools.
-
RE: How to stop my webmail pages not to be indexed on Google ?? posted in Technical SEO
Hi,
So you did add a meta noindex tag? Great. Remember that Google needs to crawl the page to see the meta noindex, so blocking it in robots.txt will mean it's still indexed, but has about 0% chance to show up in search unless you search for that URL.
Also, I wouldn't spend any time worrying about obscure pages that are indexed. It's not going to hurt your rankings.
-
RE: Magento Dublicate Content (Noindex and Rel"canonical") posted in Technical SEO
I think there's an underlying assumption here that duplicate content will harm your site, and that's not necessarily true. There's no "duplicate content penalty" - it's more of a filter. Google is better than most at recognizing this, especially with common CMSs like Magento and WP. Google attempts to look at the links going to both pages and understand their authority together.
Duplicate content is more of an issue if you're pulling content that others are using as well, e.g. product descriptions provided by manufacturers and other syndicated content. Google won't "penalize" you, but they will sometimes filter your site out in favor of the most authoritative site with that content. It's also an issue (mostly for Panda) if you're creating keyword pages that contain duplicate or even very similar content just to rank for a bunch of very similar keywords.
So my first bit of advice is, "don't obsess over intra-site duplicate content."
That said, it's best to reduce and avoid duplicate content 1) for less-sophisticated search engines, 2) for the sake of your own analytics data integrity and simplicity, 3) just in case Google doesn't get it (very rare).
Set the categories up however you think is best for the user (generally just the product name without categories), double-check the canonical URLs, and wait for Google to catch up on the canonical and noindex. It can take many months depending on your site's authority, but it's unlikely to move the needle either way. Keep in mind that Google may keep pages in the index even if they are honoring the canonical tag - they'll just show the canonical version but keep both indexed. That's working as intended - don't worry about that.

-
RE: Redirect ruined domain to new domain without passing link juice posted in Intermediate & Advanced SEO
I'd just warn that most domain forwarding ends up returning a 301 response code anyway, and some return a 302. You could always test it out to see what happens. I checked (non-masked) domain forwarding on two hosts and found 301s in the header in both cases. I believe this is fairly common.
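If you want to see what your registrar's forwarding actually returns, a quick header check will show the status code (domain name is a placeholder):

```
curl -I http://www.old-domain-example.com/
# The first response line shows the code, e.g. "HTTP/1.1 301 Moved Permanently",
# and the Location header shows where it points.
```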
One controversial alternative might be a JavaScript redirect that search engines can't follow. It's obviously cloaking if the content is different, but maybe not if the content is similar. See https://support.google.com/webmasters/answer/2721217?hl=en&ref_topic=2371375
Unfortunately, there's not a redirect method that would prevent both versions of the site from being indexed. Even with a penalty, the old site could out-rank the new one for branded and long-tail traffic.
Perhaps the best/safest option is to simply noindex/nofollow the pages, then show a warning with a link to the new version of the page. Yes, it requires a new click from users, but it's simple enough that there's little to worry about.
-
RE: Backlinks From Scraper Sites - Should I Disavow Them? posted in Link Building
I typically tell people not to worry about these types of sites. EVERYONE gets a little spam linkage, and it's rarely worth worrying about.
If you have been hit with a manual penalty or think you're being impacted by Penguin already, then yes, definitely include these domains in the disavow list. Technically Google wants you to try to have the links removed first.
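For reference, the disavow file itself is just a plain text list uploaded through Google's disavow tool - a sketch with placeholder domains:

```
# Scraper domains to disavow
domain:scraper-example-one.com
domain:scraper-example-two.net
```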
-
RE: How to avoid keyword stuffing on e-Commerce Category pages posted in On-Page Optimization
Hey there,
Perhaps you could clarify exactly how the CMS structure is creating a "keyword overkill." Are links from the category page to the product page forced to use the same text? Must the anchor text include the category name? Is the CMS pulling product specifications into the anchor text? Does the CMS just insert the word "body" into product names with reckless abandon?
Google tends to be forgiving with repetitive product names on e-commerce sites. How do you know that keyword stuffing is the problem?
Recommending a fix is going to be next to impossible until we better understand the problem. Based on the conjectures I'm making, I can't imagine how a canonical would help. If the CMS is getting in the way, you may need to focus your energy on convincing them to make the change, rather than beating your head against a wall while pulling in a different direction.
-
RE: Layered navigation and hiding nav from user agent posted in Intermediate & Advanced SEO
If I'm not mistaken Magento has canonical tags on category pages by default, so you might be trying to solve an issue that doesn't exist. Take a look at the source code on faceted navigation to confirm. Or you can send me the site and I'll look over it.
-
RE: Layered navigation and hiding nav from user agent posted in Intermediate & Advanced SEO
Yes, the bots will crawl the pages, but they will not INDEX them.
There is a concern there, but mostly if the bots get caught in some kind of crawl trap - where they're trying out a near-infinite set of variables and getting stuck in a loop. Otherwise the spiders should understand the variables. You can actually check it in Webmaster tools to make sure Google understands. Instructions for that here:
https://support.google.com/webmasters/answer/6080550?hl=en
Ultimately Google will definitely not penalize you for having lots of duplicate content on URLs through variables, but it might be an issue with Googlebot not finding all your pages. You can make sure that doesn't happen by checking the indexation of your sitemap.
You could also try to block any URLs with the URL parameter in robots.txt. Make sure you get some help with the pattern matching if you plan to do this (robots.txt uses wildcards, not full regular expressions). My advice is that blocking the variables in robots.txt is not worth it, as Google should have no problems with the variables - especially if the canonical tags are working.
Googlebot at least is smart enough these days to know when to stop crawling variable pages, so I think there are more important on-site things to worry about. Make sure your categories are linked to and optimized, for example.
-
RE: Layered navigation and hiding nav from user agent posted in Intermediate & Advanced SEO
If you're really worried about indexation I think that's a fine solution. It's definitely easier to manage, and it'll also be easier to track pageviews in most analytics platforms. The only downside is that if someone emails or links to a category page with filters applied the recipient won't see it. But generally people share products and not category pages, so it's not a big deal. I'd probably go that route.
Also make sure that your category pages still update the URL when you go to page 2, or that page 2 can somehow still be crawled and indexed. You don't want products going unindexed because deeper category pages can't be crawled.