Can I disallow my subdomain for Penguin recovery?
-
Hi,
I have a site, BannerBuzz.com. Before the last Penguin update, all of my site's keywords held good positions in Google, but since Penguin hit my website, my keywords have been dropping day by day. I have made some changes to my website to improve things, but there is one change I am unsure about.
I have one sub-domain (http://reviews.bannerbuzz.com/), which displays the user reviews for all of my site's keywords. Fifteen reviews from each category are also displayed on my main website, http://www.bannerbuzz.com. Are those user reviews considered duplicate content between the sub-domain and the main website?
Can I disallow the sub-domain from all search engines? It is currently open to every search engine. Would blocking it help?
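To be clear, by 'disallow' I mean serving something like the following as http://reviews.bannerbuzz.com/robots.txt - this is just my rough sketch of what I have in mind, in case I am wrong about how it works:

# Hypothetical robots.txt served at http://reviews.bannerbuzz.com/robots.txt
# This would ask all crawlers to stay out of the entire reviews sub-domain.
User-agent: *
Disallow: /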
Thanks
-
Hello Rafi,
I am going to make the necessary changes. I have also started work on gathering backlinks to the home page for the 'Vinyl Banners' keyword from various sources. It may help me recover my old rankings!
-
No problem, my friend. You are most welcome.
So if you are using a third-party service to fill in the reviews content on the sub-domain, you can do the following:
1. Stop using the sub-domain for the reviews content henceforth, and have the reviews content filled into the new reviews sub-folder instead.
2. Redirect the old reviews content on the sub-domain to the new reviews sub-folder via a 301.
This will make sure that you don't lose the SEO goodies that the sub-domain has acquired to date, and almost all of those goodies will be passed on to the new sub-folder.
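For example, if the sub-domain is served by Apache, a rewrite rule along these lines in its .htaccess file would handle the 301. Please treat this as a rough sketch only: it assumes mod_rewrite is enabled, and the /reviews/ sub-folder path is my assumption, so adjust it to your actual setup:

# Sketch of an .htaccess rule on the host serving reviews.bannerbuzz.com
# Assumes Apache with mod_rewrite; the /reviews/ destination is hypothetical
RewriteEngine On
RewriteCond %{HTTP_HOST} ^reviews\.bannerbuzz\.com$ [NC]
RewriteRule ^(.*)$ http://www.bannerbuzz.com/reviews/$1 [R=301,L]

Also, since the reviews are served by a third-party provider, you may not control that host directly, so do check with them whether such a redirect is possible.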
Please feel free to post any further queries you may have in this regard.
Best regards,
Devanur Rafi.
-
Thanks, Devanur Rafi, for your information.
You gave us really great information, but I have one question. Currently I am using a third-party reviews service (powerreviews.com) for customer reviews, so is it possible to make a sub-folder and redirect the sub-domain to the sub-folder?
-
Hi there,
Here are my two cents in this regard. Instead of showing 10 or 15 reviews on the root domain, show no more than 2, and for more reviews send visitors to the reviews sub-domain (using a 'view more reviews' button, as you currently have). This will mitigate duplicate content issues, if there are any, to a great extent. I do not recommend blocking the sub-domain from the search engines. However, you can move the content of the sub-domain to something like a reviews sub-folder, as follows:
From an SEO standpoint, a sub-folder is a safer bet than a sub-domain. Here is what Rand Fishkin has to say in this regard (http://www.seomoz.org/q/subdomains-vs-subfolders):
“All the testing, research and examples I've seen in the past few years (and even the past few months) strongly suggest that the same principles still hold true.
Subdomains SOMETIMES inherit and pass link/trust/quality/ranking metrics between one another.
Subfolders ALWAYS inherit and pass link/trust/quality/ranking metrics across the same subdomain.
Thus, having a single subdomain (even just domainname.tld with no subdomain extension) with all of your content is absolutely ideal from an SEO perspective. It's also more usable and brandable, too IMO.”
Here is an interesting discussion about the same topic here on Moz.com:
http://www.seomoz.org/q/multiple-subdomains-my-worst-seo-mistake-now-what-should-i-do
Hope these help.
Best regards,
Devanur Rafi.
Related Questions
-
Crawler issues on subdomain - Need resolving?
Hey guys, I'm fairly new to the world of SEO and have a ton of crawler issues with a friend's website I'm doing some work on. After Moz did a site crawl, I'm getting loads of errors (100+ in total across critical crawler, content, and metadata issues). Most of these are due to broken social links on a subdomain, so my question is: do I need to resolve all of the errors even if they are on a sub-domain? Will they affect the primary website? Thanks, Jack
Technical SEO | Jack1166
-
Do subdomains negatively impact SEO?
If we want to eliminate one of our domains and consolidate it with our main domain via the subdomain approach, would that negatively impact our SEO? Example: changing www.xyz.org to www.xyz.abcd.org. Thanks in advance for your feedback.
Technical SEO | Shirley.Fenlason
-
I can't crawl the archive of this website with Screaming Frog
Hi, I'm trying to crawl this website (http://zeri.info/) with Screaming Frog, but because of some technical issue with their site (I can't find what is causing it) I'm only able to crawl the first page of each category (e.g. http://zeri.info/sport/). It will then go on to crawl each page of their archive (hundreds of thousands of pages), but it won't crawl the links inside those pages. Thanks a lot!
Technical SEO | gjergjshala
-
Can we add advertisements to YouTube videos?
I have a few hundred videos on YouTube, and I have a new order to advertise a real estate company. Can I insert the 1-minute ad into my videos and still be able to monetize them on YouTube?
Technical SEO | bsharath
-
Can Google Crawl This Page?
I'm going to have to post the page in question, which I'd rather not do, but I have permission from the client to do so. Question: a recruitment client of mine had their website built on a proprietary platform by a so-called recruitment specialist agency. Unfortunately the site is not performing well in the organic listings. I believe the culprit is this page and others like it: http://www.prospect-health.com/Jobs/?st=0&o3=973&s=1&o4=1215&sortdir=desc&displayinstance=Advanced Search_Site1&pagesize=50000&page=1&o1=255&sortby=CreationDate&o2=260&ij=0 Basically, as soon as you deviate from the top-level pages you land on pages that have database-query URLs like this one. My take on it is that Google cannot crawl these pages and is therefore having trouble picking up all of the job listings. I have taken some measures to combat this, and obviously we have an XML sitemap in place, but it seems the pages that Google finds via the XML feed are not performing because there is no obvious flow of 'link juice' to them. There are a number of recent jobs listed on top-level pages like this one: http://www.prospect-health.com/optometry-jobs and when they are picked up they perform OK in the SERPs, which is the biggest clue to the problem outlined above. The agency in question has an SEO department that disputes the problem, and their proposed solution is to create more content and build more links (genius!). I'm just looking for some clarification from you guys, if you don't mind.
Technical SEO | shr109
-
How can I best handle parameters?
Thank you for your help in advance! I've read a ton of posts on this forum on this subject, and while they've been super helpful, I still don't feel entirely confident about the right approach to take. Forgive my very obvious noob questions - I'm still learning!
The problem: I am launching a site (coursereport.com) which will feature a directory of schools. The URL for the schools directory will be coursereport.com/schools, and the directory can be filtered by a handful of fields:
Focus (ex: "Data Science")
Cost (ex: "$<5000")
City (ex: "Chicago")
State/Province (ex: "Illinois")
Country (ex: "Canada")
When a filter is applied to the directory page, the CMS produces a new page with URLs like these:
coursereport.com/schools?focus=datascience&cost=$<5000&city=chicago
coursereport.com/schools?cost=$>5000&city=buffalo&state=newyork
My questions:
1) Is the above parameter-based approach appropriate? I've seen other directory sites take a different approach that would transform my examples into more "normal" URLs: coursereport.com/schools?focus=datascience&cost=$<5000&city=chicago VERSUS coursereport.com/schools/focus/datascience/cost/$<5000/city/chicago (no params at all).
2) Assuming I use either approach above, isn't it likely that I will have duplicate content issues? Each filter does change on-page content, but there could be instances where 2 different URLs with different filters applied produce identical content (ex: focus=datascience&city=chicago OR focus=datascience&state=illinois). Do I need to specify a canonical URL to solve for that case? I understand at a high level how rel=canonical works, but I am having a hard time wrapping my head around which versions of the filtered results ought to be specified as the preferred versions. For example, would I just take all of the /schools?focus=X combinations and call those the canonical versions within any filtered page that contains additional parameters like cost or city? Should I be changing page titles for the unique filtered URLs? I read through a few Google resources to try to better understand how best to configure URL params via Webmaster Tools. Is my best bet just to follow the advice in the article below, define the rules for each parameter there, and not worry about using rel=canonical?
https://support.google.com/webmasters/answer/1235687
An assortment of the other stuff I've read, for reference:
http://www.wordtracker.com/academy/seo-clean-urls
http://www.practicalecommerce.com/articles/3857-SEO-When-Product-Facets-and-Filters-Fail
http://www.searchenginejournal.com/five-steps-to-seo-friendly-site-url-structure/59813/
http://googlewebmastercentral.blogspot.com/2011/07/improved-handling-of-urls-with.html
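For instance, on the focus+cost+city URL above, I'm imagining a canonical tag like this in the head, pointing at the focus-only version - purely my guess at how it might look, not something I've confirmed is right:

<!-- Hypothetical canonical tag on coursereport.com/schools?focus=datascience&cost=$<5000&city=chicago -->
<link rel="canonical" href="http://coursereport.com/schools?focus=datascience" />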
Technical SEO | alovallo
-
Site (Subdomain) Removal from Webmaster Tools
We have two subdomains that have been verified in Google Webmaster Tools. These subdomains were used by third parties with whom we no longer have an affiliation (the subdomains no longer serve a purpose). We have been receiving an error message from Google: "Googlebot can't access your site. Over the last 24 hours, Googlebot encountered 1 errors while attempting to retrieve DNS information for your site. The overall error rate for DNS queries for your site is 100.00%". I originally investigated using Webmaster Tools' URL Removal Tool to remove the subdomains, but there are no indexed pages. Is this a case of simply 'deleting' the sites from the Manage Site tab in the Webmaster Tools interface?
Technical SEO | Cary_PCC
-
I just found something weird I can't explain, so maybe you guys can help me out.
I just found something weird I can't explain, so maybe you guys can help me out. In Google (http://www.google.nl/#hl=nl&q=internet), the number 3 result is a big telecom provider in the Netherlands called Ziggo. The ranking URL is https://www.ziggo.nl/producten/internet/. However, if you click on it you'll be directed to https://www.ziggo.nl/#producten/internet/. HttpFox in Firefox, however, is not showing any redirects - just a 200 status code. The URL https://www.ziggo.nl/#producten/internet/ contains a hash, so the canonical URL should be https://www.ziggo.nl/. I can understand that. But why is Google showing the title and description of https://www.ziggo.nl/producten/internet/ when the canonical URL clearly is https://www.ziggo.nl/? Can anyone confirm my guess that Google is using the bulk SEO value (link juice/authority) of the homepage at https://www.ziggo.nl/ because of the hash, but using the relevant content of https://www.ziggo.nl/producten/internet/, resulting in a top position for the keyword "internet"?
Technical SEO | NEWCRAFT