Google said that low-quality pages on your site may affect rankings of other parts of the site
-
One of my sites got hit pretty hard during the latest Google update. It lost about 30-40% of its US traffic, and the future does not look bright considering that Google plans a worldwide roll-out.
The problem is, my site is a six-year-old, heavily linked, popular WordPress blog. I do not know why Google considers it low quality.
The only explanation I came up with is the statement that low-quality pages on a site may affect other pages (I think it was in the Wired article).
If that is so, would you recommend blocking and de-indexing WordPress tag, archive, and category pages from the Google index? Or would you suggest waiting a bit longer before doing something that drastic?
Or do you have another idea about what I could do?
I invite you to take a look at the site www.ghacks.net
-
Depending on the nature and quality of your content, you may also want to consider removing article pages that might be hurting your domain.
-
Are you syndicating the content for link building, or are scrapers just pulling in your content?
On the note of tags and archives, de-indexing those is best practice in most cases anyway. However, it seems you may have been hit by the content update, and as Shailendra suggests, you may want to have the webmaster support people look into your site.
Without a full analysis it is difficult to say what else is affecting your site.
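For reference, the usual way to de-index WordPress tag, archive, and category pages (a generic sketch, not specific to the site in question) is a robots meta tag emitted only on those templates, rather than a robots.txt block. If the URLs are blocked in robots.txt, Googlebot can never crawl them to see the noindex, so already-indexed pages can linger in the index.

```html
<!-- Emitted only on tag, category, and date-archive templates -->
<meta name="robots" content="noindex, follow" />
```

"follow" keeps the internal links on those pages crawlable while the pages themselves drop out of the index.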
-
I believe too many ads and too much linking (internal as well as external) may be the probable cause. I suggest submitting your site at http://www.google.com/support/forum/p/Webmasters/thread?tid=76830633df82fd8e&hl=en and waiting for some time before taking any action.
Related Questions
-
One of my pages doesn't appear in Google's search
Our page has been indexed (I just checked), but it doesn't appear anywhere in the first 300 results despite having a respectable DA & PA. Is there something I can do? There's no reason why this specific page shouldn't rank, as far as I can see. It's not a new page. Cheers, Rhys
Algorithm Updates | SwanseaMedicine
-
Google & Site Architecture
Hi, I've been reading the following article about Google's quality signals: https://searchenginewatch.com/2016/10/10/guide-to-google-ranking-signals-part-6-trust-authority-and-expertise/?utm_source=Search+Engine+Watch&utm_campaign=464594db7c-11_10_2016_NL&utm_medium=email&utm_term=0_e118661359-464594db7c-17828341 They mention: "3) All your categories should be accessible from the main menu. All your web pages should be labelled with the relevant categories." Is this every category? We have some that are, say, 3 levels deep, and they aren't all in the menu. I'd like them to be, so it would be good to make a case for it. Thank you
Algorithm Updates | BeckyKey
-
Drop in Traffic from Google, but No Change in Rankings
I have seen a 20% drop in traffic from Google in the last week (after April 29th). However, when I analyze the rankings of the keywords in the Google results that send me traffic, they seem to be the same. Today (6th March) traffic has fallen further, again with no visible change in the rankings. Any ideas on what the reason for this could be? I have not made any changes to the website recently.
Algorithm Updates | raghavkapur
-
Webpage is ranking on google.ie / google.co.uk but not google.com?
One of our site's webpages appears in the first few pages of results on google.ie / google.co.uk but not on google.com. Is there such a thing as being penalised on a specific Google domain? Traffic is healthy despite this, but I want the page to rank well on google.com. Any ideas?
Algorithm Updates | notnem
-
Having issues claiming a Google+ Business page (phone number not associated with business address)
When attempting to claim my Google+ account, it asks for the phone number. When I enter the number listed on my business listing, it says that the number cannot be found... It then tells me to re-enter all my business info. If I do this, will I lose all my existing photos, videos, etc.? Has anyone run into this?
Algorithm Updates | DCochrane
-
Has Google had problems indexing pages that use <base href=""> in the last few days?
For the last couple of days I have had the problem that Google Webmaster Tools is showing a lot more 404 errors than normal. If I go through the list, I find very strange URLs that look like two paths put together. For example:

http://www.domain.de/languages/languageschools/havanna/languages/languageschools/london/london.htm

If I check on which page Google found that path, it shows me the following URL:

http://www.domain.de/languages/languageschools/havanna/spanishcourse.htm

If I check the source code of that page for the link leading to the London page, it looks like the following:

[...](languages/languageschools/london/london.htm)

So to me it looks like Google is ignoring the <base href="..."> and putting the path together as follows:

Part 1) http://www.domain.de/languages/languageschools/havanna/ (the page's own path, instead of the base href)
Part 2) languages/languageschools/london/london.htm

The result is the wrong path: http://www.domain.de/languages/languageschools/havanna/languages/languageschools/london/london.htm

I know finding a solution is not difficult; I can use absolute paths instead of relative ones. But: Does anyone have the same experience? Do you know other reasons which could cause such a problem? P.S.: I am quite sure that the CMS (Typo3) is not generating these paths randomly. I would like to be sure before we change the CMS's settings to absolute paths!
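The doubled path described above is exactly what standard relative-URL resolution produces when the `<base href>` is ignored. A quick sketch with Python's `urllib` reproduces both outcomes (the base href value is assumed to be the site root):

```python
# Reproduce the path resolution described in the question.
from urllib.parse import urljoin

page = "http://www.domain.de/languages/languageschools/havanna/spanishcourse.htm"
link = "languages/languageschools/london/london.htm"

# If a crawler ignores <base href>, the relative link is resolved against
# the page's own directory, producing the doubled "ghost" path:
print(urljoin(page, link))
# -> http://www.domain.de/languages/languageschools/havanna/languages/languageschools/london/london.htm

# Resolved against the (assumed) <base href="http://www.domain.de/">,
# the same link yields the intended URL:
print(urljoin("http://www.domain.de/", link))
# -> http://www.domain.de/languages/languageschools/london/london.htm
```

This supports the diagnosis in the question: the 404 URLs are what you get from plain RFC 3986 resolution with the base tag dropped.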
Algorithm Updates | SimCaffe
-
Problems with Google results
Hi everybody, I've been dealing with this issue for a while now. I have a multilingual website: www.vallnord.com

When I search for Vallnord in Google, it always shows the result in Catalan, and it does not show what I specified in the meta description; it displays text crawled from the home page. I have two problems here:

1. It is not showing my meta description. What can I do?
2. It is not showing the language from which the search was made. Example: if you search from Google.com and your default language is English, the result from the English site (www.vallnord.com/en) should be displayed, but it is not. It is always the Catalan version (the site's default language) that is displayed.

I have tried several things already: inserting hreflang annotations, changing the descriptions, and resubmitting the sitemap via Google Webmaster Tools.

I cannot figure out what is going on, because if you search "Vallnord Castellano" it will display the Spanish URL, but still not the proper description. Moreover, if you search "www.vallnord.com/es" on Google, it will display the proper URL and description.

FYI, I am using 301 redirects for the languages: es.vallnord.com is the same as www.vallnord.com/es.

In addition, with the Yahoo search engine there is no problem; it shows the proper language. From yahoo.com the first result is in English, and from yahoo.es the first result is in Spanish.

So, any idea what the problem could be? And furthermore, any idea what the solution would be? Thanks in advance, Guido.
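For hreflang to work, every language version of a page needs the full set of alternate annotations, including a self-reference and an x-default fallback. A minimal sketch of generating that tag set (the language-to-URL mapping below is illustrative, not Vallnord's actual configuration):

```python
# Illustrative mapping of language codes to the URLs of one page's versions.
LANG_URLS = {
    "ca": "http://www.vallnord.com/",     # default language (Catalan)
    "es": "http://www.vallnord.com/es",
    "en": "http://www.vallnord.com/en",
}

def hreflang_tags(lang_urls, default_lang="ca"):
    """Build one <link rel="alternate"> tag per language version.

    The same full tag set should appear in the <head> of every
    language version, self-reference included.
    """
    tags = [
        f'<link rel="alternate" hreflang="{code}" href="{url}" />'
        for code, url in sorted(lang_urls.items())
    ]
    # x-default tells Google which version to show for unmatched languages.
    tags.append(
        f'<link rel="alternate" hreflang="x-default" href="{lang_urls[default_lang]}" />'
    )
    return "\n".join(tags)

print(hreflang_tags(LANG_URLS))
```

A common cause of symptoms like the one described is annotations that are present on the default-language page but missing (or not reciprocal) on the alternates, in which case Google ignores them.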
Algorithm Updates | SilbertAd
-
Large site with faceted navigation using rel=canonical, but Google still has issues
First off, I just wanted to mention that I posted this on one other forum as well; I hope that is not against the rules here. Just trying to get ideas from the pros at both sources. Now for the question...

"Googlebot found an extremely high number of URLs on your site:" You've got to love these messages in GWT. Anyway, I wanted to get some other opinions here, so if anyone has experienced something similar or has any recommendations, I would love to hear them.

The site is very large and utilizes faceted navigation to help visitors sift through results. I have had rel=canonical implemented for many months now, so that each URL created by the faceted navigation filters points back to the main category page. However, I still get these messages from Google every month or so saying that they found too many pages on the site.

My main concern, obviously, is wasting crawl time on all these pages when I am already doing what Google asks in these instances: telling them to ignore the filtered URLs and find the content on page X. So at this point I am thinking about handling these via the robots.txt file, but I wanted to see what others around here thought before I dive into this arduous task. Plus, I am a little ticked off that Google is not following a standard they helped bring to the table.

Thanks in advance to those who take the time to respond.
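The rel=canonical scheme described above amounts to stripping the faceted-filter parameters from each URL so that every filtered view canonicalizes back to its category page. A minimal sketch of that logic (the parameter names are hypothetical, not this site's actual facets):

```python
# Hypothetical sketch: compute the canonical URL for a faceted-navigation
# page by dropping filter/sort parameters and keeping everything else.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative facet parameter names; a real site would list its own.
FACET_PARAMS = {"color", "size", "brand", "sort", "page"}

def canonical_url(url):
    """Strip faceted-navigation query parameters from a URL."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(query) if k not in FACET_PARAMS]
    return urlunsplit((scheme, netloc, path, urlencode(kept), ""))

print(canonical_url("https://example.com/shoes?color=red&size=10&sort=price"))
# -> https://example.com/shoes
```

Note that rel=canonical and robots.txt interact badly here: once a filtered URL is blocked in robots.txt, Googlebot can no longer fetch it to see the canonical tag, so blocking trades canonical consolidation for crawl savings.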
Algorithm Updates | PeteGregory