Best Way To Go About Fixing "HTML Improvements"
-
So I have a site where I was creating dynamic pages for a while, and some of them accidentally ended up with lots of near-identical meta tags and titles. I then restructured the site but left those duplicate tags in place for a while, not knowing what had happened. Recently I resumed my SEO campaign and noticed these errors were there. So I did the following:
- Removed the pages.
- Removed the directories containing those dynamic pages with the removal tool in Google Webmaster Tools.
- Blocked Google from crawling those pages via robots.txt.
I have verified that the robots.txt works, and the pages are no longer in Google search...however they still show up in the HTML Improvements section after a week (it has updated a few times). So I decided to remove the robots.txt block and add 301 redirects instead.
Does anyone have any experience with this, and am I going about this the right way? Any additional info is greatly appreciated. Thanks.
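For anyone in a similar spot: because a robots.txt block stops Googlebot from recrawling the blocked URLs, it can actually keep stale data (like old duplicate titles) in the reports longer, so switching to 301s is generally the better route. A minimal sketch for spot-checking that the old URLs now redirect (Python with the `requests` package; the URLs are hypothetical placeholders):

```python
import requests

# Hypothetical examples - replace with your own removed dynamic URLs.
old_urls = [
    "https://www.example.com/dynamic-page-1",
    "https://www.example.com/dynamic-page-2",
]

for url in old_urls:
    # allow_redirects=False so we see the redirect status itself,
    # not the page it eventually lands on.
    resp = requests.head(url, allow_redirects=False, timeout=10)
    target = resp.headers.get("Location", "-")
    flag = "OK" if resp.status_code == 301 else "CHECK"
    print(f"{flag}  {resp.status_code}  {url} -> {target}")
```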
-
Great advice here.
Just to add: Google Search Console seems to update its reports more slowly than the search index, so it is possible to see old errors for a while after they have been fixed, until the pages are re-crawled.
Kind Regards
Jimmy
-
Hi there
I wouldn't remove pages just because they had issues. Some of that content may hold value; it's just a matter of making sure that your on-site SEO is unique to those pages. Your users may be searching for it - make sure you research and tailor those pages to your users' intent.
Google also offers advice on duplicate content, including parameters and dynamic pages, so make sure you read through that before you just start discarding pages/content.
Hope this helps! Good luck!
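Since the HTML Improvements report is essentially flagging duplicate titles and descriptions, a rough sketch like this can help find which live pages still share a title before deciding what to keep (Python standard library only; the URL list is a hypothetical placeholder):

```python
from collections import defaultdict
from html.parser import HTMLParser
from urllib.request import urlopen

class TitleParser(HTMLParser):
    """Collects the text inside the page's <title> element."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

# Hypothetical URL list - swap in your own pages (e.g. from a sitemap).
urls = ["https://www.example.com/page-a", "https://www.example.com/page-b"]

titles = defaultdict(list)
for url in urls:
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    parser = TitleParser()
    parser.feed(html)
    titles[parser.title.strip()].append(url)

for title, pages in titles.items():
    if len(pages) > 1:
        print(f"Duplicate title {title!r} on: {pages}")
```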
Related Questions
-
When "pruning" old content, is it normal to see an drop in Domain Authority on Moz crawl report?
After reading several posts about the benefits of pruning old, irrelevant content, I went through a content audit exercise to kick off the year. The biggest category of changes so far has been to noindex + remove from sitemap a number of blog posts from 2015/2016 (which were very time-specific, i.e. software release details). I assigned many of the old posts a new canonical URL pointing to the parent category. I realize it'd be ideal to point to a more relevant/current blog post, but could this be where I've gone wrong? Another big change was to hide the old posts from the archive pages on the blog. Any advice/experience from anyone doing something similar much appreciated! Would be good to be reassured I'm on the right track and a slight drop is nothing to worry about. 🙂 If anyone is interested in having a look: https://vivaldi.com https://vivaldi.com/blog/snapshots [this is the category where changes have been made, primarily] https://vivaldi.com/blog/snapshots/keyboard-shortcut-editing/ [example of a pruned post]
Intermediate & Advanced SEO | | jonmc1 -
Google indexing "noindex" pages
A week ago my website expanded with a lot more pages. I included "noindex, follow" on a lot of these new pages, but then 4 days ago I saw the number of pages Google indexed had increased. Should I expect these pages to be properly noindexed in 2-3 weeks - is it just a delay? It seems odd to me that a few days after adding "noindex" to pages, Webmaster Tools shows an increase in indexing - that the pages were indexed, in other words. My website is relatively new and these new pages are not pages Google frequently indexes.
Intermediate & Advanced SEO | | khi50 -
Does my website have an Exact Match Domain or a "brand"?
I'd like to get some input from the Moz community about the domain name I use on a travel website I run as a hobby. I got heavily whacked by an update in September 2012, which some have said was because my site is an EMD. Others said it was because I had poor-quality backlinks (but in fact I hardly had any). With the benefit of hindsight, I'd love to know what really happened. The website is www.traveltipsthailand.com (now www.asiantraveltips.com) and the "brand" I use is "Travel Tips Thailand". The traffic penalty I incurred was around 80%, and despite a LOT of work overhauling the site and trying to build some better-quality links, I don't believe it has really recovered much. It ranks for non-competitive, low-traffic key phrases (which means it's not penalised as such), but struggles to rank anywhere meaningful on any phrase likely to drive traffic to the site. At this stage I really just want to know whether to persist with the site (it's heartbreaking, to be honest) or drop it and build something new from scratch. I monitor the site's progress using Moz Pro, so I can see all the search ranking, authority and backlink data.
Intermediate & Advanced SEO | | Gavin.Atkinson0 -
What's the best way to redirect categories & paginated pages on a blog?
I'm currently redoing my blog and have a few categories that I'm getting rid of, for housecleaning purposes and crawl efficiency. Each of these categories has many pages (some have hundreds). The new blog will also not have relevant categories to redirect them to (1 or 2 may work). So what is the best place to redirect these pages to? And how do I handle the paginated URLs? The only logical place I can think of is the homepage of the blog, but since there are so many pages, I don't know if that's the best idea. Does anybody have any thoughts?
Intermediate & Advanced SEO | | kking41200 -
Panda Recovery - What is the best way to shrink your index and make Google aware?
We have been hit significantly by Panda and assume that our large index, with some pages holding thin/duplicate content, is the reason. We have reduced our index size by 95% and have done significant content development on the remaining 5% of pages. For the old, removed pages, we have installed 410 responses (the page no longer exists) and made sure that they are removed from the sitemap submitted to Google; however, after over a month we still see the Google spider returning to the same pages, and Webmaster Tools shows no indication that Google is shrinking our index size. Are there more effective or automated ways to make Google aware of a smaller index size, in the hope of a Panda recovery? Potentially using the robots.txt file, the GWT URL removal tool, etc.? Thanks /sp80
Intermediate & Advanced SEO | | sp800 -
How does Google treat internal links with rel="nofollow"?
Today I was reading about nofollow on Wikipedia. The following statement is over my head and I am not able to understand it properly: "Google states that their engine takes "nofollow" literally and does not "follow" the link at all. However, experiments conducted by SEOs show conflicting results. These studies reveal that Google does follow the link, but does not index the linked-to page, unless it was in Google's index already for other reasons (such as other, non-nofollow links that point to the page)."
That passage is about indexing, and about ranking for the anchor-text keywords of external links; I am aware of that part - such pages may simply not appear in relevant results for a keyword search on Google. But what about internal links? I have defined the rel="nofollow" attribute on a great many internal links.
I also read an archived blog post by Randfish on the same subject, which included this exchange: Q. Does Google recommend the use of nofollow internally as a positive method for controlling the flow of internal link love? [In 2007] A: Yes - webmasters can feel free to use nofollow internally to help tell Googlebot which pages they want to receive link juice from other pages. (Matt's precise words were: The nofollow attribute is just a mechanism that gives webmasters the ability to modify PageRank flow at link-level granularity. Plenty of other mechanisms would also work (e.g. a link through a page that is robots.txt'ed out), but nofollow on individual links is simpler for some folks to use. There's no stigma to using nofollow, even on your own internal links; for Google, nofollow'ed links are dropped out of our link graph; we don't even use such links for discovery. By the way, the nofollow meta tag does that same thing, but at a page level.)
Matt also gave an excellent answer to the following question. [In 2011] Q: Should internal links use rel="nofollow"? A: Matt said: "I don't know how to make it more concrete than that." I use nofollow for each internal link that points to an internal page that has the meta name="robots" content="noindex" tag. Why should I waste Googlebot's resources, and those of my server, if in the end the target must not be indexed? As far as I can tell, and for years now, this has not caused any problems at all. For internal page anchors (links with a hash mark in front, like "#top"), the answer is "no", of course.
I am still using nofollow attributes on my website. So, what is the current practice? Is it still necessary to use the nofollow attribute for internal pages?
Intermediate & Advanced SEO | CommercePundit0 -
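For anyone auditing a setup like the one described above (nofollowing internal links that point at noindexed pages), a small sketch along these lines lists which internal links on a page carry rel="nofollow" (Python standard library; the URL is a hypothetical placeholder):

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects (href, rel) pairs for every anchor on the page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            a = dict(attrs)
            self.links.append((a.get("href") or "", a.get("rel") or ""))

# Hypothetical page to audit.
url = "https://www.example.com/"
html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")

parser = LinkParser()
parser.feed(html)

for href, rel in parser.links:
    # Internal links only (site-relative paths, in this simple sketch).
    if href.startswith("/") and "nofollow" in rel:
        print(f"nofollowed internal link: {href}")
```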
Questions regarding Google's "improved handling of URLs with parameters"
Google recently posted about improved handling of URLs with parameters: http://googlewebmastercentral.blogspot.com/2011/07/improved-handling-of-urls-with.html I have a couple of questions: Is it better to canonicalize URLs or use parameter handling? Will Google inform us if it finds a parameter issue? Or should we prepare a list of parameters that should be addressed?
Intermediate & Advanced SEO | | nicole.healthline0