Cleaning up broken nofollow links in user-generated content
-
We have a question/answer section on our website, so it's user-generated content. We've programmed all user-generated links to be nofollow. Over time... we now have many broken links, and some are even structurally invalid, e.g. 'http:///.'. I want to go in and clean up the links to improve user experience, but how do I justify it from an SEO standpoint, and is it worth it?
-
Applying Broken Windows Theory to SEO is such an underrated tactic. It's totally worth the time. Will you be able to directly attribute revenue to the cleanup? Probably not. Will it improve the overall quality and user experience of the site? Absolutely, 100%, and that's where it becomes an SEO play - because that better quality and better UX is exactly what Google is aiming to reward in the long run. And because your site no longer looks like an easy mark for spammers, it should attract less spam over time.
Also, adding to MattAntonino's comment, Paul Haahr said a few weeks ago that the quality rater guidelines are basically Google's ideal algorithm, so you can count on Google working to incorporate as much of that as they can into the algorithm over time, as they figure out how to automate it instead of relying on human raters. So even if it's not there now, count on it being there in the future. Future-proofing is always a good idea.
-
I would definitely argue in favor of this. Cleaning up broken links, changing the copyright date on websites, adding new content - it all sends signals to Google that the site is maintained regularly and has active management. A site that is regularly updated is more valuable than one that is created and then left to rot.
While Matt Cutts said in 2013 (eons ago in SEO) that broken links weren't a ranking factor, the Google Search Quality Raters Handbook says they are a factor for manual review.
They actually say:
Webmasters need to make sure their websites function well for users as web browsers change. How can you tell that a website is being maintained and cared for? Poke around: Links should work, images should load, content should be added and updated over time, etc. Exercise caution relying on dates: Some webpages automatically display the current date. Rather than just looking for a recent date, search for evidence that effort is being made to keep the website up to date and running smoothly.
When the Raters Handbook says that, I fix broken links.
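If you want to find these programmatically before cleaning them up, a short script can separate structurally invalid URLs (like the 'http:///.' example in the question) from links that are merely dead. A minimal sketch, assuming Python 3 with the third-party requests library; the sample URLs at the end are placeholders for links extracted from your own Q&A pages.

```python
# Minimal sketch: separate structurally invalid URLs from merely dead links
# in user-generated content. Assumes Python 3 with the third-party
# "requests" library installed; the sample URLs below are placeholders.
from urllib.parse import urlparse

import requests


def is_structurally_valid(url: str) -> bool:
    """Reject URLs like 'http:///.' that have a scheme but no host."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)


def check_link(url: str, timeout: float = 10.0) -> str:
    """Classify a link as invalid, ok, broken, or unreachable."""
    if not is_structurally_valid(url):
        return "invalid"
    try:
        # HEAD keeps traffic light; some servers reject it (405), in which
        # case falling back to GET is a common refinement.
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        return "ok" if resp.status_code < 400 else f"broken ({resp.status_code})"
    except requests.RequestException:
        return "unreachable"


for url in ["http:///.", "https://moz.com/", "http://example.invalid/page"]:
    print(url, "->", check_link(url))
```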
Related Questions
-
Is there any ratio of dofollow and nofollow in a backlink profile?
Hi, Is there an ideal ratio between dofollow and nofollow backlinks for a website? Does a website really need some nofollow backlinks? Thanks
Intermediate & Advanced SEO | vtmoz
-
Duplicate content for hotel websites - the usual nightmare? Is there any solution other than producing unique content?
Hiya Mozzers, I often work for hotels. A common scenario is that the hotel/resort has worked with their Property Management System to distribute their booking availability around the web to third-party booking sites, and along with the inventory go duplicate page descriptions, sent to these "partner" websites. I was just checking duplication on one room description: 20 copies of the duplicate description for that page alone. There are 200 rooms, so I'm probably looking at around 4,000 instances of duplicate content that need rewriting to prevent duplicate content penalties, which would cost a huge amount of money. Is there any other solution? Perhaps ask booking sites to block the relevant pages from search engines? (One markup-level option is sketched below.)
Intermediate & Advanced SEO | McTaggart
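For the hotel question above: one option often suggested for syndicated content, beyond rewriting everything, is a cross-domain rel=canonical on the partner sites' copies pointing back to the hotel's original page. This is a hedged sketch; both URLs are invented, and it only works if the booking partners agree to add the tag.

```html
<!-- Hypothetical sketch for a partner booking site's copy of a room page.
     Placed in that page's <head>, it points search engines back to the
     hotel's original description. Both URLs are invented examples. -->
<link rel="canonical" href="http://www.example-hotel.com/rooms/deluxe-double" />
```
-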
Do links from Twitter count in SEOMoz's Toolbar link count?
I am using the Chrome extension and looking at a SERP. When a page is said to have 2,000 incoming links, does that include tweets with a link back to this page? What about retweets? Are those counted separately or as one? And what about independent tweets that have exactly the same content (tweet text + link)?
Intermediate & Advanced SEO | davhad
-
Linking across categories
On a website, when I link across pages in the same category, should all the category links appear on each page? Let's say I have 6 categories and 6 pages. Should I have the 6 links on all the pages (such as A, B, C, D, E on page 1, where page 1 is page F; then on page A have links B, C, D, E, F; and so on for all 6 pages, meaning all the links appear on all the pages across the category)? Or should I just have, let's say, 3 links on page 1 (links A, B, C), then links D, E, F on page 2, then A, E, F on page 3, links B, C, F on page 4, and so on... which means that I vary the links that appear, and that it is naturally (at least I think) going to boost the link that appears most often across the 6 pages? I hope this is not too confusing. Thank you,
Intermediate & Advanced SEO | seoanalytics
-
Duplicate page content
Hi. I am getting a duplicate content error on my website, and the pages it's showing are www.mysitename.com and www.mysitename.com/index.html. To the best of my knowledge it is only one page. I know this can be solved with a canonical tag in the header, but I do not know how. Can anyone please tell me about that code (see the sketch below) or any other way to get this solved? Thanks
Intermediate & Advanced SEO | onlinetraffic
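A minimal sketch of the canonical tag the question above asks about, using the asker's placeholder domain (mysitename.com). Placed in the page's <head>, it tells search engines which of the two URLs is the preferred version; a server-side 301 redirect from /index.html to the root URL is the other common fix.

```html
<!-- Sketch using the asker's placeholder domain. Add inside <head> so both
     www.mysitename.com and www.mysitename.com/index.html declare the same
     preferred URL. -->
<link rel="canonical" href="http://www.mysitename.com/" />
```
-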
Reciprocal link finder tool - not looking to do reciprocal links.
The company I work for had an old SEO company that did a lot of reciprocal links with websites that are not what we want to be associated with. Does anyone know of a tool that might be able to tell us if there are still reciprocal links to our site? I want to try and find them, but the old pages we had with the outgoing links have been deleted.
Intermediate & Advanced SEO | b2bcfo
-
First Link Priority question - image/logo in header links to homepage
I have not found a clear answer to this particular aspect of the "first link priority" discussion, so I wanted to ask here. Noble Samurai (makers of the Market Samurai SEO software) just posted a video discussing this topic, referencing specifically a use case where, when you disable all the CSS and view the page the way Google sees it, companies often use an image/logo in their header which links to their homepage. In my case, if you visit our site you can see the logo linking back to the homepage, which is present on every page within the site. When you disable the styling and view the site in a linear path, the logo is the first link. I'd love for our first link to our homepage to include a primary keyword phrase as anchor text. Noble Samurai (presumably SEO experts) posted a video explaining this specifically http://www.noblesamurai.com/blog/market-samurai/website-optimization-first-link-priority-2306 and their suggested code implementations to "fix" it http://www.noblesamurai.com/first-link-priority-templates which use CSS and/or JavaScript to alter the way it is presented to the spiders. My web developer referred me to Google's Webmaster Central: http://www.google.com/support/webmasters/bin/answer.py?answer=66353 where they seem to indicate that this would be attempting to hide text/links. Is this a good or bad thing to do? (A sketch of the kind of markup in question follows below.)
Intermediate & Advanced SEO | dcutt
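For illustration only, and not an endorsement, since the hidden-text concern raised above is exactly the risk: templates like the ones being discussed typically put a keyword-anchored text link first in the source order and use CSS image replacement to display the logo instead. A hedged sketch with made-up class names, paths, and anchor text:

```html
<!-- Hedged sketch of the CSS image-replacement approach under discussion.
     Class name, logo path, and anchor text are invented. The text link is
     first in source order; CSS shows the logo and hides the text, which
     is exactly what Google's hidden-text guidelines warn about. -->
<a class="home-link" href="/">Primary Keyword Phrase</a>
<style>
  .home-link {
    display: block;
    width: 200px;
    height: 60px;
    background: url(/images/logo.png) no-repeat;
    text-indent: -9999px; /* pushes the anchor text off-screen */
    overflow: hidden;
  }
</style>
```
-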
Robots.txt: Link Juice vs. Crawl Budget vs. Content 'Depth'
I run a quality vertical search engine. About 6 months ago we had a problem with our sitemaps, which resulted in most of our pages getting tossed out of Google's index. As part of the response, we put a bunch of robots.txt restrictions in place on our search results to prevent Google from crawling through pagination links and other parameter-based variants of our results (sort order, etc; see the sketch below for the kind of rules I mean). The idea was to 'preserve crawl budget' in order to speed the rate at which Google could get our millions of pages back in the index by focusing attention/resources on the right pages. The pages are back in the index now (and have been for a while), and the restrictions have stayed in place since that time. But, in doing a little SEOMoz reading this morning, I came to wonder whether that approach may now be harming us... http://www.seomoz.org/blog/restricting-robot-access-for-improved-seo http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions Specifically, I'm concerned that a) we're blocking the flow of link juice, and b) by preventing Google from crawling the full depth of our search results (i.e. pages >1), we may be making our site wrongfully look 'thin'. With respect to b), we've been hit by Panda and have been implementing plenty of changes to improve engagement, eliminate inadvertently low-quality pages, etc, but we have yet to find 'the fix'... Thoughts? Kurus
Intermediate & Advanced SEO | kurus
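For context, the restrictions described above might look something like this in robots.txt. The parameter names are invented for illustration; Google supports the * wildcard used here:

```
# Hypothetical robots.txt rules of the kind described above.
# The parameter names are invented for illustration.
User-agent: *
# Block sort-order and pagination variants of search result pages
Disallow: /*?sort=
Disallow: /*&sort=
Disallow: /*?page=
Disallow: /*&page=
```

The trade-off the poster is asking about is real: URLs blocked in robots.txt cannot pass link equity through their links, whereas a rel=canonical on the parameter variants would consolidate signals rather than cut those pages off entirely.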