Help with pages Google is labeling "Not Followed"
-
I am seeing a number of pages that I have set up 301 redirects for showing up under the "Not Followed" category of the advanced index status in Google Webmaster Tools. Google says this might be because the page is still showing active content or the redirect is not set up correctly. I don't know how to tell whether a page is still showing active content, so if someone can tell me how to determine this it would be greatly appreciated. If you can also suggest how to adjust my pages so the content no longer appears to be active, that would be amazing. Thanks in advance; here are a few links to pages that are experiencing this:
-
Hi Joshua -
If someone is linking to the www version, it doesn't pass as much juice as it would if it weren't redirected (there's a lot of information on this online, with varying opinions). Overall, most SEOs agree that an inbound link pointing directly to a page, without being 301 redirected, has more of a positive SEO effect.
With that said, in your case Google Webmaster Tools may be detecting this double-redirect error simply because an external website somewhere is linking to the 'www' version. You can find this using Open Site Explorer, or in Webmaster Tools by going to Crawl Errors and looking for the sunny-isles URL. Clicking on it (if it's there) will show who is linking to you and from where.
By the way - when did you do the redirects, how long ago did you notice the new URL wasn't indexed, and was the old URL indexed?
-
The 301 will preserve some of the authority passed through from the www version of the link.
One note: Google sometimes has a rough time with consecutive 301s. Normally it's only a problem when there are several in a row, but here you have two. You might consider reducing that to one.
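If you want to see exactly how many hops a URL goes through, here's a minimal sketch (Python 3, standard library only) that follows the chain manually and prints each status code. The URL at the bottom is a placeholder rather than one of your actual pages; ideally the old www address should answer with a single 301 straight to the final page.

import http.client
from urllib.parse import urlsplit, urljoin

def redirect_chain(url, max_hops=10):
    """Follow redirects manually; return a list of (status, url) hops."""
    hops = []
    for _ in range(max_hops):
        parts = urlsplit(url)
        conn_class = http.client.HTTPSConnection if parts.scheme == "https" else http.client.HTTPConnection
        conn = conn_class(parts.netloc, timeout=10)
        path = parts.path or "/"
        if parts.query:
            path += "?" + parts.query
        conn.request("HEAD", path)
        response = conn.getresponse()
        hops.append((response.status, url))
        location = response.getheader("Location")
        conn.close()
        if response.status in (301, 302, 303, 307, 308) and location:
            url = urljoin(url, location)  # follow the next hop
        else:
            break
    return hops

# Placeholder URL -- swap in the old 'www' address you are testing.
for status, hop_url in redirect_chain("http://www.example.com/old-page.html"):
    print(status, hop_url)

If the old URL prints two redirect lines before the final 200, that confirms the double hop Google is complaining about.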
-
Might as well, yes.
-
Hello Ian,
Thanks for your help as well. Question for you: I currently have not set a preferred version in my Google Webmaster Tools account. Do you think I should go ahead and establish the non-www version as my setting?
Thanks.
-
Hi Jared,
Thank you very much for answering my question. So if someone is linking to me from another site but uses the www version of a URL, does it not help my SEO?
And if this is the case, what do you recommend I do?
Thanks.
-
Hi Joshua,
It looks like you're redirecting from the 'www' version to the non-'www' version, and the 301 redirect itself is set up just fine.
Two things to check first:
- In Google Webmaster Tools, do you have the preferred domain set to the 'www' version? That could be causing the confusion.
- In robots.txt, you're blocking Google's image bot (Googlebot-Image) from crawling that folder. I once saw an instance where that tripped up Googlebot as well, and removing the disallow fixed the problem. (A quick way to check what your robots.txt is actually blocking is sketched below.)
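A rough sketch of that robots.txt check, using only Python 3's standard library. The domain, folder, and filename below are placeholders, and Google's own robots.txt tester in Webmaster Tools remains the authoritative check, since its matching rules can differ slightly from Python's parser.

from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("http://example.com/robots.txt")  # placeholder domain
parser.read()

# Placeholder path standing in for the folder in question.
url = "http://example.com/listings/sunny-isles.html"
for agent in ("Googlebot", "Googlebot-Image", "*"):
    verdict = "allowed" if parser.can_fetch(agent, url) else "blocked"
    print(f"{agent:16} {verdict}  {url}")

If "Googlebot" comes back blocked for a URL you expect to be crawled, the disallow rule is broader than intended.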
Ian
-
The link you referenced has 'www' in it; is that how the link is targeted on your website? If so, it's probably the double redirect that is causing the issue. Since WP is set to 'non-www', every time there is a call for the www version of a URL, WP automatically 301 redirects it to the non-www version. There is nothing wrong with this.
The problem arises when there is a call for a 'www' version of a URL that has itself also been redirected, as the one you cited has. In that case a double redirect takes place:
http://www.luxuryhome..../sunnyisles.html
to the 'non-www' version:
http://luxuryhome.../sunnyisles.html
then from there to the new html file version:
http://luxuryhome.../sunny-isles.html
The header check shows a normal www to non-www redirect first (WP is doing this), and then the 301 redirect that changes sunnyisles to sunny-isles. Both server responses look fine, so the redirects themselves seem to be working. What you want to make sure of is:
Any internal links pointing to the old sunnyisles.html page do not contain 'www' (and in any event, those links should be updated to point directly to the new page anyway; a quick way to scan a page for them is sketched at the end of this reply).
Any inbound links from external sources do not reference the 'www' version.
It would be helpful if we could see the .htaccess file as well.
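As for the internal-link scan mentioned above, here is a rough sketch (Python 3, standard library only) that fetches a page and flags any links still using the 'www' host or the old sunnyisles.html filename. The start URL is a placeholder; run it against your own templates or key pages.

import re
import urllib.request
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect every href found in <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def flag_stale_links(page_url):
    """Print any links on page_url that still point at 'www' or the old filename."""
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    collector = LinkCollector()
    collector.feed(html)
    for href in collector.links:
        if "//www." in href or re.search(r"sunnyisles\.html", href, re.IGNORECASE):
            print("check this link:", href)

# Placeholder start page -- point this at the pages that link to the old URL.
flag_stale_links("http://example.com/")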