Help with pages Google is labeling "Not Followed"
-
I am seeing a number of pages that I am 301 redirecting coming up under the "Not Followed" category of the advanced index status in Google Webmaster Tools. Google says this might be because the page is still showing active content or the redirect is not correct. I don't know how to tell if the page is still showing active content; if someone can please tell me how to determine this, it would be greatly appreciated. Also, if you can suggest how to adjust my page to make sure the content does not appear to be active, that would be amazing. Thanks in advance; here are a few links to pages that are experiencing this:
-
Hi Joshua -
If someone is linking to the www version, it doesn't pass as much juice as it would if it weren't redirected (there's lots of info on this online, with varied opinions). Overall, most SEOs agree that an inbound link that points directly to a page without being 301 redirected has more of a positive SEO effect.
With that being said, in your case Google Webmaster Tools may be detecting this double redirect error simply because there is an external website somewhere linking to the 'www' version. You can find this using OSE, or in WMT by going to Crawl Errors and looking for the sunny-isles URL. Clicking on it (if it's there) will show who is linking to you and from where.
BTW - when did you do the redirects, and how long since you noticed the new URL wasn't indexed (and was the old URL indexed)?
-
The 301 will preserve some of the authority passed through from the www version of the link.
One note - Google sometimes has a rough time with consecutive 301s. Normally it's only a problem if there are several in a row. Here you have two. You might consider reducing that to 1...?
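If the site runs on Apache, one way to collapse the two hops is a single rewrite rule placed before WordPress's generic www-to-non-www rule. This is only a sketch under that assumption: the thread's real domain is truncated, so "example.com" and the filenames here are stand-ins, and the actual .htaccess would need checking first.

```apache
# Hypothetical .htaccess sketch: send any request for the old
# sunnyisles.html page (www or non-www host) straight to the final
# non-www sunny-isles.html in one 301. Placing this above WP's own
# rules means the request never takes the intermediate hop.
# "example.com" stands in for the real (truncated) domain.
RewriteEngine On
RewriteRule ^sunnyisles\.html$ http://example.com/sunny-isles.html [R=301,L]
```

Because the rule rewrites to an absolute non-www URL, a request for either host variant of the old page resolves in a single redirect.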
-
Might as well, yes.
-
Hello Ian,
Thanks for your help as well. Question for you: I currently have not set a preferred version in my Google Webmaster Tools account. Do you think I should go ahead and establish the non-www version as my setting?
Thanks.
-
Hi Jared,
Thank you very much for answering my question. So if someone is linking to me from another site but uses the www version of a URL, does it not help my SEO?
And if this is the case, what do you recommend I do?
Thanks.
-
Hi Joshua,
It looks like you're redirecting from the 'www' version to the non 'www' version. The 301 redirect is set up just fine.
2 things to check first:
- In Google Webmaster Tools, do you have the preferred domain set to the 'www' version? That might cause this confusion.
- In robots.txt, you're blocking Google Image Bot from crawling that folder. I once saw an instance where that tripped up Googlebot as well, and removing the disallow fixed the problem.
Ian
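For reference, the kind of robots.txt directive being described would look something like the sketch below. The actual paths in the site's robots.txt aren't shown in this thread, so the folder name here is hypothetical:

```
# Hypothetical sketch - blocks only Google's image crawler
# from a folder; the real file's paths may differ.
User-agent: Googlebot-Image
Disallow: /images/
```

Removing the `Disallow` line (or the whole block) lifts the restriction for that crawler.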
-
The link you referenced has 'www' in it; is that how the link is targeted on your website? If so, it's probably the double redirect that is causing the issue. Since WP is set to 'non-www', every time there is a request for the www version of a URL, WP automatically 301 redirects it to the non-www version. There is nothing wrong with this.
It's when there is a request for a 'www' version of a URL that has also been redirected, as the one you cited has, that a double redirect takes place:
http://www.luxuryhome..../sunnyisles.html
to the 'non-www' version:
http://luxuryhome.../sunnyisles.html
then from there to the new html file version:
http://luxuryhome.../sunny-isles.html
The header check shows a normal www to non-www redirect first (WP is doing this), and then the 301 redirect that changes sunnyisles to sunny-isles. Both server responses look OK, so the redirects themselves appear to be working. What you want to make sure of is:
- Any internal links pointing to the old sunnyisles.html page do not contain 'www'. (In any event, these links should be changed to point to the new page anyway.)
- Any inbound links from external sources do not reference the 'www' version.
It would be helpful if we could see the .htaccess file as well.
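To count the hops yourself, you can walk each Location header hop by hop (e.g. with a header checker or `curl -sIL`). As a minimal sketch of the logic, the Python helper below follows a table of redirects instead of live URLs; the mapping is hypothetical, with "example.com" standing in for the real (truncated) domain:

```python
def redirect_chain(url, redirects, max_hops=10):
    """Follow a {url: redirect_target} mapping and return the full
    chain of URLs visited, starting with the original request."""
    chain = [url]
    while url in redirects and len(chain) <= max_hops:
        url = redirects[url]
        chain.append(url)
    return chain

# Hypothetical mapping mirroring the two hops described above:
# www -> non-www (WP), then old filename -> new filename (.htaccess).
hops = {
    "http://www.example.com/sunnyisles.html": "http://example.com/sunnyisles.html",
    "http://example.com/sunnyisles.html": "http://example.com/sunny-isles.html",
}

chain = redirect_chain("http://www.example.com/sunnyisles.html", hops)
print(len(chain) - 1, "redirects:", " -> ".join(chain))
```

Any inbound URL whose chain is longer than one hop is a candidate for collapsing into a single 301.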