Noindex,follow - linked pages not showing
-
We have a blog on our site where the homepage and category pages are set to "noindex,follow" but the articles are set to "index,follow".
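For reference, the tags in question look like this in the page head (a simplified sketch - the exact markup our CMS outputs may differ):

    <!-- Blog homepage and category pages: kept out of the index, but links are still followed -->
    <meta name="robots" content="noindex,follow">

    <!-- Article pages: indexed normally (also the default when no robots meta tag is present) -->
    <meta name="robots" content="index,follow">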
Recently we noticed that the article pages are no longer showing in the Google SERPs (though they still appear in Bing!) - we checked this using the "site:" search operator.
We have double-checked our robots.txt file too, in case something silly had slipped in, but it is as it should be...
Has anyone else noticed similar behaviour or could suggest things I could check?
Thanks!
-
Well, you're on WordPress and are using Yoast SEO. When a WordPress category is created, a URL is generated for that category.
Your sitemap was created with Yoast:
Sitemap Last Modified:
- 2018-08-23 08:10 +01:00
- 2018-08-23 08:21 +01:00
- 2018-08-23 08:08 +01:00
- 2018-08-07 10:40 +01:00
- 2018-08-23 08:10 +01:00
- 2018-08-23 08:13 +01:00
I can see your articles are indexed now, but I would still recommend removing the WordPress category URLs from your sitemap. Since the sitemap should list the things you want Google to crawl and index, add the article URLs (the "index,follow" pages) directly to your XML sitemap instead of linking the category pages you don't want indexed (e.g. http://www.genetex.com/sitemap.xml).
Yoast should give you this option in its XML sitemap settings. If not, I would recommend using Screaming Frog to generate the sitemap.
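A minimal article-only sitemap would look something like this (the domain and article path below are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- List only the "index,follow" article URLs; leave the noindexed category pages out -->
      <url>
        <loc>http://www.example.com/blog/an-article-slug</loc>
        <lastmod>2018-08-23</lastmod>
      </url>
    </urlset>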
-
Just as an update to this question: I submitted XML sitemaps pointing directly to the blog articles, and those pages are still not showing in the Google SERPs. New pages seem to be discovered quite quickly (as per Google Alerts) but are then dropped from the index within a day or so.
The only pages which are returned consistently are the ones that allow comments to be added.
The links which were initially identified as broken were not actually broken, so there was nothing to fix there.
The next step I can think of is to attempt some page sculpting by setting a noindex on the comments pages...
If anyone has any more thoughts or ideas, I'd appreciate your input.
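For the comments pages, that would just mean adding something like this to the add-comment template (a sketch - I haven't confirmed where our templates would set this yet):

    <!-- Add-comment pages: thin near-duplicates, so keep them out of the index but let links be followed -->
    <meta name="robots" content="noindex,follow">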
-
Great - thanks for your help
-
I resubmitted the sitemap for the blog in GWT and no errors were found...
I have to say I am very surprised at the number of dead links - we don't have that many blog posts, so unless this is picking up content on our main site (where the pages are still indexed), the numbers don't add up. Even then, as I mentioned to Alan, the only missing content Google Webmaster Tools picks up on is where event tracking is used and it mistakes the label for a link... I did ask Google about these erroneous missing pages, and they said there is nothing that can be done to indicate they're not meant to be pages, and that it would not affect the site's quality.
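For context, the event-tracking markup that GWT trips over looks roughly like this (the category/action/label values here are made up):

    <!-- Universal Analytics event call; GWT can treat a path-like label string as a crawlable link -->
    <a href="javascript:void(0)" onclick="ga('send', 'event', 'Downloads', 'Click', '/datasheets/example.pdf');">Download datasheet</a>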
BTW, an article we published a few hours ago is now showing up in the Google results, so it does seem like the rest of the pages have been penalised.
Time to figure out what's going on with the missing pages...
Thanks, Irving
-
I sent the list. I had a bit of a look, and it may be that they were timing out.
-
Thanks Alan, have DM'ed you.
-
Submit a sitemap.xml file for the pages you want indexed. If they are linked to on the site and not blocked in robots.txt, they will get indexed again. And definitely fix that huge number of broken links - Google could be deciding that these pages are not worth anything because the links on them are all dead ends.
-
The broken links were found using the Bing API, so Bing will see them as such.
If you give me an email address, I will send you the list.
-
39 noindexed pages on the blog sounds about right, given the category pages.
I'm quite surprised at the number of broken links - is this specific to /blog, and are they actual links? GWT usually picks up event tracking as broken links...
Good point about the homepage - I should get a canonical tag on that...
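Something like this in the head of both blog homepage URLs, I'd guess (a sketch):

    <!-- On both http://www.abcam.com/blog/ and http://www.abcam.com/blog/index.cfm -->
    <link rel="canonical" href="http://www.abcam.com/blog/">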
Thanks!
-
I found 39 pages that have been noindexed - does that add up?
I also found 33,000 broken links.
Another problem you have is that both http://www.abcam.com/blog/ and http://www.abcam.com/blog/index.cfm are linked to on your site, which means the PageRank is split between them. You should link only to http://www.abcam.com/blog/.
-
The blog homepage is http://www.abcam.com/blog
@Alan: The rest of the site is indexable; it's just the blog area where noindex has been used (the blog homepage and category pages are auto-generated and repeat a lot of the content in the articles).
@Shailendra: Yes, they were indexed - the last Google Alert that specifically highlights content from the blog was in mid-June.
-
Firstly, you don't need to write index,follow on normal pages - that is the default. Secondly, as you say the pages are "no longer showing in Google SERPs", that means they were indexed earlier, right? If they are no longer in Google's index, it points to penalization. Please give the URL of your website.
-
It may have something to do with the homepage being noindexed, as that is unusual.
Can we get a URL? I may spot what you missed.
-
Hi,
Can you please share the URL?