Submitted URL marked 'noindex'
-
Search Console is reporting this issue for nearly 100 pages of my website. I have checked the Yoast plugin settings. We haven't set any meta robots tag for these pages, nor have these pages been disallowed in robots.txt.
Previously this issue affected some 20+ pages. I tried to get them reindexed by submitting the URLs again. Now the count has risen to 100+.
There is also a "Submitted URL blocked by robots.txt" issue for pages that are NOT disallowed in robots.txt.
Can anyone please suggest a solution here?
-
Then we DO need to see an example to work out why it's firing.
-
Those pages are allowed everywhere.
-
No, we haven't used a meta noindex tag in our HTML code. We don't even have noindex in the X-Robots-Tag header.
-
Forget robots.txt; it has nothing to do with pages being marked noindex. Either the meta noindex tag is being used somewhere in your code (the HTML), or it is being sent in your HTTP response via the X-Robots-Tag header. If you share an example URL we can work out which of those it is, which will at least narrow it down a little for you!
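To check both of those places yourself, something like the sketch below works. It only shows the inspection logic on made-up sample data; in practice you'd fetch each affected page (e.g. with urllib.request or requests) and pass its response headers and body in. The regex is a simplification and assumes `name` comes before `content` in the tag.

```python
# Find where a noindex is coming from: the X-Robots-Tag HTTP header
# or a <meta name="robots"> tag in the HTML. Sample data below is
# hypothetical -- feed in a real page's headers and body.
import re

def find_noindex(headers, html):
    """Return a list of the source(s) carrying a noindex directive."""
    sources = []
    # Header check: X-Robots-Tag may carry values like "noindex, nofollow".
    # (Real-world header names can vary in case; normalize if needed.)
    x_robots = headers.get("X-Robots-Tag", "")
    if "noindex" in x_robots.lower():
        sources.append("X-Robots-Tag header")
    # HTML check: <meta name="robots" ...> and the googlebot-specific variant.
    for name, content in re.findall(
        r'<meta[^>]+name=["\'](robots|googlebot)["\'][^>]+content=["\']([^"\']*)["\']',
        html,
        flags=re.IGNORECASE,
    ):
        if "noindex" in content.lower():
            sources.append(f'<meta name="{name}"> tag')
    return sources

sample_headers = {"X-Robots-Tag": "noindex"}
sample_html = '<head><meta name="robots" content="noindex,follow"></head>'
print(find_noindex(sample_headers, sample_html))
```

Running this across the flagged URLs should tell you quickly whether the directive is coming from the markup (theme/plugin territory) or the server response (web server or CDN configuration).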
-
If you have not set the robots tag to noindex in Yoast and you haven't hardcoded it somewhere in your head, there is still the WordPress option to discourage search engines from crawling/indexing the site. It's a checkbox under Settings → Reading ("Discourage search engines from indexing this site").
Without more details we can only guess...