Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will remain viewable), we have locked both new posts and new replies. More details here.
How to change noindex to index?
-
Hey,
I've recently upgraded to a pro SEOmoz account and have realised I have 14,574 issues to do with 'blocked by meta-robots': 'This page is being kept out of the search engine indexes by the meta tag, which may have a value of "noindex", keeping this page out of the index.'
How can I change this so my pages get indexed?
I read somewhere that I need to change my privacy settings, but that thread was 3 years old and the WP Dashboard has been updated since.
Please let me know.
Many thanks, Jamie
P.S. I'm using WordPress 3.5, I have the XML sitemap plugin, and I have no idea where to look for this robots.txt file.
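For anyone checking their own page source: when WordPress's privacy setting is blocking search engines, the tag it prints into the page's head typically looks like the following (the exact quoting and attribute values can vary by WordPress version and by SEO plugins):

```html
<!-- The meta tag behind the "blocked by meta-robots" warning -->
<meta name="robots" content="noindex,nofollow" />
```

If you view source on an affected page (Ctrl+U in most browsers) and search for "noindex", this is the line to look for.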
-
Answered below as well, but wanted to drop this here in case anyone else is looking. WP has changed the location of what used to be "Privacy" under Settings. The functionality (which blocks search engines from your WordPress installation) is now under Settings -> Reading.
[Screenshot]
-Dan
-
Hi Mark
Did you find it? I struggled for a bit too, but it's moved to Settings -> Reading.
-Dan
-
Just updated it to this: http://gyazo.com/4a8a008055abbd563f96bf29b6b259a6.png?1357651763
And then checked my page sources and they're still 'noindex' - why can't I correct this?!
-
Just installed it and now it's added this field into Settings -> Reading:
http://gyazo.com/0be601793fc1cb866d918ea61e7d8ec1.png?1357649141
What do I need to change to allow it to index all my pages? (I don't want to type something in that will block all my pages.)
-
Just asked on the WordPress forums, and one of the replies was to install this plugin: http://wordpress.org/extend/plugins/wp-robots-txt/
It just seems to add the privacy tab again so I can set the setting to "I would like my blog to be visible to everyone, including search engines (like Google, Bing, Technorati) and archivers", like you first stated.
Will install it now and see how it goes.
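On the robots.txt question from the original post: there often isn't a physical file to find, since WordPress can serve one virtually at /robots.txt. Here's a quick way to sanity-check whether a given set of robots.txt rules would block crawlers, sketched in Python with its standard robots.txt parser (the rules and URL below are illustrative):

```python
from urllib.robotparser import RobotFileParser

def is_allowed(robots_txt: str, url: str, agent: str = "*") -> bool:
    """Return True if `agent` may fetch `url` under the given robots.txt rules."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, url)

# An empty "Disallow:" blocks nothing; "Disallow: /" blocks everything.
open_rules = "User-agent: *\nDisallow:"
closed_rules = "User-agent: *\nDisallow: /"

print(is_allowed(open_rules, "http://example.com/some-page/"))    # True
print(is_allowed(closed_rules, "http://example.com/some-page/"))  # False
```

Note that robots.txt only controls crawling; the noindex problem in this thread comes from a meta tag in the pages themselves, which is a separate mechanism.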
-
It could be that the older wordpress had a setting that this new version has decided to ignore. This is typical of programmers!
The next possibility is to look in the database, but the options part of the database is hard to read.
Another idea is to look in the code of the theme and hack it, so it is permanently index, follow, or just remove that altogether.
Maybe someone else has a better idea?
Alan
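On the database suggestion above: the old Privacy setting is stored as the `blog_public` option in the `wp_options` table ('1' = visible to search engines, '0' = blocked). Below is a sketch of the lookup, using SQLite purely to illustrate the query against a stand-in table (a real WordPress install uses MySQL, so you'd run the same SQL through your usual database client):

```python
import sqlite3

# Stand-in for a WordPress database: wp_options stores settings as name/value pairs.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE wp_options (option_name TEXT, option_value TEXT)")
db.execute("INSERT INTO wp_options VALUES ('blog_public', '0')")  # '0' = search engines blocked

# The same SELECT works against a live WP database.
row = db.execute(
    "SELECT option_value FROM wp_options WHERE option_name = 'blog_public'"
).fetchone()
print("blog_public =", row[0])  # '0' here is what triggers the noindex tag

# Flipping it to '1' is what the Settings -> Reading option does under the hood.
db.execute("UPDATE wp_options SET option_value = '1' WHERE option_name = 'blog_public'")
```

Editing the option through the WordPress admin (or a plugin) is safer than touching the database directly, but this shows where the setting actually lives.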
-
If I remember correctly, my pages were still not being indexed before I installed the All in One SEO Pack.
Here are my settings for the SEO pack: http://gyazo.com/6b4dddacb307bdacfdd7741894e0356b.png?1357647136
As you can see, they are as you explained.
Any other ideas?
-
Yes, I would have them indexed in that case too.
I think it is the categories that are noindex.
I think this is an 'All in One SEO Pack' adjustable feature.
In the setup for that, look for a checkbox labelled "use noindex for categories" and uncheck it if it is checked.
If that isn't it, I don't know the answer.
-
Thanks again for your reply, Alan.
Currently the site is still in the final stages of development, and once my automated system is built and implemented I won't need to change any of the index pages, other than posting a blog once a week or so.
So I think it would benefit me more to have each of my index pages indexed, but I'm not sure how to go about allowing that since WordPress' update.
My plugins are all highly downloaded, and I use the 'All in One SEO Pack' - could that be the problem? I've gone through all the settings and the noindex boxes are all unticked.
Perhaps it could be the initial theme I used?
-
Thank you Mark
Nice looking site!
Your front page is index, follow; index pages are noindex, follow; final pages are index, follow.
I do something very close to this on my site.
Often, index pages are useless to searchers: the index page changes so quickly that by the time the info gets into a search result, it is no longer on that page, and the searcher will either click away, cursing you and your site, or go looking through a few index pages and then curse you when they can't find what they wanted.
So I agree with the way you're doing that, if the content really does change quickly. If the index pages are just collectors of groups of items, then index, follow would be better, provided you have enough text on the page to make it worthwhile.
As to how to make that happen, it isn't obvious. I need to upgrade some of my sites to 3.5.
It could be that you have a plugin or a "custom field" that sets the index/follow. I suggest you edit a post and a page and scroll down to see if there is a field that handles it, such as "robotsmeta" set to "noindex,follow" for those pages.
-
Hi Alan, thanks for your quick response.
My website is: www.FifaCoinStore.com
Here is a printscreen of my settings: http://gyazo.com/0cd3d21c5ec1797873a5c7cacc85a588.png?1357600674
I believe the WordPress 3.5 update removed this privacy option, which is why I can't seem to find it. I read this page from WordPress on it: http://codex.wordpress.org/Settings_Reading_Screen
Or am I just looking in the wrong place?
Thanks again
-
Hello Mark.
Please send me a bitly-shortened link to your website so I can see what you are seeing.
It probably isn't your robots file.
First try this.
In the Admin section, you should see "Settings" in the left navigation.
Click that and you should see "Privacy".
Click that and you should see two radio buttons:
<label for="blog-public">I would like my blog to be visible to everyone, including search engines (like Google, Bing, Technorati) and archivers</label>
<label for="blog-norobots">I would like to block search engines, but allow normal visitors</label>
Obviously, choose the top one and save it.
Then, refresh your front page or inner pages and look in the code to see if it still says noindex
If you have a cache, you will need to flush it.
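To automate the "look in the code" step, here is a small sketch in Python that scans a page's HTML for a robots meta tag containing noindex. In practice you'd feed it the source fetched from your own URLs; the sample strings below are just for illustration:

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collects the content of any <meta name="robots"> tags in a page."""

    def __init__(self):
        super().__init__()
        self.robots_values = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            self.robots_values.append((a.get("content") or "").lower())

def has_noindex(html: str) -> bool:
    """True if the page carries a robots meta tag that includes 'noindex'."""
    finder = RobotsMetaFinder()
    finder.feed(html)
    return any("noindex" in value for value in finder.robots_values)

blocked = '<head><meta name="robots" content="noindex,nofollow" /></head>'
open_page = '<head><meta name="robots" content="index,follow" /></head>'
print(has_noindex(blocked))    # True
print(has_noindex(open_page))  # False
```

Running something like this against the front page, a category page, and a single post after each settings change (and cache flush) makes it obvious which pages are still blocked.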