Robots.txt file
-
How do I get Google to stop indexing my old pages and start indexing my new pages, even months down the line?
Do I need to install a robots.txt file on each page?
-
What CMS are you using? If it is WordPress, there is a plugin called WP Robots Txt that allows you to configure how Google is asked to crawl your site; you can set it up so that your "archives" are only crawled monthly, and adjust it according to how often you have new content on the site.
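For context, that plugin essentially exposes the site's robots.txt for editing from the WordPress dashboard. A typical edited file might look like this sketch (the paths are common WordPress defaults, not taken from this thread):

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Note that robots.txt controls where crawlers may go rather than how often they visit; crawl frequency itself is influenced more by sitemaps and how often content actually changes.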
-
There should be only one robots.txt file for the entire site. Do you mean, "Do I need to add a separate line item within the robots.txt file that disallows each page I want deindexed?" If so, the answer is that this would not be the best solution.
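For reference, robots.txt lives once at the site root (for example https://www.example.com/robots.txt), and per-page disallows would look like this sketch (the paths are placeholders, not taken from the thread):

User-agent: *
Disallow: /old-page-1.html
Disallow: /old-page-2.html

Bear in mind that Disallow only blocks crawling; it does not remove URLs that are already indexed, and it prevents Google from seeing any noindex tag on those pages, which is part of why this approach is not the best deindexing tool.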
Other questions need to be asked as well:
1. Have you created a sitemap.xml file with only the new pages in it? And if so, have you submitted it to Google through Webmaster Tools? (There is a sketch after this list.)
2. Have you obtained any off-site links pointing to the new pages?
3. Have you performed a site:mydomain.com search in Google to see whether those new pages have been indexed?
4. Have you checked Google Webmaster Tools to see whether their system has come across serious errors on your site that might be related?
5. Does your site's robots.txt file currently block those new pages from being crawled?
All of these need to be answered to help determine a course of action.
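Regarding question 1, a minimal sitemap.xml listing only the new pages might look like this sketch (the URL and date are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/new-page.html</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
</urlset>

Pairing this with 301 redirects from the old URLs to their replacements is generally the cleanest way to shift indexing from old pages to new ones.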
Related Questions
-
No: 'noindex' detected in 'robots' meta tag
I'm getting an error in Search Console saying that pages on my site show "No: 'noindex' detected in 'robots' meta tag". However, when I inspect the pages' HTML, it does not show noindex; in fact, it shows index, follow. The majority of pages show the error and are not indexed by Google, and I am not sure why this is happening. Unfortunately I can't post images on here, but I've linked some URLs below. The page below shows the error above in Search Console: https://mixeddigitaleduconsulting.com/ As does this one: https://mixeddigitaleduconsulting.com/independent-school-marketing-communications/ However, this page does not have the error and is indexed by Google, even though its meta robots tag looks identical: https://mixeddigitaleduconsulting.com/blog/leadership-team/jill-goodman/ Any and all help is appreciated.
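One possibility worth checking (an assumption on my part, not something confirmed in the question): a noindex can also be delivered as an X-Robots-Tag HTTP response header, which never appears in the page's HTML. A quick way to look for it:

curl -sI https://mixeddigitaleduconsulting.com/ | grep -i x-robots-tag

If that prints a noindex value, the response header rather than the meta tag is what Search Console is reporting.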
-
Blocking pages in robots.txt that are under a redirected subdomain
Hi everyone, I have a lot of Marketo landing pages that I don't want to show in the SERPs. Adding a noindex meta tag to each page would be too much; I have thousands of pages. Blocking them in robots.txt could have been an option, BUT the subdomain homepage is redirected to my main domain (with a 302), so I may confuse search engines (should they follow the redirect, or should they block?). marketo.mydomain.com is redirected to www.mydomain.com, so disallow: / may be confusing in combination with the redirect. I don't have folders; all pages sit directly under the subdomain, so I can't block folders in robots.txt either. Has anyone else had this scenario, or any suggestions? I appreciate your thoughts here. Thank you Rachel
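For what it's worth, robots.txt is evaluated per host, so a file served at marketo.mydomain.com/robots.txt applies only to that subdomain and does not affect www.mydomain.com. A blanket block there would look like this sketch (the hostname is from the question; the directives are an assumption about the intended setup):

User-agent: *
Disallow: /

This only works if marketo.mydomain.com/robots.txt itself returns a 200 response rather than being caught by the homepage redirect.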
-
301 redirect file question
Hi everyone, I am creating a list of 301 redirects to give to a developer to put into Magento. I used Screaming Frog to crawl the site, but I have noticed that all of their URLs 302 to another page. I am wondering whether I should 301 the first URL to the URL on the new site, or the second. I am thinking the first, but would love some confirmation. Thank you!
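For context, the usual goal is to avoid redirect chains: each old URL should 301 directly to its final destination. If the server were Apache, one mapping might look like this sketch (the paths are placeholders; Magento also has its own URL rewrite table that a developer would typically use instead):

Redirect 301 /old-product.html https://www.example.com/new-product.html

Pointing the first URL straight at the new page, rather than through the intermediate 302, keeps each redirect to a single hop.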
-
Robots.txt checker
Google seems to have discontinued their robots.txt checker. Is there another tool that I can use to check my robots.txt file instead? Thanks!
-
Where Is This Being Appended to Our Page File Names?
I have worked over the last several months to eliminate duplicate page titles at our site. Below is one situation that I need your advice on. Google Webmaster Tools is reporting several of our pages with duplicate titles, such as this one. This is a valid page at our Web store: http://www.audiobooksonline.com/159179126X.html This is an invalid page that Google says is a duplicate of the one above: http://www.audiobooksonline.com/159179126X.html?gdftrk=gdfV2138_a_7c177_a_7c432_a_7c9781591791263 Where might the code ?gdftrk=.... be coming from? How do we get rid of it?
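One common remedy for parameter-driven duplicates like this (a suggestion, not something from the thread): a rel="canonical" tag in the head of the product page, so any ?gdftrk= variants consolidate to the clean URL:

<link rel="canonical" href="http://www.audiobooksonline.com/159179126X.html" />

Tracking parameters like this are typically appended by a product feed or affiliate service rather than by the site's own code.
-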
301 Redirects on PDF and DOCX files?
Hi, I have to rename many PDF and DOCX files. How can I implement 301 redirects on them, given that they are linked from any number of places? Regards, Shailendra Sial
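Since PDF and DOCX files are served directly rather than rendered through a CMS template, the redirects have to happen at the server level. On Apache, one renamed file might be handled like this sketch (the paths are placeholders):

Redirect 301 /docs/old-name.pdf /docs/new-name.pdf

For many files that share a naming pattern, a single mod_rewrite RewriteRule can express the whole mapping instead of one line per file.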
-
Robots.txt question
I want to block spiders from a specific part of my website (say, the /abc folder). In robots.txt, I would write: User-agent: * Disallow: /abc/ Do I have to include the last slash, or will this do: User-agent: * Disallow: /abc
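The trailing slash matters, because Disallow rules are prefix matches; here is a sketch of the difference (the /abc path is from the question):

User-agent: *
# Blocks only URLs under the /abc/ directory:
Disallow: /abc/
# Also blocks /abc.html, /abcdef, and anything else beginning with /abc:
Disallow: /abc

If the goal is just the folder's contents, the version with the trailing slash is the safer choice.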
-
Does RogerBot read URL wildcards in robots.txt
I believe that the Google and Bing crawlers understand wildcards in "disallow" URLs in robots.txt. Does Roger?
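For reference, the wildcard extension in question looks like this sketch (the patterns are placeholders; * matches any sequence of characters and $ anchors the end of the URL in the Google/Bing extension):

User-agent: *
Disallow: /*?sessionid=
Disallow: /*.pdf$

Wildcards are not part of the original robots.txt standard, so support has to be confirmed per crawler.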