Crawl solutions for landing pages that don't contain a robots.txt file?
-
My site (www.nomader.com) is currently built on Instapage, which does not offer the ability to add a robots.txt file. I plan to migrate to a Shopify site in the coming months, but for now the Instapage site is my primary website. In the interim, would you suggest that I manually request a Google crawl through the search console tool? If so, how often? Any other suggestions for countering this Meta Noindex issue?
-
No problem Tom. Thanks for the additional info — that is helpful to know.
-
Bryan,
I’m glad that you found what you were looking for.
I must have missed the part about the site being 100% Instapage. When you said CMS, I thought you meant something else running alongside Instapage; I think of Instapage as a landing-page builder, not a CMS.
To answer your question about Google Search Console and how often you need to request that Google index your site:
First, make sure you have all 5 properties set up in Google Search Console:
your domain, plus the http://www., http://, https://www., and https:// variants:
- nomader.com
- https://www.nomader.com
- https://nomader.com
- http://www.nomader.com
- http://nomader.com
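The list above is just the four scheme/host combinations plus the bare domain; a quick sketch (a hypothetical helper, not part of any Google tool) that generates them so you can check your Search Console setup:

```python
def search_console_properties(domain: str) -> list[str]:
    """Return the domain property plus the four URL-prefix
    variants worth registering in Google Search Console."""
    bare = domain.removeprefix("www.")
    return [bare] + [
        f"{scheme}://{host}"
        for scheme in ("https", "http")
        for host in (f"www.{bare}", bare)
    ]

print(search_console_properties("nomader.com"))
# → ['nomader.com', 'https://www.nomader.com', 'https://nomader.com',
#    'http://www.nomader.com', 'http://nomader.com']
```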
You should not have to request indexing once your pages are in Google's index; there is no timeline on which you need to re-request it.
Use Search Console's index coverage reports to see whether you need to make a request, and keep an eye out for notifications.
Times you should request a Google crawl: when adding new unlinked pages, when making big changes to your site, when adding pages without an XML sitemap, or when fixing problems / testing.
Since you said you're going to be using Shopify:
Just before you go live on Shopify, you should make an XML sitemap of the Instapage site you're running now.
You can do it for free using https://www.screamingfrog.co.uk/seo-spider/
Name it /sitemap_ip.xml or /sitemap2.xml and upload it to Shopify.
Make sure it does not share a name with your Shopify XML sitemap, /sitemap.xml, so both will work.
Submit /sitemap_ip.xml to Search Console, then add the Shopify /sitemap.xml.
You can run multiple XML sitemaps as long as they do not overlap.
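You can submit the two sitemaps separately in Search Console, or, assuming your platform lets you upload one more file, point a sitemap index at both. A sketch of generating such an index with Python's standard library (the filenames are just the ones suggested above):

```python
from xml.etree import ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap_index(sitemap_urls):
    """Build a sitemap index document pointing at each child sitemap."""
    root = ET.Element("sitemapindex", xmlns=SITEMAP_NS)
    for url in sitemap_urls:
        entry = ET.SubElement(root, "sitemap")
        ET.SubElement(entry, "loc").text = url
    return ET.tostring(root, encoding="unicode")

index_xml = build_sitemap_index([
    "https://www.nomader.com/sitemap.xml",     # Shopify's own sitemap
    "https://www.nomader.com/sitemap_ip.xml",  # the Instapage snapshot
])
print(index_xml)
```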
Just remember: never add non-200 pages (404s, 30x redirects) or noindex/nofollow pages to an XML sitemap. Screaming Frog will ask whether you want to include them when you're making the sitemap.
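Those inclusion rules boil down to a simple filter. This is illustrative logic only (the URL list is made up, and Screaming Frog's own prompts do the same job for you):

```python
def belongs_in_sitemap(status_code: int, has_noindex: bool) -> bool:
    """A URL belongs in an XML sitemap only if it returns 200 OK
    and is not excluded from the index by a meta robots noindex."""
    return status_code == 200 and not has_noindex

# (path, HTTP status, meta noindex?) - hypothetical crawl results
crawled = [
    ("/", 200, False),
    ("/help", 200, False),
    ("/old-page", 404, False),   # non-200: leave out
    ("/donations", 200, True),   # noindex: leave out
    ("/promo", 301, False),      # redirect: leave out
]
keep = [path for path, status, noindex in crawled
        if belongs_in_sitemap(status, noindex)]
print(keep)  # → ['/', '/help']
```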
Shopify will make its own XML sitemap, and keeping the current site as a second sitemap will help make sure your changes do not hurt the Instapage part of the Shopify site.
https://support.google.com/webmasters/answer/34592?hl=en
Adding an XML sitemap is a smart move.
I hope that was of help, and sorry about missing what you meant earlier.
respectfully,
Tom
-
Thanks so much for your thoughtful, detailed response. That answers my question.
-
Bryan,
If I understand your intent, you want your pages indexed. I see that your site has 5 pages indexed (/, /help, /influencers, /wholesale, /co-brand). And that you have some other pages (e.g. /donations), which are not indexed, but these have "noindex" tags explicitly in their HEAD sections.
Not having a robots.txt file is equal to having a robots.txt file with a directive to allow crawling of all pages. This is per http://www.robotstxt.org/orig.html, where they say "The presence of an empty "/robots.txt" file has no explicit associated semantics, it will be treated as if it was not present, i.e. all robots will consider themselves welcome."
So, if you have no robots.txt file, the search engine will feel free to crawl everything it discovers, and then whether or not it indexes those pages will be guided by presence or absence of NOINDEX tags in your HEAD sections. From a quick browse of your site and its indexed pages, this seems to be working properly.
Note that I'm referencing a distinction between "crawling" and "indexing". The robots.txt file provides directives for crawling (i.e. access discovered pages, and discovering pages linked to those). Whereas the meta robots tags in the head provide directives for indexing (i.e. including the discovered pages in search index and displaying those as results to searchers). And in this context, absence of a robots.txt file simply allows the search engine to crawl all of your content, discover all linked pages, and then rely on meta robots directives in those pages for any guidance on whether or not to index those pages it finds.
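That crawl-side behavior is easy to see with Python's standard-library robots.txt parser: parsing an empty (effectively absent) file leaves every URL fetchable, while an explicit blanket Disallow does not. A small sketch:

```python
from urllib.robotparser import RobotFileParser

# An empty robots.txt (the spec's stand-in for a missing one): no directives.
open_site = RobotFileParser()
open_site.parse([])
print(open_site.can_fetch("Googlebot", "https://www.nomader.com/help"))  # True

# Contrast with a file that blocks all crawling.
closed_site = RobotFileParser()
closed_site.parse(["User-agent: *", "Disallow: /"])
print(closed_site.can_fetch("Googlebot", "https://www.nomader.com/help"))  # False
```

Note that can_fetch only models crawling; whether a crawled page ends up indexed is still decided by the meta robots tags in the page itself.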
As for a sitemap, while they are helpful for monitoring indexation, and also provide help to search engines to discover all desired pages, in your case it doesn't look especially necessary. Again, I only took a quick look, but it seems you have your key pages all linked from your home page, and you have meta directives in pages you wish to keep out of the index. And you have a very small number of pages. So, it looks like you are meeting your crawl and indexation desires.
-
Hi Tom,
Unfortunately, Instapage is a proprietary CMS that does not currently support robots.txt or sitemaps. Instapage is primarily built for landing pages, not full websites, so that's their reasoning for not adding SEO support for basics like robots.txt and sitemaps.
Thanks anyway for your help.
Best,
-Bryan
-
Hi,
I see the problem now:
https://www.nomader.com/robots.txt
Your site does not have a robots.txt file. Upload one to the root of your server, or to the specific place your developer and/or CMS / hosting company recommends. I could not figure out what type of CMS you're using, if you're using one.
Make a robots.txt file using one of these:
http://tools.seobook.com/robots-txt/generator/
https://www.internetmarketingninjas.com/seo-tools/robots-txt-generator/exportrobots.php
https://moz.com/learn/seo/robotstxt
It will look like this below.
User-agent: *
Disallow:
Sitemap: https://www.nomader.com/sitemap.xml
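As a sanity check on whatever file the generators produce, Python's standard library can parse it and confirm it blocks nothing and exposes the sitemap (site_maps() needs Python 3.8+):

```python
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow:
Sitemap: https://www.nomader.com/sitemap.xml
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# An empty Disallow value means "block nothing".
print(rp.can_fetch("*", "https://www.nomader.com/help"))  # True
print(rp.site_maps())  # ['https://www.nomader.com/sitemap.xml']
```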
It looks like you're using JavaScript for your website?
https://builtwith.com/detailed/nomader.com
I am guessing you're not using a subdomain to host the landing pages?
If you are using a subdomain, you would have to create a robots.txt file for it as well, but from everything I can see you're using your regular domain, so you would simply create these files. I'm in a car on a cell phone, so I only did a quick check for an XML sitemap file, but I think you do have one:
https://www.nomader.com/sitemap.xml
You can use a tool called Screaming Frog SEO Spider; if your site is over 500 pages you will need to pay for it (approximately $200), but you will be able to create a wonderful sitemap with it. You can also create an XML sitemap by googling for XML sitemap generators. However, I would recommend Screaming Frog because you can separate out the images, and it's a very good tool to have.
Because you will need to generate a new sitemap whenever you update your site or add landing pages, it will have to be regenerated with Screaming Frog and uploaded to the same place on the server each time, unless you can create a dynamic sitemap with whatever infrastructure your website is using.
Here are the directions for adding your site to Google Search Console / Google Webmaster Tools:
https://support.google.com/webmasters/answer/34592?hl=en
If you need any help with any of this, please do not hesitate to ask; I am more than happy to help. You can also generate a sitemap in the old version of Google Webmaster Tools / Google Search Console.
Hope this helps,
Tom
-
Thanks for the reply Thomas. Where do you see that my site has the robots.txt file? As far as I can tell, it is missing. Instapage does not offer robots.txt as I mentioned in my post. Here's a community help page of theirs where this question was asked and answered: https://help.instapage.com/hc/en-us/community/posts/213622968-Sitemap-and-Robotx-txt
So in the absence of having a robots.txt file, I guess the only way to counter this is to manually request a fetch/index from Google console? How often do you recommend I do this?
-
You don't need to worry about Instapage and robots.txt: your site has the robots.txt, and Instapage is not set to noindex.
So yes, use Google Search Console to fetch / index the pages. It's very easy if you read the help information I posted below:
https://help.instapage.com/hc/en-us#
hope that helps,
Tom
-
If you cannot turn off “Meta Noindex” you cannot fix it with robots.txt; I suggest you contact the developer of the Instapage landing pages app. If it's locked to noindex as you said, that is the only option for countering a Meta Noindex pre-coded by the company.
I will look into this for you; I bet that you can change it, but not via robots.txt.
I will update this in the morning for you.
All the best,
Tom