Negative impact on crawling after uploading a robots.txt file on HTTPS pages
-
I experienced a negative impact on crawling after uploading a robots.txt file for my HTTPS pages. You can find both URLs below.
Robots.txt File for HTTP: http://www.vistastores.com/robots.txt
Robots.txt File for HTTPS: https://www.vistastores.com/robots.txt
I have disallowed all crawlers for HTTPS pages with the following syntax:

```
User-agent: *
Disallow: /
```

Does this matter? If I have done anything wrong, please give me some ideas on how to fix this issue.
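As a quick sanity check, rules like the ones above can be tested locally with Python's standard-library `urllib.robotparser`; the domain below just mirrors the one in the question, and this is only a sketch of how such a check might look:

```python
from urllib.robotparser import RobotFileParser

# The HTTPS robots.txt rules described in the question.
rules = """\
User-agent: *
Disallow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# With "Disallow: /", every crawler is blocked from every URL.
print(rp.can_fetch("Googlebot", "https://www.vistastores.com/"))  # False
print(rp.can_fetch("*", "https://www.vistastores.com/some-page"))  # False
```

Running this confirms that the rules block all user agents from the entire site, which matches the crawling drop described.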
-
Hi CP,
If you wish to use robots.txt to block crawlers, then your two robots.txt files should be as follows:
For your http protocol (http://vistastores.com/robots.txt):

```
User-agent: *
Allow: /
```
For the https protocol (https://vistastores.com/robots.txt):

```
User-agent: *
Disallow: /
```

Personally, I prefer to use the noindex meta tag for page blocking because it is a more reliable way of ensuring that the pages are not indexed. (Never try to use both at once, since blocking a page in robots.txt prevents crawlers from ever seeing the noindex tag.) This link explains the difference between the two: [Google Webmaster Tools Help](http://www.google.com/support/webmasters/bin/answer.py?answer=35302 "Robots blocking crawlers"):

> You can use a robots.txt file to request that search engines remove your site and prevent robots from crawling it in the future. (It's important to note that if a robot discovers your site by other means - for example, by following a link to your URL from another site - your content may still appear in our index and our search results. To entirely prevent a page from being added to the Google index even if other sites link to it, use a [noindex meta tag](http://www.google.com/support/webmasters/bin/answer.py?answer=61050).)

Hope that helps,

Sha
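To make the noindex recommendation concrete, here is a minimal sketch of how a page could be checked for a robots noindex meta tag using only Python's standard library; the sample markup is hypothetical:

```python
from html.parser import HTMLParser

class NoindexDetector(HTMLParser):
    """Flags pages that carry a <meta name="robots" content="...noindex..."> tag."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if a.get("name", "").lower() == "robots" and "noindex" in a.get("content", "").lower():
                self.noindex = True

# Hypothetical page that should stay out of the index.
page = '<html><head><meta name="robots" content="noindex, follow"></head><body></body></html>'
detector = NoindexDetector()
detector.feed(page)
print(detector.noindex)  # True
```

Remember that the crawler must be allowed to fetch the page in order to see this tag, which is exactly why combining it with a robots.txt block is counterproductive.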
Related Questions
-
301 redirects and impact on page authority
I need to restructure a section of my website, changing some page titles and moving some pages to other sections. This will change the URLs, but the CMS I use will automatically create 301 redirects so the old URLs still work. The question is: will this have any negative impact on page authority/PageRank? From what I've read, it seems 301s used to have a negative impact but don't anymore. Is that correct?
Intermediate & Advanced SEO | ciehmoz
-
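The CMS behavior described above can be sketched as a simple redirect map; the paths and the `resolve` helper here are purely illustrative, not the poster's actual CMS:

```python
# Hypothetical old-to-new URL map created after restructuring a site section.
# A CMS would typically generate these 301 redirects automatically.
REDIRECTS = {
    "/garden/old-page": "/outdoor/new-page",
    "/garden/moved": "/outdoor/moved",
}

def resolve(path):
    """Return (status, location) for a request path."""
    if path in REDIRECTS:
        # A 301 (permanent) redirect is the signal search engines use to
        # transfer ranking signals from the old URL to the new one.
        return 301, REDIRECTS[path]
    return 200, path

print(resolve("/garden/old-page"))   # (301, '/outdoor/new-page')
print(resolve("/outdoor/new-page"))  # (200, '/outdoor/new-page')
```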
Crawling/indexing of near duplicate product pages
Hi, I hope someone can help me out here. This is the current situation: we sell stones, gravel, sand, pebbles, etc. for gardens. I will use one type of pebble and its corresponding pages/URLs to illustrate my question: black beach pebbles. We have a 'top' product page for black beach pebbles on which you can find different quantities (ranging from 20 kg to 1,600 kg).

- There is no search volume related to the different quantities.
- The 'top' page does not link to the pages for the different quantities.
- The content on the quantity pages is not exactly the same (different price plus slightly different content), but a lot of it is.
- Most pages for the different quantities (about 95%) do not have internal links, but the sitemap does contain all of these pages. Because of that, Google frequently crawls them (I checked the log files) and has indexed them.

Problems:

- Google spends its time crawling irrelevant pages; our entire website is not that big, so these quantity URLs roughly double the total number of URLs.
- Having URLs in the sitemap that do not have an internal link is a problem in its own right.
- All these pages are indexed, so all sorts of gravel/pebbles have near duplicates.

My solution: remove these URLs from the sitemap, which will probably stop Google from regularly crawling these pages, and put a canonical on the quantity pages pointing to the top product page, which will hopefully remove the irrelevant (no search volume) near duplicates from the index.

My questions:

- To be able to see the canonical, Google will need to crawl these pages. Will Google still do that after removing them from the sitemap?
- Do you agree that these pages are near duplicates and that it is best to remove them from the index?
- A few of these quantity pages (a few percent) do have internal links because of a sale campaign, so there will be some (not many) internal links pointing to non-canonical pages. Would that be a problem?

Thanks a lot in advance for your help! Best!

Intermediate & Advanced SEO | AMAGARD
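The sitemap clean-up step described above could be sketched like this with Python's standard library; the URLs and the `-NNNkg` suffix convention are assumptions for illustration only:

```python
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

# Hypothetical sitemap containing a top product page and its quantity variants.
sitemap = f"""<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="{NS}">
  <url><loc>https://example.com/black-beach-pebbles</loc></url>
  <url><loc>https://example.com/black-beach-pebbles-20kg</loc></url>
  <url><loc>https://example.com/black-beach-pebbles-1600kg</loc></url>
</urlset>"""

def strip_quantity_urls(xml_text):
    """Drop quantity-variant URLs (assumed '-NNNkg' suffix) from a sitemap."""
    root = ET.fromstring(xml_text)
    for url in list(root):
        loc = url.find(f"{{{NS}}}loc").text
        if loc.rstrip("/").rsplit("-", 1)[-1].endswith("kg"):
            root.remove(url)
    return [u.find(f"{{{NS}}}loc").text for u in root]

print(strip_quantity_urls(sitemap))  # ['https://example.com/black-beach-pebbles']
```

Note that removing a URL from the sitemap does not block crawling; Google can still reach the quantity pages through any remaining internal links, which is what lets it see the canonical tag.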
Will disallowing URL's in the robots.txt file stop those URL's being indexed by Google
I found a lot of duplicate title tags showing in Google Webmaster Tools. When I visited the URLs that these duplicates belonged to, I found that they were just images from a gallery that we didn't particularly want Google to index; there is no benefit to the end user in these image pages being indexed. Our developer has told us that these URLs are created by a module and are not "real" pages in the CMS. They would like to add the following to our robots.txt file:

```
Disallow: /catalog/product/gallery/
```

QUESTION: If these pages are already indexed by Google, will this adjustment to the robots.txt file help to remove them from the index? We don't want these pages to be found.
Intermediate & Advanced SEO | andyheath
-
Dynamic pages
Hello Team, how can we create dynamic pages (or add more pages to a website) while maintaining SEO standards?
Intermediate & Advanced SEO | Obbserv
-
I have 2 keywords I want to target, should I make one page for both keywords or two separate pages?
My team sells sailboats and pontoon boats all over the country. So while they are both boats, the target market is two different types of people... I want to make a landing page for each state so if someone types in "Pontoon Boats for sale in Michigan" or "Pontoon boats for sale in Tennessee," my website will come up. But I also want to come up if someone is searching for sailboats for sale in Michigan or Tennessee (or any other state for that matter). So my question is, should I make 1 page for each state that targets both pontoon boats and sailboats (total of 50 landing pages), or should I make two pages for each state, one targeting pontoon boats and the other sailboats (total of 100 landing pages). My team has seen success targeting each state individually for a single keyword, but have not had a situation like this come up yet.
Intermediate & Advanced SEO | VanMaster
-
Using Meta Header vs Robots.txt
Hey Mozzers, I am working on a site that has search-friendly parameters for its faceted navigation; however, this makes it difficult to identify the parameters in a robots.txt file. I know that using the robots.txt file is highly recommended and powerful, but I am not sure how to do this when facets use common words such as sizes. For example, a filtered URL may look like www.website.com/category/brand/small.html. Brand and size are both facets. Brand is a great filter, and size is very relevant for shoppers, but many products include "small" in the URL, so it is tough to isolate that filter in the robots.txt. (I hope that makes sense.) I am able to identify problematic pages and edit the page head, so I can add a meta robots noindex tag on any page that is causing these duplicate issues. My question is: is this a good idea? I want bots to crawl the facets, but indexing all of the facets causes duplicate issues. Thoughts?
Intermediate & Advanced SEO | evan89
-
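The meta-robots approach described in that question could be sketched as follows; the size vocabulary and URL convention are hypothetical assumptions, since the real site's facet names aren't given:

```python
# Hypothetical rule for faceted URLs like /category/brand/small.html:
# brand facets stay indexable, size facets get a meta robots noindex.
SIZE_FACETS = {"small", "medium", "large"}  # assumed size vocabulary

def meta_robots_for(path):
    """Return the meta robots value to emit in the page <head>."""
    last = path.rstrip("/").split("/")[-1].removesuffix(".html")
    if last in SIZE_FACETS:
        # The page stays crawlable (not blocked in robots.txt) so the tag
        # can be seen, but is kept out of the index to avoid duplicates.
        return "noindex, follow"
    return "index, follow"

print(meta_robots_for("/category/brand/small.html"))  # noindex, follow
print(meta_robots_for("/category/brand.html"))        # index, follow
```

This per-page approach sidesteps the robots.txt problem entirely: no pattern matching on ambiguous words like "small" is needed, because the decision is made from the page's own facet data.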
Recovering from robots.txt error
Hello, a client of mine is going through a bit of a crisis. A developer (at their end) added Disallow: / to the robots.txt file. Luckily the SEOmoz crawl ran a couple of days after this happened and alerted me to the error. The robots.txt file was quickly updated, but the client has found the vast majority of their rankings have gone. It took a further 5 days for GWMT to register that the robots.txt file had been updated, and since then we have "Fetched as Google" and "Submitted URL and linked pages" in GWMT. GWMT still shows that the vast majority of pages are blocked in the "Blocked URLs" section, although the robots.txt file below it is now OK. I guess what I want to ask is: what else can we do to recover these rankings quickly? What time scales can we expect for recovery? More importantly, has anyone had any experience with this sort of situation, and is full recovery normal? Thanks in advance!
Intermediate & Advanced SEO | RikkiD22
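A simple monitoring check can catch an accidental `Disallow: /` before it does this kind of damage; here is one possible sketch using Python's standard-library robotparser (the domain is a placeholder):

```python
from urllib.robotparser import RobotFileParser

def site_is_blocked(robots_lines, probe_url="https://example.com/"):
    """Return True if the robots.txt rules block all crawlers from the homepage."""
    rp = RobotFileParser()
    rp.parse(robots_lines)
    return not rp.can_fetch("*", probe_url)

# The accidental rule the developer deployed:
print(site_is_blocked(["User-agent: *", "Disallow: /"]))  # True
# An empty Disallow value allows everything:
print(site_is_blocked(["User-agent: *", "Disallow:"]))    # False
```

Running a check like this on a schedule against the live robots.txt (and alerting when it flips to True) turns a weeks-long rankings crisis into a same-day fix.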