Blocking HTTPS from being crawled
-
I have an ecommerce site where the HTTPS versions of some pages are being crawled. I'm wondering if the solution below will fix the issue.
www.example.com will be my domain.
In the nav there is a login page, www.example.com/login, which redirects to https://www.example.com/login.
If I just disallowed /login in the robots.txt file, would that keep crawlers from following the redirect and indexing that stuff?
The redirect part is what I am questioning.
-
Correct. Once /login gets redirected to https://www.example.com/login, all the nav links etc. are HTTPS.
What I ended up doing was blocking /login in robots.txt, adding canonicals on the HTTPS pages, and nofollowing the /login link in the nav that triggers the redirect.
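For reference, here's roughly what I set up (a minimal sketch using the paths from above; the link text is a guess):

    # robots.txt
    User-agent: *
    Disallow: /login

    <!-- the nav link that triggers the redirect, now nofollowed -->
    <a href="/login" rel="nofollow">Login</a>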
Will see what happens now.
-
So, the "/login" page gets redirected to https: and then every link on that page goes secure and Google crawls them all? I think blocking the "/login" page is a perfectly good way to go here - cut the crawl path, and you'll cut most of the problem.
You could request removal of "/login" in Google Webmaster Tools, too. Sometimes, I find that robots.txt isn't great at removing pages that are already indexed. I would definitely add the canonical as well, if it's feasible. Cutting the path may not cut the pages that have already been indexed with https:.
Sorry, I'd actually reverse that:
(1) Add the canonicals, and let Google sweep up the duplicates
(2) A few weeks later, block the "/login" page
Sounds counter-intuitive, but if you block the crawl path to the https: pages first, then Google won't crawl the canonical tags on those versions. Use canonical to clean up the index, and then block the page to prevent future problems.
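For step (1), that means each https: page gets a canonical tag in its head pointing at its http: twin - something like this (a sketch; the path is just a placeholder):

    <link rel="canonical" href="http://www.example.com/some-page">

Once Google recrawls the https: versions and sees those tags, the http: URLs should consolidate in the index, and then the robots.txt block in step (2) keeps the problem from coming back.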
-
Gotcha. Yeah, I commented above that I was going to add a canonical as well as a meta noindex, but was curious how it would handle the redirect that was happening.
Thanks for your help.
-
Yeah, I was going to nofollow the link in the nav and add a meta tag, but was curious how the robots.txt file would handle this since the URL is a redirect.
Thanks for your input.
-
Are the pages being crawled under HTTPS also available under HTTP? If so, can you just add a canonical tag on those pages pointing to the HTTP version? That should fix it. And if your login page is the entry point, your fix will help as well. But then, as Rebekah said, what if somebody is linking to your HTTPS pages? I would suggest adding a canonical tag on these pages pointing to HTTP, if that makes sense and is doable.
-
You can disallow the HTTPS section in robots.txt, but remember that robots.txt isn't always a surefire way to keep an area of your site from being crawled. Keep in mind that robots.txt is read separately for each protocol, so blocking the HTTPS version means serving its own robots.txt at https://www.example.com/robots.txt. If you have other important content linked from the secure pages, be careful you're not blocking robots from it.
If this is linked to other places on the web, and the link doesn't include nofollow, search engines may still crawl the page. Can you change the link in your navigation to nofollow as well? I would also add a meta noindex tag to the page itself, and a canonical tag on the HTTPS versions pointing back to the HTTP pages.
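For what it's worth, a sketch of how those pieces might look (domain is the example.com placeholder from this thread, and the page path is hypothetical):

    # served only at https://www.example.com/robots.txt
    # (the http version keeps its normal rules)
    User-agent: *
    Disallow: /

    <!-- in the head of each https page -->
    <meta name="robots" content="noindex">
    <link rel="canonical" href="http://www.example.com/your-page">

One caveat, echoing the ordering point above: if the https robots.txt blocks crawling entirely, search engines will never see the noindex or canonical tags on those pages, so the tags should go in before the block does.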