Reason for robots.txt file blocking products on category pages?
-
Hi
I have a website with thousands of products. On the category pages, all the products are linked to with the parameter “?cgid” in the URL. But “?cgid” is also blocked in the robots.txt file for some reason, so I'm thinking it's stopping all my products from getting crawled by Google.
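The rule looks something like the lines below (I'm going from memory rather than pasting the live file, so treat the exact pattern as approximate):

User-agent: *
# block any URL containing ?cgid (Google-style wildcard matching)
Disallow: /*?cgid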
Am I right here? Is there any reason why a website would want to block so many URLs? I'm only here a week and the site's getting great traffic, so I don't want to go breaking it!!!
Thanks
-
Thanks again AL123al!
I would be concerned about my internal linking because of this problem. I've always wanted to keep important pages within 3 clicks of the Homepage. My worry here is that while a user can reach these products within 3 clicks of the Homepage, they're blocked to Googlebot.
So the product URLs are only getting discovered through the sitemap, which would be hugely inefficient? I think I have to decide whether opening up these pages will improve my linking structure enough to help Google crawl the product pages, and whether that's more important than the crawl budget wasted on all the extra parameter URLs it would then crawl.
-
Hello,
The canonical product URLs will be getting crawled just fine, as they are not blocked in the robots.txt. Without understanding your problem completely, I think the people before you were trying to stop all the duplicate parameter URLs from being crawled, leaving Google to crawl just the canonicals - which is what you want.
If you remove the parameter from robots.txt, then Google will crawl everything, including the parameter URLs. This will waste crawl budget, so it's better that Google is only crawling the canonicals.
Regarding the sitemap: being present in the sitemap will help Googlebot decide what to prioritise crawling, but it won't stop it finding other URLs if there is good internal linking.
-
Thanks AL123al! The base URLs (www.example.com/product-category/ladies-shoes) do seem to be getting crawled here & there, and some are ranking, which is great. But I think the only place they can get discovered is the sitemap, which has over 28,000 URLs in one file (another thing I need to fix)!
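(From what I've read, the usual fix for that is a sitemap index that splits the URLs across several smaller sitemap files - something like the sketch below, with made-up filenames - although the protocol technically allows up to 50,000 URLs per file, so 28,000 is within the limit:)

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.example.com/sitemap-products-1.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap-products-2.xml</loc>
  </sitemap>
</sitemapindex>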
So if Googlebot gets to a parameter URL through the category pages (www.example.com/product-category/ladies-shoes?cgid...) and sees it's blocked, I'm guessing it can't see that it's important to us (from the website hierarchy) or see the canonical tag, so I'm presuming it's seriously damaging our power to get products ranked.
In Screaming Frog, 112,000 URLs get crawled and 68% are blocked by robots.txt. 17,000 are URLs which contain "?cgid", which I don't think is too many for Googlebot to crawl; the website has pretty good authority, so I think we get a pretty deep crawl.
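(For anyone wanting to reproduce that kind of count from their own crawl, here's a rough Python sketch - "crawl_urls.txt" is a made-up filename standing in for a one-URL-per-line export from Screaming Frog:)

# count crawled URLs that contain the blocked cgid parameter
with open("crawl_urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]
cgid_urls = [u for u in urls if "?cgid" in u or "&cgid" in u]
print(f"{len(cgid_urls)} of {len(urls)} crawled URLs contain cgid")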
So I suppose what I really want to know is: will removing "?cgid" from the robots.txt file really damage the site? In my opinion, I think it'll really help.
-
This looks like the product URLs are having a parameter (?cgid) appended - there may be other stuff attached to the end of each URL too, like the example below:
e.g. www.example.com/product-category/ladies-shoes?cgid-product=19&controller=product etc
but the canonical URL is www.example.com/product-category/ladies-shoes
These products may have had a canonical pointing to the base URL, which means that there won't be any problem with duplicates being indexed. So all well and good.
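i.e. each parameter URL would carry something like this in its <head>, pointing back to the base URL (a sketch using your example URLs):

<link rel="canonical" href="https://www.example.com/product-category/ladies-shoes" />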
Except... Google still has to crawl each of these parameter URLs to find the canonical. On a huge website, this means that crawl budget is being consumed by unnecessary crawling of these parameterised URLs.
You can tell Google not to crawl the parameter URLs in Search Console (at least in the old version you can). But you can also stop Google crawling these URLs unnecessarily by blocking them in robots.txt, if you are sure that the parameters don't change how the page appears in search.
So, long story short, that is why you may see the URLs with parameters being blocked in robots.txt. The canonical URLs will be getting crawled just fine, since they don't have any parameters and hence aren't being blocked.
Hope that makes sense?
-
Yes, it's in the robots.txt, that's the problem. Someone had to deliberately put it in there, but I've no idea why they would.
-
Did you check your robots.txt file? Or check whether any plugin is creating this problem.