Reason for robots.txt file blocking products on category pages?
-
Hi
I have a website with thousands of products. On the category pages, all the products are linked to with the parameter “?cgid” in the URL, but “?cgid” is also blocked in the robots.txt file for some reason. So I'm thinking it's stopping all my products from getting crawled by Google.
Am I right here? Is there any reason why a website would want to block so many URLs? I'm only here a week and the site's getting great traffic, so I don't want to go breaking it!
Thanks
-
Thanks again AL123al!
I'm concerned about my internal linking because of this. I've always tried to keep important pages within three clicks of the homepage. My worry is that while a user can reach these products within three clicks of the homepage, those paths are blocked to Googlebot.
So the product URLs are only getting discovered through the sitemap, which seems hugely inefficient. I think I have to decide whether opening up these pages to improve the linking structure for the product pages is more important than the crawl budget that would be wasted on the extra URLs Google could then crawl.
-
Hello,
The canonical product URLs will be getting crawled just fine, as they are not blocked in the robots.txt. Without knowing your setup completely, I think the people before you were trying to stop all the duplicate parameter URLs from being crawled and leave Google to crawl only the canonicals - which is what you want.
If you remove the parameter from robots.txt, then Google will crawl everything, including the parameter URLs. This will waste crawl budget, so it's better that Google only crawls the canonicals.
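For illustration, a rule along the lines of Disallow: /*?cgid (a hypothetical pattern - I'm only guessing at what's actually in your file) is the sort of thing that produces this behaviour. Here's a rough Python sketch of how that style of wildcard rule plays out, using a simplified approximation of Google's pattern matching rather than an official parser:

```python
import re

def blocked_by_rule(path: str, rule: str) -> bool:
    """Very rough approximation of Google-style robots.txt matching:
    '*' matches any run of characters and '$' anchors the end of the URL."""
    pattern = re.escape(rule).replace(r"\*", ".*").replace(r"\$", "$")
    return re.match(pattern, path) is not None

# Hypothetical rule - a guess at the kind of line that may be in the file
rule = "/*?cgid"

urls = [
    "/product-category/ladies-shoes",                        # canonical category page
    "/product-category/ladies-shoes?cgid-product=19",        # parameterised duplicate
    "/product-category/ladies-shoes?cgid=shoes&sort=price",  # parameterised duplicate
]

for path in urls:
    status = "blocked" if blocked_by_rule(path, rule) else "crawlable"
    print(f"{status:9}  {path}")
```

Run against your real URLs, the clean category pages should come back as crawlable while anything carrying the parameter is blocked.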
Regarding the sitemap: being present in the sitemap helps Googlebot decide what to prioritise crawling, but it won't stop it finding other URLs if there is good internal linking.
-
Thanks AL123al! The base URLs (www.example.com/product-category/ladies-shoes) do seem to be getting crawled here and there, and some are ranking, which is great. But I think the only place they can be discovered is the sitemap, which has over 28,000 URLs on one page (another thing I need to fix)!
So if Googlebot reaches a parameter URL through the category pages (www.example.com/product-category/ladies-shoes?cgid...) and sees it's blocked, I'm guessing it can't see that the page is important to us (from the site hierarchy) or see the canonical tag, so I'm presuming this is seriously damaging our ability to get products ranked.
In Screaming Frog, 112,000 URLs get crawled and 68% of them are blocked by robots.txt. 17,000 are URLs containing "?cgid", which I don't think is too many for Googlebot to crawl; the website has pretty good authority, so I think we get a fairly deep crawl.
So I suppose what I really want to know is: will removing "?cgid" from the robots.txt file really damage the site? In my opinion, I think it'll really help.
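In case it's useful, this is roughly how a count like that can be pulled from a Screaming Frog export (just a sketch - it assumes the default internal export saved as internal_all.csv with an "Address" column, so adjust the filename and column if yours differ):

```python
import csv

# Sketch: count how many crawled URLs carry the ?cgid parameter.
# Assumes a Screaming Frog internal export saved as internal_all.csv
# with an "Address" column (names may differ between versions).
with open("internal_all.csv", newline="", encoding="utf-8") as f:
    addresses = [row["Address"] for row in csv.DictReader(f)]

cgid_urls = [u for u in addresses if "?cgid" in u]

print(f"URLs crawled:          {len(addresses)}")
print(f"URLs containing ?cgid: {len(cgid_urls)}")
print(f"Share with ?cgid:      {len(cgid_urls) / len(addresses):.1%}")
```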
-
It looks like the product URLs are having a ?cgid parameter appended - there may be other parameters attached to the end of each URL as well, like the example below:
e.g. www.example.com/product-category/ladies-shoes?cgid-product=19&controller=product etc
but the canonical URL is www.example.com/product-category/ladies-shoes
These products may have a canonical pointing to the base URL, which means there won't be any problem with duplicates being indexed. So far, so good.
Except... Google still has to crawl each of these parameter URLs to find the canonical. On a huge website, this means crawl budget is being consumed by unnecessary crawling of these parameterised URLs.
You can tell Google not to crawl the parameter URLs in Search Console (at least in the old version you can). But you can also stop Google crawling these URLs unnecessarily by blocking them in robots.txt, provided you are sure the parameters don't change how the page appears in search.
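If you want to sanity-check what the live file is actually disallowing before changing anything, a quick script like this will do it (a rough sketch using only the Python standard library - swap www.example.com for the real domain):

```python
from urllib.request import urlopen

# Sketch: fetch the live robots.txt and print any Disallow rules
# that mention the cgid parameter (replace example.com with the real domain).
with urlopen("https://www.example.com/robots.txt") as resp:
    robots_txt = resp.read().decode("utf-8", errors="replace")

for line in robots_txt.splitlines():
    rule = line.strip()
    if rule.lower().startswith("disallow") and "cgid" in rule.lower():
        print(rule)
```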
So, long story short, that is why you may be seeing the URLs with parameters blocked in robots.txt. The canonical URLs will be getting crawled just fine, since they don't have any parameters and so aren't blocked.
Hope that makes sense?
-
Yes, it's in the robots.txt - that's the problem. Someone had to deliberately put it in there, but I've no idea why they would.
-
Did you check your robots.txt file? Or check whether a plugin is creating this problem?