Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Good to use disallow or noindex for these?
-
Hello everyone,
I am reaching out for advice on a few technical SEO aspects of my website, and I would greatly appreciate your insights.
Below are the specific areas I would like to discuss:
a. Double and triple filter pages:
I have identified certain URLs on my website that have a canonical tag pointing to the main /quick-ship page. These URLs are as follows:
https://www.interiorsecrets.com.au/collections/lounge-chairs/quick-ship+black
https://www.interiorsecrets.com.au/collections/lounge-chairs/quick-ship+black+fabric
Considering the need to optimize my crawl budget, would it be advisable to disallow or noindex these pages? My understanding is that this would stop search engines from wasting resources on crawling and indexing duplicate, filtered content. I would appreciate your guidance on this.
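For reference, the canonical tag on these filtered pages points back to the main collection page along these lines (the exact href here is assumed from the description above, not copied from the site):

  <link rel="canonical" href="https://www.interiorsecrets.com.au/collections/lounge-chairs/quick-ship" />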
b. Page URLs with parameters:
I have noticed that some of my page URLs include parameters such as ?variant and ?limit. These URLs already have canonical tags in place, but would it still be recommended to disallow or noindex them to further conserve crawl budget? Again, my understanding is that this would prevent search engines from spending resources on redundant variations of the same content.
Additionally, I would welcome any suggestions on internal linking strategies suited to my website's structure and content.
Thank you in advance for your time and expertise. If you require any further information or clarification, please let me know. I look forward to hearing from you.
Cheers!
-
@williamhuynh You're right to pay attention to the parameters in your URLs, as they can affect how search engines crawl and index your site. It's important, however, to handle them strategically.
Using canonical tags on these pages is already a good move. It signals to search engines which version of the page should be treated as the main one. Canonicalization helps avoid potential duplicate content issues and makes your website easier to understand from a search engine's perspective.
However, I'd be careful about disallowing these pages or using a "noindex" tag. Disallowing these URLs in your robots.txt file might seem like a good way to save crawl budget, but it can have unintended side effects: search engines can't access a disallowed URL at all, which means they never see the canonical tag on it. Any unique backlinks or user engagement signals those parameterized URLs have earned then can't be consolidated into your canonical pages.
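To make that concrete, a disallow rule for the filter URLs above would look something like this (the pattern is assumed from the example URLs and shown for illustration, not as a recommendation):

  User-agent: *
  # Blocks crawling of the +filter variants; crawlers then never see their canonical tags
  Disallow: /collections/*/quick-ship+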
As for the "noindex" approach, this tells search engines not to include the page in their index. The page can still be crawled, but if these pages have valuable backlinks or user engagement signals, you might be missing out on some SEO value by keeping them out of the index.
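For reference, that would be a robots meta tag in the head of each filtered page, typically:

  <!-- keeps the page out of the index while still allowing its links to be followed -->
  <meta name="robots" content="noindex, follow" />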
In my opinion, if your website is large and you're genuinely concerned about the crawl budget, a more suitable approach might be to use Google Search Console's URL Parameters tool. This tool lets you inform Google how to handle specific URL parameters.
Related Questions
-
Best redirect destination for 18k highly-linked pages
Technical SEO question regarding redirects; I appreciate any insights on the best way to handle this.
Situation: We're decommissioning several major content sections on a website, comprising ~18k webpages. This is a well-established site (10+ years) and many of the pages within these sections have high-quality inbound links from .orgs and .edus.
Challenge: We're trying to determine the best place to redirect these 18k pages. For user experience, we believe the best option is the homepage, which has a statement about the changes to the site and links to the most important remaining sections. It's also the most important page on the site, so the boost from the 301-redirected links doesn't seem bad. However, someone on our team is concerned that that many new redirected pages and links going to our homepage will trigger a negative SEO flag for the homepage, and recommends instead that they all go to our custom 404 page (which also includes links to important remaining sections). What's the right approach here to preserve the remaining SEO value of these soon-to-be-redirected pages without triggering Google penalties?
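For reference, a pattern-based 301 of an entire section to the homepage would look something like this in Apache (the /old-section/ path and domain are placeholders):

  # Apache: send everything under a decommissioned section to the homepage with a 301
  RedirectMatch 301 ^/old-section/ https://www.example.com/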
Technical SEO | davidvogel
-
Question about a Screaming Frog crawling issue
Hello, I have a very peculiar question about an issue I'm having when working on a website. It's a WordPress site and I'm using a generic plugin for title and meta updates. When I crawl the site through Screaming Frog, however, there seems to be a hard-coded title tag that I can't find anywhere, and the plugin updates don't get crawled. If anyone has any suggestions, that'd be great. Thanks!
Technical SEO | KyleSennikoff
-
Why product pages are throwing Missing field "image" and Missing field "price" in WordPress WooCommerce
I have a WordPress WooCommerce website where I have uploaded hundreds of products, but it's giving me errors in GSC under the merchant listings tab. When I tested it, it showed Missing field "image" and Missing field "price". I have done everything according to https://developers.google.com/search/docs/appearance/structured-data/product#merchant-listing-experiences and applied fixes, i.e. images are 800x800 and the price range is also there. What else can be done here?
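For comparison, a minimal Product JSON-LD snippet that satisfies both fields looks like this (all values are placeholders):

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Product",
    "image": ["https://example.com/images/product-800x800.jpg"],
    "offers": {
      "@type": "Offer",
      "price": "399.00",
      "priceCurrency": "AUD",
      "availability": "https://schema.org/InStock"
    }
  }
  </script>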
Technical SEO | Ravi_Rana
-
Does using a canonical with ?utm_source=gmb cause any issues?
All of our URLs in Google My Business are tagged with ?utm_source=gmb. This way, when people click on a link within a Google Maps listing, knowledge graph, etc., we know it came from there. I'm assuming using a canonical on all ?utm_source pages (we have others, including some in the index) won't cause any problems with this, correct? Since they're not technically traditional organic SERPs? Dumb question, I know, but better safe than sorry. Thanks.
Technical SEO | Alces
-
How to change noindex to index?
Hey, I've recently upgraded to a pro SEOmoz account and have realised I have 14,574 issues to do with "blocked by meta-robots": "This page is being kept out of the search engine indexes by the robots meta tag, which may have a value of "noindex", keeping this page out of the index." How can I change this so my pages get indexed? I read somewhere that I need to change my privacy settings, but that thread was 3 years old and the WP Dashboard has since been updated. Please let me know. Many thanks, Jamie. P.S. I'm using WordPress 3.5 and the XML sitemap plugin, and I have no idea where to look for this robots.txt file.
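For reference, the tag to look for in the page source is the robots meta tag WordPress outputs when search engines are discouraged; on a default install it looks like the line below, and in WordPress 3.5 the setting that controls it lives under Settings > Reading > Search Engine Visibility:

  <meta name='robots' content='noindex,nofollow' />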
Technical SEO | markgreggs
-
Allow or Disallow First in Robots.txt
If I want to override a Disallow directive in robots.txt with an Allow command, do I have the Allow command before or after the Disallow command? Example:
Allow: /models/ford/*/*/page*
Disallow: /models/*/*/*/page*
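For reference, per Google's documented robots.txt handling, order doesn't matter: the most specific matching rule (the one with the longest path) wins. So a file like the following allows the Ford pagination pages while blocking pagination elsewhere, regardless of which directive comes first (paths taken from the example above):

  User-agent: *
  # The longer Allow path is more specific, so it overrides the Disallow for /models/ford/... URLs
  Allow: /models/ford/*/*/page*
  Disallow: /models/*/*/*/page*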
Technical SEO | irvingw
-
NoIndex/NoFollow pages showing up when doing a Google search using the "site:" operator
We recently launched a beta version of our new website on a subdomain of our existing site. The existing site is www.fonts.com, with the beta living at new.fonts.com. We do not want Google to crawl the new site until it's out of beta, so we have added the following on all pages: <meta name="robots" content="noindex, nofollow" />. However, one of our team members noticed that Google is displaying results from new.fonts.com when doing a "site:new.fonts.com" search. Is it possible that Google is indexing the content despite the noindex, nofollow tags? We have double-checked the syntax and it seems correct except for the trailing "/". I know Google still crawls noindexed pages; however, the fact that they're showing up in search results using the site search syntax is unsettling. Any thoughts would be appreciated!
Technical SEO | ChrisRoberts-MTI
-
How do you disallow HTTPS?
I currently have a site (startuploans.org) that runs everything as HTTP. Recently we decided to start an online application to process loan apps, and for one section we configured SSL to work (https://www.startuploans.org/secure/). If I go to the HTTPS URL for any of my other pages, they show up. I was going to just 301 everything from HTTPS, but because the secure section is in a subdirectory I can't. Canonical URLs won't work either, because it's a totally different system and the pages are generated in an odd manner. It's really just one page that needs to be disallowed. Is there any way to disallow all HTTPS requests from robots.txt while keeping all the HTTP requests working as normal?
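Since robots.txt is fetched separately for each protocol and host, one common approach is to serve a different robots file to HTTPS requests. A minimal sketch for Apache (.htaccess; the robots_https.txt filename is a placeholder):

  # Serve a separate robots file for HTTPS requests only
  RewriteEngine On
  RewriteCond %{HTTPS} on
  RewriteRule ^robots\.txt$ /robots_https.txt [L]

Where robots_https.txt blocks everything:

  User-agent: *
  Disallow: /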
Technical SEO | WebsiteConsultants