Robots.txt for Facet Results
-
Hi
Does anyone know how to properly add facet URLs to robots.txt?
E.g. of our facet URLs -
Everything after the # will need to be blocked on all pages with a facet.
Thank you
-
Great, thank you!
-
This is the right answer.
A great way to check is to see whether you have multiple versions of that URL indexed - which you don't: https://www.google.com/search?q=site:http://www.key.co.uk/en/key/platform-trolleys-trucks
-
Google ignores everything after the hash to begin with, so there is nothing you need to block. Fragments are a clever way to pass parameters without having to worry about Google crawling every variation.
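This is easy to see from how URLs work: the fragment is split off before a request is ever made, so robots.txt (which matches against the requested path and query) never sees it. A minimal sketch with Python's standard library, using a made-up facet fragment:

```python
from urllib.parse import urldefrag

# Hypothetical facet URL - everything after the # is the fragment.
facet_url = "http://www.key.co.uk/en/key/platform-trolleys-trucks#facet:colour-blue"

# Clients strip the fragment before requesting, so only the base URL is
# ever fetched - there is nothing for a robots.txt rule to match against.
base_url, fragment = urldefrag(facet_url)
print(base_url)   # http://www.key.co.uk/en/key/platform-trolleys-trucks
print(fragment)   # facet:colour-blue
```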
Related Questions
-
Block session id URLs with robots.txt
Hi, I would like to block all URLs with the parameter '?filter=' from being crawled by including them in the robots.txt. Which directive should I use:
User-agent: *
Disallow: ?filter=
or
User-agent: *
Disallow: /?filter=
In other words, is the forward slash at the beginning of the disallow directive necessary? Thanks!
Intermediate & Advanced SEO | Mat_C
-
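The leading-slash question above can be sanity-checked with Python's `urllib.robotparser`. It is a simplified matcher (Google's parser additionally supports wildcards like `*` and `$`), but for plain prefix rules the behaviour is the same; `example.com` and the `red` value are placeholders:

```python
from urllib.robotparser import RobotFileParser

# The two candidate rule sets from the question: with and without the slash.
with_slash = ["User-agent: *", "Disallow: /?filter="]
without_slash = ["User-agent: *", "Disallow: ?filter="]

def allowed(rules, url):
    """Return True if the given rules permit crawling the URL."""
    rp = RobotFileParser()
    rp.parse(rules)
    return rp.can_fetch("*", url)

url = "https://www.example.com/?filter=red"

print(allowed(with_slash, url))     # False - rule matches, URL is blocked
print(allowed(without_slash, url))  # True - without the slash the prefix never matches
```

Rule paths are matched as prefixes of the requested path-plus-query, which always starts with `/` - so a rule without the leading slash can never match anything.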
Do we need to add a canonical tag with the mobile URL in each desktop version as a result of mobile-first indexing?
Hi, Do we need to add a canonical tag with the mobile URL in each desktop version as a result of mobile-first indexing? Thanks Roy
Intermediate & Advanced SEO | kadut
-
Facet Values as Anchor text
Hi, I've been reviewing our internal linking structure and have found that the facet/filter buttons on a category page are crawled and have anchor text for each link. For example, the anchor text to filter the product listing results to those under £50 would be: Less than £50.00 (15). This also has the source URL and destination URL of http://www.key.co.uk/en/key/lockers. I haven't come across this before - is this an issue?
Intermediate & Advanced SEO | BeckyKey
-
Meta Robots Tag: Index, Follow, Noodp, Noydir
When should the "Noodp" and "Noydir" meta robots tags be used? I have hundreds of URLs for real estate listings on my site that simply use "Index, Follow" without Noodp and Noydir. Should the listing pages also use Noodp and Noydir? All major landing pages use Index, Follow, Noodp, Noydir. Is this the best setting in terms of ranking and SEO? Thanks, Alan
Intermediate & Advanced SEO | Kingalan1
-
Omitted results
Google used to display all my pages; now most are under "repeat the search with the omitted results included." What does that mean? Does it predict something bad? All pages are unique.
Intermediate & Advanced SEO | Joseph-Green-SEO
-
Issue with Robots.txt file blocking meta description
Hi, Can you please tell me why the following error is showing up in the SERPs for a website that was just re-launched 7 days ago with new pages (301 redirects are built in)? "A description for this result is not available because of this site's robots.txt - learn more." Once we noticed it yesterday, we made some changes to the file and reduced the number of items in the disallow list. Here is the current robots.txt file:
# XML Sitemap & Google News Feeds version 4.2 - http://status301.net/wordpress-plugins/xml-sitemap-feed/
Sitemap: http://www.website.com/sitemap.xml
Sitemap: http://www.website.com/sitemap-news.xml
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Other notes: the site was developed in WordPress and uses the following plugins: WooCommerce, All-in-One SEO Pack, Google Analytics for WordPress, and XML Sitemap & Google News Feeds. Currently, in the SERPs, it keeps jumping back and forth between showing the meta description for the www domain and showing the error message (above). Originally, WP Super Cache was installed and has since been deactivated, removed from wp-config.php and deleted permanently. One other thing to note: we noticed yesterday that there was an old XML sitemap still on file, which we have since removed, and we resubmitted a new one via WMT. Also, the old pages are still showing up in the SERPs. Could it just be that this will take time, for Google to review the new sitemap and re-index the new site? If so, what kind of timeframes are you seeing these days for new pages to show up in SERPs? Days, weeks? Thanks, Erin
Intermediate & Advanced SEO | HiddenPeak
-
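One observation on the robots.txt quoted in the question above: with only /wp-admin/ and /wp-includes/ disallowed, regular pages should be crawlable, which suggests the SERP message reflects an earlier, stricter file that Google has not yet re-crawled. The current rules can be checked quickly with Python's `urllib.robotparser` (website.com is the question's placeholder domain):

```python
from urllib.robotparser import RobotFileParser

# The disallow rules quoted in the question above.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /wp-admin/",
    "Disallow: /wp-includes/",
])

# Regular pages are crawlable; only the two WordPress folders are blocked.
print(rp.can_fetch("*", "http://www.website.com/"))            # True
print(rp.can_fetch("*", "http://www.website.com/some-page/"))  # True
print(rp.can_fetch("*", "http://www.website.com/wp-admin/"))   # False
```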
Robots.txt file - How to block thousands of pages when you don't have a folder path
Hello. Just wondering if anyone has come across this and can tell me if it worked or not.
Goal: To block review pages.
Challenge: The URLs aren't constructed using folders; they look like this:
www.website.com/default.aspx?z=review&PG1234
www.website.com/default.aspx?z=review&PG1235
www.website.com/default.aspx?z=review&PG1236
So the first part of the URL is the same (i.e. /default.aspx?z=review) and the unique part comes immediately after - not as a folder. Looking at Google's recommendations, they show examples for blocking 'folder directories' and 'individual pages' only.
Question: If I add the following to the robots.txt file, will it block all review pages?
User-agent: *
Disallow: /default.aspx?z=review
Much thanks, Davinia
Intermediate & Advanced SEO | Unity
-
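The directive proposed in the question above is a prefix match, so folders are not required; this can be sanity-checked with Python's `urllib.robotparser` (a simplified matcher, but prefix rules behave the same as in Google's parser):

```python
from urllib.robotparser import RobotFileParser

# The directive proposed in the question above.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /default.aspx?z=review",
])

# All three review URLs share the /default.aspx?z=review prefix, so all are blocked.
review_urls = [
    "http://www.website.com/default.aspx?z=review&PG1234",
    "http://www.website.com/default.aspx?z=review&PG1235",
    "http://www.website.com/default.aspx?z=review&PG1236",
]
for u in review_urls:
    print(rp.can_fetch("*", u))   # False for each

# A page with a different query prefix (hypothetical) is still crawlable.
print(rp.can_fetch("*", "http://www.website.com/default.aspx?z=product&PG1"))  # True
```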
3 results for a site on page one?!?
Hi, I've never seen a website rank on page 1 in positions 2, 3 and 4 for one query before, with completely separate results as well. I thought Google limited the number of results from a single website on each page?
Intermediate & Advanced SEO | activitysuper
-