Partial Match or RegEx in Search Console's URL Parameters Tool?
-
So I currently have approximately 1000 of these URLs indexed, when I only want roughly 100 of them.
Let's say the URL is www.example.com/page.php?par1=ABC123=&par2=DEF456=&par3=GHI789=
All the indexed URLs follow that same kind of format, but I only want to index the URLs where par1 starts with ABC (the full value could be ABC123 or ABC456 or whatever). Using the URL Parameters tool in Search Console, I can ask Googlebot to only crawl URLs with a specific value. But is there any way to get a partial match, using regex maybe?
Am I wasting my time with Search Console, and should I just disallow any page.php without par1=ABC in robots.txt?
-
No problem
Hope you get it sorted!
-Andy
-
Thank you!
-
Haha, I think the train passed the station on that one. I would have realised eventually... XD
Thanks for your help!
-
Don't forget that . & ? have a specific meaning within regex - if you want to use them for pattern matching you will have to escape them. Also be aware that not all bots are capable of interpreting regex in robots.txt - you might want to be more explicit with the user agent and only use the regex for Googlebot.
User-agent: Googlebot
# disallowing page.php and any parameters after it
Disallow: /page.php
# but leaving anything that starts with par1=ABC
Allow: /page.php?par1=ABC
Dirk
-
Ah sorry I missed that bit!
-Andy
-
Disallowing them would be my first priority really, before removing from index.
The trouble with this is that if you disallow first, Google won't be able to crawl the pages to act on the noindex. If you add the noindex flag first, Google won't index them the next time it comes a-crawling, and then you will be good to disallow them.
I'm not actually sure of the best way for you to get the noindex into the page header of those pages though.
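For reference, the tag itself is just the standard robots meta tag, placed in the <head> of each page:
<meta name="robots" content="noindex">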
-Andy
-
Yep, have done. (Briefly mentioned in my previous response.) It doesn't pass.
-
I thought so too, but according to Google the trailing wildcard is completely unnecessary - rules are prefix matches by default, so * only needs to be used mid-URL.
-
Hi Andy,
Disallowing them would be my first priority really, before removing from index. I didn't want to remove them before blocking Google from crawling them, in case they get added back again next time Google comes a-crawling, as has happened before when I've simply removed a URL here and there. Does that make sense, or am I getting myself mixed up here?
My other hack of a solution would be to check the URL in page.php, and if the URL doesn't include par1=ABC then insert a noindex meta tag. (Not sure if that would work well or not...)
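A rough sketch of that check in page.php, assuming the parameter name and ABC prefix from the example URLs above, and that it runs while the <head> is being output:
<?php
// Hypothetical sketch: noindex any page.php URL unless par1
// starts with the allowed "ABC" prefix (e.g. ABC123, ABC456).
$par1 = isset($_GET['par1']) ? $_GET['par1'] : '';
if (strpos($par1, 'ABC') !== 0) {
    // par1 is missing or not an ABC value: ask Google not to index it
    echo '<meta name="robots" content="noindex">';
}
?>
Note the pages would need to stay crawlable (not disallowed) long enough for Google to see the tag, per the ordering point above.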
-
My guess would be that this line needs an * at the end.
Allow: /page.php?par1=ABC*
-
Sorry Martijn, just to jump in here for a second - Ria, you can test this via the robots.txt Tester in Search Console before going live, to make sure it works.
-Andy
-
Hi Martijn, thanks for your response!
I'm currently looking at something like this...
user-agent: *
# disallowing page.php and any parameters after it
disallow: /page.php
# but leaving anything that starts with par1=ABC
allow: /page.php?par1=ABC
I would have thought that you could disallow things broadly like that and give an exception, as you can with files in disallowed folders. But it's not passing Google's robots.txt Tester.
One thing that's probably worth mentioning is that there are only two values of the par1 parameter that I actually want to allow. For example's sake, ABC123 and ABC456. So it would need to be either a partial match or a "this or that" kind of deal, disallowing everything else.
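If it really is just those two values, one alternative sketch (using the placeholder values above) skips pattern matching entirely and lists each allowed value explicitly:
user-agent: *
# block page.php and every parameter combination by default
disallow: /page.php
# explicitly allow the two permitted par1 values
allow: /page.php?par1=ABC123
allow: /page.php?par1=ABC456
Because robots.txt rules are prefix matches and Google resolves Allow/Disallow conflicts in favour of the most specific (longest) matching rule, the two allow lines should win over the broad disallow for those URLs.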
-
Hi Ria,
I have never tried regular expressions in this way, so I can't tell you if this would work or not.
However, if all 1000 of these URLs are already indexed, just disallowing access won't then remove them from Google. You would ideally place a noindex tag on those pages and let Google act on it; then you will be good to disallow. I am pretty sure there is no option to noindex under the URL Parameters tool.
I hope that makes sense?
-Andy
-
Hi Ria,
What you could do, though it also depends on the rest of your structure, is disallow these URLs based on the parameters. In a worst-case scenario you could disallow all of these URLs and then add an exception Allow rule as well, to make sure you still have the right URLs being indexed.
Martijn.