Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.
Need help understanding search filter URLs and meta tags
-
Good afternoon Mozzers,
One of our clients is a real estate agent, and their site has a search field that lets visitors filter listings by category. Currently, the URL structure creates a new URL for each filter option, and my Moz reports flag those URLs for missing meta data. However, the page is the same; only the filter options differ, so I am at a loss as to how to properly tag the site to optimize those URLs. Can I rel canonical the URLs, or use some alternate rel tag on them?
I have been looking for a solution for a few days now and, like I said, I am at a loss as to how to resolve these warnings, or whether I should even be concerned about them (obviously I should be concerned; they are warning messages for a reason).
Thank you for your assistance in advance!
-
Thank you Moosa, I really appreciate the help!
-
I believe what you are saying is that every time someone searches for a property, each new filter creates a new URL at runtime.
If this is the case, my advice is either to exclude the search URLs from the index or to rewrite the URLs so that all searches share a single URL.
For instance, instead of a new URL for every filter, try to use a single search URL such as http://www.domain.com/search.php. This way, the URL will be the same for all search combinations.
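As a rough illustration of the canonical option the original question asks about (the /search.php path is only an example, and whether a canonical or a noindex is the better fit depends on how the filtered pages are used), the tag on each filtered URL could look something like this:

<!-- placed in the head of every filtered search URL, pointing back to the base search page -->
<link rel="canonical" href="http://www.domain.com/search.php" />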
Hope this helps!
Related Questions
-
Unsolved: Ooops. Our crawlers are unable to access that URL
Hello, I have entered my site faroush.com but I got this error:
Ooops. Our crawlers are unable to access that URL - please check to make sure it is correct
What is the problem?
Moz Pro | ssblawton2533
-
Links to Your Site: No Data Available in Google Search Console
The site I am working on had not been submitted to Google Search Console (formerly Google Webmaster Tools). I submitted the site and a sitemap that auto-updates. Google is crawling the site daily (about 30 pages a day). Under Search Traffic > Links to Your Site, it shows that no data is available. I thought it was because the site was newly submitted, but it has been two months now. Moz seems to have the same issue: Moz does show inbound links, but there are some that we think should really help us that are not shown. For instance, the Dallas Morning News wrote this article. They have a high DA and PA. Also, iliveindallas.com has an article about us that is still on the front page. That was a few weeks ago, but it also does not show up in Moz or Google Search Console. We are trying to be selective about the links we are getting, making sure they are followed links from reputable sites, so it worries us that both Google and Moz are not showing them.
Moz Pro | TapGoods1
-
YouTube traffic page URL referral
Hello, how can I see which YouTube videos that have my domain in their description URL are driving traffic to my domain? I can see in GA how many visitors are coming to my domain from YouTube, but I can't see which YouTube video pages have driven that traffic. Any help?
Moz Pro | xeonet320
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 pages with duplicate content. Most of them come from dynamically generated URLs that have certain parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. I want to do this because, among these 380 pages, there are some other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:
User-agent: dotbot
Disallow: /*numberOfStars=0
User-agent: rogerbot
Disallow: /*numberOfStars=0
My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need to have an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | Blacktie
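Purely as a sketch of conventional robots.txt layout, and not a Moz-specific ruling: each crawler usually gets its own group, and many parsers start a new group at a User-agent line that follows a rule anyway, so the blank line between groups is mainly for readability. Written that way, the same directives would read:

User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

-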
What's the best way to eliminate "429 : Received HTTP status 429" errors?
My company website is built on WordPress. It receives very few crawl errors, but it does regularly receive a few (typically 1-2 per crawl) "429 : Received HTTP status 429" errors through Moz. Based on my research, my understanding is that my server is essentially telling Moz to cool it with the requests. That means it could be doing the same for search engines' bots and even visitors, right? This raises two questions for me, which I would greatly appreciate your help with: 1. Are "429 : Received HTTP status 429" errors harmful for my SEO? I imagine the answer is "yes", because Moz flags them as high-priority issues in my crawl report. 2. What can I do to eliminate "429 : Received HTTP status 429" errors? Any insight you can offer is greatly appreciated! Thanks,
Ryan
Moz Pro | ryanjcormier
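One mitigation worth testing, sketched here on the assumption that the crawler in question honours the non-standard Crawl-delay directive (Moz's rogerbot is generally said to respect it, while Googlebot ignores it), is to ask for a slower crawl in robots.txt:

# ask rogerbot to wait roughly 10 seconds between requests; the exact effect depends on the crawler
User-agent: rogerbot
Crawl-delay: 10

-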
Site Explorer - No Data Available for this URL
Hi all, I have just joined on the trial offer. I'm not sure if I can afford the monthly payments, but I'm hoping SEOmoz will show me that I also cannot afford to be without it! I am in the process of learning this site and flicking through each section to see what things do. However, when I enter my URL into Site Explorer, I get the message "No Data Available for this URL". My site should be crawlable, so how do I get to see data for my site(s)? I won't post the URL here, as the site has a slightly adult theme. If anyone could confirm whether I can post "slightly adult" sites, that would help. Best regards,
Jon
Moz Pro | jonny512379
-
Estimating the number of LRDs I need to outrank a competitor
I just ran a SERP/keyword difficulty report for a keyword I want one of my pages to rank for. I have also completed the on-page optimization, and now I am going to start building links. I would like to estimate how many linking root domains I need to outrank one of my competitors. These are the Moz data:
1. My page: Page Linking Root Domains: 0; Root Domain Linking Root Domains: 151
2. Competitor: Page Linking Root Domains: 1; Root Domain Linking Root Domains: 5,786
I don't really know which metric (page or root domain LRD) to rely on in order to make this estimate, and I would be glad for some help! To simplify the problem, assume that all other factors (code, on-page keyword use, social, etc.) are equal for both sites. Can I just get 2 LRDs pointing to that page in order to likely outrank my competitor, or do I need around 5,000 more links pointing to my site? I think an answer to this question could help a lot of users here, since I have seen similar questions/difficulties regarding the use of page LRD vs. root domain LRD.
P.S. None of the pages on my website currently rank in the top 100 for that keyword.
Moz Pro | He_Jo
-
Blogger Duplicate Content and Canonical Tags
Hello: I previously asked this question, but I would love to get more perspectives on the issue. In Blogger, an archive page and one or more label pages are created for each main post. Firstly, might Google, especially considering Blogger is their product, see the archive and label pages created in addition to the main post as partial duplicate content? The other dilemma is that each of these instances - main post, archive, label(s) - claims to be the canonical. Does anyone have any insight or experience with how Google treats these partial duplicates and the competing canonical claims on the same content (even though the archive and label pages are only partial copies)? I do not see anything in Blogger settings that allows altering this - in fact, the only choices I see in Blogger settings are 'Email Posting' and 'Permissions' (could it be that I cannot see the other setting options because I am a guest and not the blog owner?). Thanks so much, everyone! PS - I was not able to add the blog as a campaign in SEOmoz Pro, which in and of itself is odd - I've never seen that before - could this be part of the issue? Are Blogger free blogs not able to be crawled for some reason via SEOmoz Pro?
Moz Pro | holdtheonion0
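For anyone who finds this later: one common pattern for this kind of setup (sketched generically here; whether Blogger exposes a setting for it, and what that setting is called, is not something I have verified) is to keep archive and label pages out of the index with a robots meta tag, leaving the individual post as the only indexable copy:

<!-- on archive and label pages only, not on the post itself -->
<meta name="robots" content="noindex, follow" />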