Why is the Moz crawl returning URLs with variable results showing Missing Meta Desc? Example: http://nw-naturals.net/?page_number_0=47
-
Can you help me dive down into my website's guts to find out why the Moz crawl is returning URLs with variable results, and saying a page is missing a description when it's not really a page? Example: http://nw-naturals.net/?page_number_0=47.
I've asked MOZ but it's a web development issue so they can't help me with it. Has anyone had an issue with this on their website? Thank you!
-
Hi Jocelyn
First thing: missing meta descriptions (especially on pages like those) are not really an issue.
I also just crawled your whole site with Screaming Frog SEO Spider, and didn't find those links or pages internally either. And I also don't see any of them indexed in Google.
I would maybe wait a few weeks and see if the errors in the Moz report go away. It could have been something temporary.
-Dan
-
You have a WordPress website; depending on the SEO plugin you use, you can nofollow paginated posts and pages like /page/5/. We use All in One SEO, and it's easy to do there. Yoast is popular too.
KJr
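Another option, if your plugin doesn't expose a setting for parameter URLs like the ?page_number_0= ones in the question, is a robots.txt rule aimed at Moz's crawler (rogerbot). A minimal sketch, assuming that parameter name; wildcard handling varies between bots, so test before relying on it:

```
User-agent: rogerbot
Disallow: /*?page_number_0=
```

This only keeps the crawler out of those URLs; it doesn't remove anything already indexed.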
Related Questions
-
How to set up Goals/Conversions in Google Analytics with Moz
With my new Moz account, how would I go about setting up a useful Goal/Conversion on Google Analytics?
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 pages with duplicate content. Most of them come from dynamically generated URLs that have some specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. I want to do this because among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only the pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
-
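Whether a wildcard rule like the ones in the question above matches a given URL can be sanity-checked offline. A rough sketch of the matching logic follows; it mimics the common '*'/'$' convention honored by major crawlers and is an illustration, not any particular bot's implementation (Python's built-in urllib.robotparser does simple prefix matching and does not interpret '*' wildcards, which is why a hand-rolled check is used here):

```python
import re

def robots_rule_matches(rule: str, url_path: str) -> bool:
    """Check a robots.txt Disallow pattern against a URL path + query string.

    '*' matches any run of characters; a trailing '$' anchors the pattern
    to the end of the URL. Matching starts at the beginning of the path,
    per the usual robots.txt convention.
    """
    regex = re.escape(rule).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        regex = regex[:-2] + "$"
    return re.match(regex, url_path) is not None

# Pages with the parameter are matched (blocked)...
print(robots_rule_matches("/*numberOfStars=0", "/widgets?numberOfStars=0"))  # True
# ...while other pages are left intact.
print(robots_rule_matches("/*numberOfStars=0", "/widgets?numberOfStars=4"))  # False
print(robots_rule_matches("/*numberOfStars=0", "/widgets"))                  # False
```

On the second question: blank lines between groups are conventional but most parsers key on the User-agent lines, so the spacing shouldn't change the outcome.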
Is SeoMOZ Crawl Diagnostics wrong here?
We've been getting a ton of critical errors (about 80,000) in SEOmoz's Crawl Diagnostics saying we have duplicate content on our client's e-commerce site. Some of the errors are correct, but a lot of the pages are variations like:
www.example.com/productlist?page=1
www.example.com/productlist?page=2
However, in our source code we have used rel="prev" and rel="next", so in my opinion we should be all right. Would love to hear from you if we have made a mistake or if it is an error in SEOmoz. Here's a full paste of the script:
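For reference, the rel="prev"/rel="next" markup described generally looks like this (a generic sketch with placeholder URLs, not the poster's actual source):

```html
<!-- In the <head> of www.example.com/productlist?page=2 -->
<link rel="prev" href="http://www.example.com/productlist?page=1">
<link rel="next" href="http://www.example.com/productlist?page=3">
```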
-
Why is my crawl STILL in progress?
I'm a bit new here, but we've had a few crawls done already. They are always finished by Wednesday night. Our website is not large (by any means), but the crawl still says it's in progress now 3 days later. What's the deal here?!?
-
SEOmoz shows issue with canonical
Hi. When I use the on-page research tool, SEOmoz tells me I have an issue with the rel canonical tag pointing to the wrong URL, but I have it set so that on this particular page it points to itself (as per SEOmoz's recommendation). The full URL is http://www.growingyourownveg.com/how-to-grow/garlic.html. In the head section I have <base href="http://www.growingyourownveg.com/">. Have I got this wrong? Google and Bing appear to accept it OK. Thanks
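A self-referencing canonical alongside a base element, as described above, would typically look like this (a sketch; using an absolute URL in the canonical keeps the base element from affecting how it resolves):

```html
<head>
  <base href="http://www.growingyourownveg.com/">
  <!-- Canonical points at this page itself -->
  <link rel="canonical" href="http://www.growingyourownveg.com/how-to-grow/garlic.html">
</head>
```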
-
Week ending 10/28 - zero organic results?!?!
According to my Google Analytics account this is not correct, but SEOMoz is reporting on the dashboard that I had a 100% decrease in organic search visits: a paltry zero visits in one week. Is there a known issue or something that SEOMoz is working on?
-
Crawl Errors Confusing Me
The SEOMoz crawl tool is telling me that I have a slew of crawl errors on the blog of one domain. All are related to the MSNbot, and to trackbacks (which we do want to block, right?) and attachments (it makes sense to block those, too). Any idea why these are crawl issues with MSNbot and not Google? My robots.txt is here: http://www.wevegotthekeys.com/robots.txt. Thanks, MJ
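For context, a common WordPress-style robots.txt pattern for keeping crawlers out of trackback and attachment URLs looks roughly like this (a generic sketch, not the contents of the robots.txt linked above; the attachment parameter name is illustrative). Bots also interpret wildcards differently, which can explain errors that show up for one bot but not another:

```
User-agent: *
Disallow: /trackback/
Disallow: /*/trackback/
Disallow: /*?attachment_id=
```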
-
On the Crawl Diagnostics Summary, it's reporting over 100 "Title Missing or Empty" issues, but they all check out fine?
Wondering if there is a bug with the crawler or known timeout issues? Site speed is fast, but we do run a couple of large cron jobs out of hours, which may be the cause of any timeouts. But shouldn't the crawler report a timeout, rather than saying there are no title tags on 100 pages when there are? SEOmoz newbie, so still finding my feet 🙂