How to set the crawler or reports to ignore a URL
-
I have a mobile version of a site with a URL parameter that disables the mobile view on smartphones ("view full site"). The string looks like this: example.com/page-name.html?mobile=off
I need the SEOmoz Pro reports or crawler to ignore it, because the crawler visits both versions of the site and then reports them as duplicate content. Is there a settings page I haven't visited yet that will set this?
-
Thanks for the confirmation. I hadn't wanted to put that in the robots.txt file because of inquisitive, non-robot traffic, if you get my meaning, but in it goes; I can't stand seeing errors.
-
Got it, thanks for jumping in so quick.
-
Hey Str8,
Donford is correct. You would need to specifically block our crawler from the mobile pages in your robots.txt file. Unfortunately, we don't currently have a way to disregard specific pages or errors in the web app, but we are looking to add that function sometime in the future.
I hope this helps. Please let me know if you have any other questions.
-
Hi Str8
**User-agent: rogerbot**
**Disallow: /*mobile=**
I think that may do the trick... somebody may need to confirm this.
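For anyone who wants to sanity-check a pattern like that before adding it, below is a rough Python sketch that emulates Google-style wildcard matching (treating * as "any run of characters" and anchoring the rule at the start of the path). Whether rogerbot interprets wildcards exactly this way is an assumption, so confirm against Moz's own documentation; the example.com URLs are just the placeholders from the question.

```python
import re
from urllib.parse import urlsplit

def rule_matches(rule, url):
    """Check whether a robots.txt Disallow rule containing '*' wildcards
    matches a URL's path + query string, Googlebot-style (rule anchored
    at the start of the path). Rogerbot is assumed to behave the same way."""
    parts = urlsplit(url)
    target = parts.path + ("?" + parts.query if parts.query else "")
    # Escape the rule, then turn the escaped '*' back into a regex wildcard
    pattern = "^" + re.escape(rule).replace(r"\*", ".*")
    return re.match(pattern, target) is not None

# Placeholder URLs from the question above
for url in ("http://example.com/page-name.html?mobile=off",
            "http://example.com/page-name.html"):
    print(url, "->", "blocked" if rule_matches("/*mobile=", url) else "allowed")
```

If the rule turns out to match more than intended, a tighter pattern such as /*?mobile=off can be tested the same way before it goes into robots.txt.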
Related Questions
-
Custom Report Dates
Is it possible to run manual reports for set dates? I would like to give a current standing report to a client rather than wait a whole month.
Moz Pro | | creati540 -
How can I create a keyword ranking report with a custom date range?
I need to see change since campaign inception, and would like the flexibility of other date ranges (in list form, not per keyword). Does Moz have a way to do this yet? This is the one feature I really need that Moz hasn't historically had. Can the API be modified for this? Would appreciate some feedback, guys!
Moz Pro | | dspete0 -
Duplicate page report
We ran a CSV spreadsheet of our crawl diagnostics related to duplicate URLs after waiting 5 days with no response on how Rogerbot can be made to filter. My IT lead tells me he thinks the label on the spreadsheet is showing "duplicate URLs", and that is – literally – what the spreadsheet is showing. It thinks that a database ID number is the only valid part of a URL. To replicate: just filter the spreadsheet for any number that you see on the page. For example, filtering for 1793 gives us the following result: http://truthbook.com/faq/dsp_viewFAQ.cfm?faqID=1793 http://truthbook.com/index.cfm?linkID=1793 http://truthbook.com/index.cfm?linkID=1793&pf=true http://www.truthbook.com/blogs/dsp_viewBlogEntry.cfm?blogentryID=1793 http://www.truthbook.com/index.cfm?linkID=1793 There are a couple of problems with the above: 1. It gives the www result as well as the non-www result. 2. It is seeing the print version as a duplicate (&pf=true), but these are blocked from Google via the noindex header tag. 3. It thinks that different sections of the website with the same ID number (faq / blogs / pages) are the same thing. In short: this particular report tells us nothing at all. I am trying to get a perspective from someone at SEOmoz to determine whether he is reading the result correctly or there is something he is missing. Please help. Jim
Moz Pro | | jimmyzig0 -
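Not an official answer, but while this one sits open, a short script can separate rows that are genuinely the same page from rows that merely share an ID number, by stripping the www prefix and the print-view flag before comparing. This is only a sketch: the crawl_diagnostics.csv filename and the "URL" column header are assumptions about the export, so adjust them to match the actual file.

```python
import csv
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalize(url):
    """Collapse variants that point at the same page: treat www and
    non-www hosts as equal and drop the print-view flag (pf=true)."""
    parts = urlsplit(url)
    host = parts.netloc.lower()
    if host.startswith("www."):
        host = host[len("www."):]
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query) if k != "pf"])
    return urlunsplit((parts.scheme, host, parts.path, query, ""))

def rows_mentioning(csv_path, record_id, url_column="URL"):
    """Yield (original, normalized) URL pairs for rows that contain record_id."""
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            url = row.get(url_column, "")
            if record_id in url:
                yield url, normalize(url)

if __name__ == "__main__":
    for original, normalized in rows_mentioning("crawl_diagnostics.csv", "1793"):
        print(original, "->", normalized)
```

URLs that normalize to different values (the faqID, blogentryID, and linkID pages, for example) are not duplicates of one another even though they share the number 1793, which is exactly the distinction a plain spreadsheet filter hides.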
Why does Crawl Diagnostics report this as duplicate content?
Hi guys, we've been addressing a duplicate content problem on our site over the past few weeks. Lately we've implemented rel canonical tags in various parts of our ecommerce store and observed the effects over time by tracking changes in both SEOmoz and Webmaster Tools. Although our duplicate content errors are definitely decreasing, I can't help but wonder why some URLs are still being flagged with duplicate content by our SEOmoz crawler. Here's an example, taken directly from our Crawl Diagnostics report. URL with 4 duplicate content errors: /safety-lights.html. Duplicate content URLs: /safety-lights.html?cat=78&price=-100 /safety-lights.html?cat=78&dir=desc&order=position /safety-lights.html?cat=78 /safety-lights.html?manufacturer=514. What I don't understand is that all of the URLs with URL parameters have a rel canonical tag pointing to the 'real' URL /safety-lights.html. So why is the SEOmoz crawler still flagging this as duplicate content?
Moz Pro | | yacpro130 -
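A quick way to rule out the most common cause here is to confirm that every parameterized variant actually serves the canonical tag in its rendered HTML. The sketch below is illustrative only: example.com stands in for the store's domain (which isn't in the post), and the regex is a rough check that expects rel to appear before href inside the link tag.

```python
import re
import urllib.request

# Rough check: assumes rel="canonical" appears before href in the <link> tag
CANONICAL_RE = re.compile(
    r'<link[^>]+rel=["\']canonical["\'][^>]*href=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

def canonical_of(url):
    """Fetch a page and return the href of its rel=canonical link, if any."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    match = CANONICAL_RE.search(html)
    return match.group(1) if match else None

# example.com stands in for the store's real domain
variants = [
    "http://example.com/safety-lights.html?cat=78&price=-100",
    "http://example.com/safety-lights.html?cat=78&dir=desc&order=position",
    "http://example.com/safety-lights.html?cat=78",
    "http://example.com/safety-lights.html?manufacturer=514",
]

for url in variants:
    print(url, "->", canonical_of(url))
```

If every variant reports /safety-lights.html, the tags are in place and the question becomes how recently the crawler re-fetched those pages; if any variant reports None, that page is a genuine duplicate-content candidate.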
Broken links in the pdf of the On Page Report
Hi, I run an individual On Page report for a particular URL, then I export it as a PDF. The URL appears in the PDF and looks fine, but when you click on it it goes to a 'page not found'. I know the URL is correct. When I hover over the URL in the PDF I notice that the word 'Good' is at the end of my URL, but I did not put this in there. If I give the report to a client it doesn't look so good. http://www.narellanpools.com.au/local-contact/narellan-pools-alburywodongaGood Is this a bug? Cheers Virginia
Moz Pro | | VirginiaC0 -
Very Slow Advanced Reports
Hello All - I've been running some Advanced Reports again lately, and they seem much slower than I remember from the last time I ran some. Currently I've got one (Inbound Links) report at 2,500 out of 10,000 links retrieved through LSAPI, and it's been at that point for about 6 hours. Did something get cloggered on the reports, or is this just the expected performance?
Moz Pro | | icecarats0 -
How often are open site explorer reports updated?
I collect the information contained in Open Site Explorer reports and CSV backlink audits on the 15th of every month. I noticed that the numbers are unchanged from 9/15 to 10/15. How often are the reports typically updated?
Moz Pro | | seagreen0