Moz Crawler suddenly reporting 1000s of duplicates (BE.net)
-
In the last 3-4 days we've had several thousand 'duplicate content' warnings appear in our crawl report, 99% of them related to our on-site blog. The blog runs BlogEngine.NET, and the flagged pages simply don't exist. The majority seem to be Roger trying quasi-random URLs like:
/?page=410
/?page=151
Etc., etc. The blog will return a page for these requests, but it is of course the same empty listing, since there's only unique content up to /?page=10 or so.
Two questions:
1. Did something change recently? These blogs have been up for months, and this problem has only come up this week. Did Roger change to become more aggressive lately?
2. Suggested remediation? On one of the blogs I've added a noindex, nofollow meta robots tag to any page that has a /?page querystring, and we'll see what effect that has when the next crawl runs next week. However, I'm not sure this will work, as per:
http://moz.com/community/q/functionality-of-seomoz-crawl-page-reports
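In case it's useful, this is roughly the shape of what I added. It's a minimal sketch in plain ASP.NET Web Forms rather than anything BlogEngine.NET-specific; the class name and file layout are placeholders for the theme's actual master page:

    using System;
    using System.Web.UI;
    using System.Web.UI.HtmlControls;

    // Hypothetical master-page code-behind; SiteMaster is a placeholder name.
    public partial class SiteMaster : MasterPage
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Any request carrying a ?page= querystring is one of the paged
            // duplicates, so ask crawlers not to index or follow it.
            if (!string.IsNullOrEmpty(Request.QueryString["page"]))
            {
                Page.Header.Controls.Add(new HtmlMeta
                {
                    Name = "robots",
                    Content = "noindex, nofollow"
                });
            }
        }
    }

The net effect is a <meta name="robots" content="noindex, nofollow" /> tag in the head of every /?page= request.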
Anyone else had dynamic blogs suddenly blossom into thousands of duplicate content warnings? Google (rightly) ignores these pages completely.
-
Hate to bump my own question, but it appears I spoke too soon about noindex, nofollow solving this. The duplicate errors went away for about 5 days, but then spiked again yesterday with the same problem. I've confirmed that noindex, nofollow is present on the pages being flagged as bad.
As per the best practices document:
http://moz.com/learn/seo/robotstxt
Using meta robots noindex, nofollow is the recommended option:
"Block with Meta NoIndex: This tells engines they can visit, but are not allowed to display the URL in results. This is the recommended method."
But it apparently isn't working, as evidenced by the new surge of duplicate errors. Is there anything else I can do? I don't want to explicitly block Roger in robots.txt, as that seems rather backward. Should Roger be included in the Bad Robots List?
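For reference, the robots.txt route would look something like the lines below. This is a hypothetical rule, not something I've deployed, and it assumes Roger honors wildcard patterns the way the major engines do ('rogerbot' being Moz's crawler user-agent):

    # Hypothetical rule: block Moz's crawler from the paged querystring URLs.
    User-agent: rogerbot
    Disallow: /*?page=

But as I said, blocking the very crawler that's supposed to audit the site feels backward, so I'd rather find another way.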
-
Peter -
Thanks for the clarification. I understand the philosophy at hand, and I kind of even understood it before I asked the question. I'm handling these with a mix of canonical tags and noindex/nofollow.
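For the paged URLs that do carry a little content, the canonical simply points back at the main blog listing, along these lines (example.com standing in for the real domain):

    <link rel="canonical" href="http://example.com/blog/" />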
Related to that, update:
By marking the superfluous pages noindex/nofollow, the site's error count has diminished by about 10,000 and the warning count by about 28,000, so that seems to be the way to go. The pages that did have content are 'low value' in this context, since that content was readily available elsewhere.
-
Hi there!
Thanks for writing in with a great question.
We definitely count those dynamic URLs as duplicate content. While we are pretty sure that search engines can figure this stuff out and know which URL to index, it's still considered best practice to canonicalize or otherwise direct crawlers to the original URL (as far as I know; I'm not a professional SEO, so you might be better off asking the Pro Q&A community at www.moz.com/community/q, where they are all SEOs like you).
Since some dynamic URL generators can cause problems for crawlers, we err on the side of over-reporting these issues rather than under-reporting them. We want people to know about potential issues with their sites, even if those issues don't really matter in the scheme of the site owner's specific SEO implementation plan.
In sum, we'd rather leave those judgments up to you and, at the same time, provide you with the data you need to make these decisions. I hope this helps explain our thinking here! However, if you think our crawler might be having issues and you don't want to post your site URLs here, you can always send us a support ticket at help@moz.com. That way we can examine it a bit further and provide some insight into why our crawler thinks this way!
Hope this helps!
Peter
Moz Help Team.