7,608 High Priority Crawl Diagnostic problems
-
Hey There,
I have an e-commerce site that is showing 7,608 high-priority issues to fix - 7,536 of them are duplicate content. What's the most effective process to start with?
I'm open to outsourcing some of the work to an expert - email me on dave@emanbee.com
Thanks for your time,
Dave
-
Cheers Kate.
From doing more reading, Moz/Google treats thin content (300 words or fewer) or webpages sharing 95% of the same HTML code as duplicate. That will be the majority of what is showing in my crawl diagnostics.
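As a purely illustrative sketch (this is not Moz's actual algorithm), you can get a rough sense of how similar two pages' HTML is with a sequence matcher; a ratio near or above 0.95 would fit the threshold described above. The sample pages below are made up:

```python
from difflib import SequenceMatcher

def html_similarity(html_a: str, html_b: str) -> float:
    """Return a 0..1 similarity ratio between two HTML documents."""
    return SequenceMatcher(None, html_a, html_b).ratio()

# Two hypothetical template pages that differ only in a small content block.
page_a = "<html><head><title>Brands</title></head><body><nav>...</nav><p>Guitars</p></body></html>"
page_b = "<html><head><title>Brands</title></head><body><nav>...</nav><p>Drums</p></body></html>"

# Near-identical template HTML yields a ratio close to 1.0.
print(round(html_similarity(page_a, page_b), 2))
```

Pages built from the same template with only a short product list swapped in will score very high on a check like this, which is why template-heavy category pages show up as duplicates.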
That means I'm back to your original advice of fixing up duplicate page titles from GWT.
Currently, the canonical tags are generated sitewide through a template function. Without full control over the canonical tag I can't fix or structure things as easily as I'd like, so I will see if a web dev can help out with this. We should be able to add the whole link too.
Thanks again,
Dave
-
Looks like Moz isn't taking the canonical into account; as long as it's there, you're fine. But I'd warn you not to use relative canonical links (/directory/page/ vs. http://www.domain.com/directory/page/): link to the whole URL. I've seen this go wrong in the past. It's not causing issues now but could in the future.
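To illustrate the difference (the markup below is generic, not taken from the site in question):

```html
<!-- Risky: relative canonical; crawlers and parsers can resolve this inconsistently -->
<link rel="canonical" href="/directory/page/">

<!-- Safer: the full absolute URL, as recommended above -->
<link rel="canonical" href="http://www.domain.com/directory/page/">
```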
-
Hi Katemorris,
Thanks again for getting back to me.
I have started going through and fixing up pages. I'm hoping you can clarify something in Moz for me?
In Moz > Crawl Diagnostics > Duplicate Page Content (the largest and only high-priority issue listed for me) > the first link in the list > show the duplicate pages.
Below is an example of 4 links that are all showing as duplicates of http://www.mooloolabamusic.com.au/page/brands in the Moz software:
http://www.mooloolabamusic.com.au/live-sound-lighting/lighting/atmospheric-effects/?pr=72-82&rf=pr
http://www.mooloolabamusic.com.au/live-sound-lighting/lighting/atmospheric-effects/?pr=0-72&rf=pr
http://www.mooloolabamusic.com.au/studio-production/?pr=1732-1828&rf=pr
http://www.mooloolabamusic.com.au/studio-production/?pr=1770-1827&rf=pr
Can you please clarify how these pages have duplicate content and how to fix this? There are thousands like this.
When I have a look at them using the Moz search bar, there is already a canonical tag in the header. Is it not working, is the Moz software not picking it up, or is the site template creating 'duplicate content'?
Thanks so much for your time,
Dave
-
Start in Google Webmaster Tools or in the Moz crawl. Identify the pages with the same title tag and work through that list. The title tag is usually a good indicator of duplicate content.
If the content is definitely duplicate, determine whether it's a useful duplicate. If so, use a canonical tag pointing from the duplicate to the original. If it's duplicated for no real reason, find out how to get rid of the duplicate. The cause can be anything from unnecessary parameters to tag pages, and many more.
You'll start to see trends in the data; try to fix the bigger problems as you see them.
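As a hypothetical illustration of spotting parameter-driven duplicates like the ?pr= URLs above, you could group a crawl export by each URL's parameter-stripped form; any group with more than one member is a candidate for a canonical tag or parameter handling:

```python
from collections import defaultdict
from urllib.parse import urlsplit, urlunsplit

def strip_params(url: str) -> str:
    """Drop the query string and fragment, keeping scheme, host, and path."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

def duplicate_groups(urls):
    """Group URLs by their parameter-stripped base URL."""
    groups = defaultdict(list)
    for url in urls:
        groups[strip_params(url)].append(url)
    # Keep only bases that collected more than one parameterized variant.
    return {base: dupes for base, dupes in groups.items() if len(dupes) > 1}

# Sample URLs from the crawl discussed above.
crawl = [
    "http://www.mooloolabamusic.com.au/studio-production/?pr=1732-1828&rf=pr",
    "http://www.mooloolabamusic.com.au/studio-production/?pr=1770-1827&rf=pr",
    "http://www.mooloolabamusic.com.au/page/brands",
]
print(duplicate_groups(crawl))
```

Run against a full crawl export, a listing like this makes the "trends in the data" visible: the bases with the most parameterized variants are the biggest problems to fix first.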
Related Questions
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate page content issues. Most of them come from dynamically generated URLs that have specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages, and to stop it I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters, because among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few related topics, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:
User-agent: dotbot
Disallow: /*numberOfStars=0
User-agent: rogerbot
Disallow: /*numberOfStars=0
My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
-
Having 1 page crawl error on 2 sites
Help! A few weeks back, my dev team did some "changes" (that I don't know anything about), but ever since then, my Moz crawl has only shown one page for either http://betamerica.com or http://fanex.com. Moz service was helpful in talking about a redirect loop that existed, and I asked my team to fix it, which it looks to me like they have. Still, 1 page. I used SEO Book's spider tool and it also only sees 1 page, and sees the sites as http://https://betamerica.com (for example), which is just weird. I don't know enough about HT Access or server stuff to figure out what's going on, so if someone can help me figure that out, I'd appreciate it.
-
Crawl diagnostics incorrectly reporting duplicate page titles
Hi guys, I have a question about the duplicate page titles being reported in my crawl diagnostics. It appears that the URL parameter "?ctm" is causing the crawler to think that duplicate pages exist. In GWT, we've specified to use the representative URL when that parameter is used. It appears to be working, since when I search site:http://www.causes.com/about?ctm=home, I am served a single search result for www.causes.com/about. That begs the question: why is the SEOMoz crawler saying there are duplicate page titles when Google isn't (they don't appear under the HTML improvements for duplicate page titles)? A canonical URL is not used for this page, so I'm assuming that may be one reason why. The only other thing I can think of is that Google's crawler is simply "smarter" than the Moz crawler (no offense, you guys put out an awesome product!). Any help is greatly appreciated and I'm looking forward to being an active participant in the Q&A community! Cheers, Brad
-
Is there an easy way to see what pages are crawled?
Hello! Like the question says... is there an easy way to see which pages have been crawled? I don't mean the ones that have issues, just the ones that have been crawled? Regards,
-
"Issue: Duplicate Page Content " in Crawl Diagnostics - but sample pages are not related to page indicated with duplicate content
In the crawl diagnostics for my campaign, the duplicate content warnings have been increasing, but when I look at the sample pages that SEOMoz says have duplicate content, they are completely different pages from the page identified. They have different titles, meta descriptions, and HTML content, and are often different types of pages, e.g. a product page reported as a duplicate of a category page. Does anyone know what could be causing this?
-
SEOmoz Dashboard Report: Crawl Diagnostic Summary
Hi there, I'm noticing that the total errors for our website have been going up and down drastically almost every other week. 4 weeks ago there were over 10,000 errors. 2 weeks ago there were barely 1,000 errors. Today I'm noticing it's back to over 12,000 errors. It says the majority of the errors are duplicate page content and page titles. We haven't made any changes to the titles or the content. Some insight into and explanation of this would be much appreciated. Thanks, Gemma
-
How long would a SEOMoz crawl usually take for a site with around 4000 pages?
We are working through optimising a site for one of our clients, and the SEOMoz crawl progress says it has been running since the 8th of February. It's now almost a week later and it still hasn't finished. The first run took a few days; is there any way of restarting the process?
-
Initial Crawl Questions
Hello. I just joined and used the Crawl tool. I have many questions and am hoping the community can offer some guidance.
1. I received an Excel file with 3k+ records. Is there a friendly online viewer for the Crawl report, or is the Excel file the only output?
2. Assuming the Excel file is the only output, the Time Crawled is a number (i.e. 1305798581). I have tried changing the field to a date/time format, but that did not work. How can I view the field as a normal date/time, such as May 15, 2011 14:02?
3. I use the ™ symbol in my Title. This symbol appears in the output as a few ASCII characters. Is that a concern? Should I remove the trademark symbol from my Title?
4. I am using XenForo forum software. All forum threads automatically receive a Title Tag and Meta Description as part of a template. The Crawl Test report shows my Title Tag and Meta Description as blank for many threads. I have looked at the source code of several pages and they all have clean Title tags, and I don't understand why the Crawl Report doesn't show them. Any ideas?
5. In some cases the HTTP Status Code field shows a result of "3". What does that mean?
6. For every URL in the Crawl Report there is an entry in the Referrer field. What exactly is the relationship between these fields? I thought the Crawl Tool would inspect every page on the site. If a page doesn't have a referring page, is it missed? What if a page has multiple referring pages? How is that information displayed?
7. Under Google Webmaster Tools > Site Configurations > Settings > Parameter Handling I have the options set to either "Ignore" or "Let Google Decide" for various URL parameters. These are "pages" of my site which should mostly be ignored. For example, a forum may have 7 headers, each of which can be sorted in ascending or descending order. The only page that matters is the initial page; all the rest should be ignored by Google and the crawl. Presently there are 11 records for many pages which really should only have one record, due to these various sort parameters. Can I configure the crawl so it ignores parameter pages?
I am anxious to get started on my site. I dove into the crawl results and it's just too messy in its present state for me to pull out any actionable data. Any guidance would be appreciated.