Help with URL parameters in the SEOmoz crawl diagnostics Error report
-
The crawl diagnostics error report is showing tons of duplicate page titles for my pages that have filtering parameters. These parameters are already blocked inside Google and Bing Webmaster Tools. How do I block them within the SEOmoz crawl diagnostics report?
-
Thanks Ben! So even if I rel=canonical all of the pages, they'll still show up in the error report. That makes it hard to get a clean report, but I get it.
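(For anyone who lands here later: rel=canonical doesn't stop a crawler from fetching a URL, it only tells engines which version should get the credit, which is why canonicalized pages can still appear in a crawl report. Each parameterized variant carries a tag like this in its head; the URL below is just a placeholder, not from the question.)

```html
<!-- In the <head> of every filtered/parameterized variant of the page -->
<link rel="canonical" href="http://www.example.com/category/" />
```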
-
Fixing Crawl Diagnostic Issues
http://www.seomoz.org/help/fixing-crawl-diagnostic-issues
Crawl Diagnostics
Related Questions
-
Moz Crawl Report: more URLs?
Hi. I have used the Moz Crawl Test and got my 3,000 URLs crawled, no issue. However, my site has more than that; is it possible to crawl the entire website? A lot of the crawled URLs in the Moz test are search string URLs and filters, so I've probably wasted about 2,500 URLs on filter URLs. Any advice, or alternative software that won't cost a fortune?
Moz Pro | YNWA
Thanks
-
Site Crawl Error
In the Moz crawl error report this message appears: MOST COMMON ISSUES: 1. Search Engine Blocked by robots.txt (Error Code 612: Error response for robots.txt). I asked the help staff, but they crawled again and nothing changed. There's only a robots.XML (not .txt) in the root of my webpage, and it contains:
User-agent: *
Allow: /
Allow: /sitemap.htm
Can anyone please help me? Thank you.
Moz Pro | nopsts
-
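A side note on the robots.txt entry above: beyond renaming the file to robots.txt, you can sanity-check the directives locally before re-crawling. This is only a sketch using Python's standard urllib.robotparser (example.com is a placeholder, and note this parser does plain prefix matching, not full search-engine wildcard rules):

```python
# Sketch: parse the robots.txt rules quoted in the question and check
# whether a crawler user agent would be allowed to fetch pages.
# "example.com" is a placeholder domain, not the asker's site.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Allow: /
Allow: /sitemap.htm
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# With these rules nothing is disallowed, so any path is fetchable.
print(rp.can_fetch("rogerbot", "http://example.com/sitemap.htm"))  # True
print(rp.can_fetch("rogerbot", "http://example.com/any/page"))     # True
```

If both checks pass locally but the crawler still reports error 612, the problem is more likely the HTTP response for /robots.txt (the misnamed file means the server probably returns an error there) than the directives themselves.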
Rogerbot crawls my site and causes errors, as it uses URLs that don't exist
Whenever the rogerbot comes back to my site for a crawl, it seems to want to crawl URLs that don't exist and thus causes errors to be reported. Example: the correct URL is as follows:
/vw-baywindow/cab_door_slide_door_tailgate_engine_lid_parts/cab_door_seals/genuine_vw_brazil_cab_door_rubber_68-79_10330/
But it seems to want to crawl the following:
/vw-baywindow/cab_door_slide_door_tailgate_engine_lid_parts/cab_door_seals/genuine_vw_brazil_cab_door_rubber_68-79_10330/?id=10330
This format doesn't exist anywhere and never has, so I have no idea where it's getting this URL format from. The user agent details I get are as follows:
IP ADDRESS: 107.22.107.114
USER AGENT: rogerbot/1.0 (http://moz.com/help/pro/what-is-rogerbot-, rogerbot-crawler+pr1-crawler-17@moz.com)
Moz Pro | spiralsites
-
Link reporting.
Is there a way in the Pro reporting where I can see a summary of the number of incoming links by type (blogs / news / wiki / directories / forums, etc.)? Even better, could the report give me an average PageRank for each link type? Thanks.
Moz Pro | CarlDarby
-
Crawl Diagnostics shows thousands of 302s from a single URL. I'm confused.
Hi guys, I just ran my first campaign and the crawl diagnostics are showing some results I'm unfamiliar with. In the warnings section it shows 2,838 redirects; this is where I want to focus. When I click there, it shows 5 redirects per page. When I click on page 2, or next page, or any other page than page 1 for that matter, this is where things get confusing: nothing shows. Downloading the CSV reveals that 2,834 of these are all showing:
URL: http://www.mydomain.com/401/login.php
url: http://www.mydomain.com/401/login.php
referrer: http://www.mydomain.com/401/login.php
location_header: http://www.mydomain.com/401/login.php
I guess I'm just looking for an explanation as to why it's showing so many to the same page, and what possible actions can be taken on my part to correct it (if needed). Thanks in advance.
Moz Pro | sethwb
-
Excluding parameters from the SEOmoz crawl?
I'm getting a ton of duplicate content errors because almost all of my pages feature a "print this page" link that adds the parameter "printable=Y" to the URL and displays a plain text version of the same page. Is there any way to exclude these pages from the crawl results?
Moz Pro | AmericanOutlets
-
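If the crawl itself can't be told to skip parameters like printable=Y, one workaround for the question above is to filter the exported CSV yourself. A minimal sketch in Python; the parameter names and URLs are illustrations, so swap in whatever your site actually appends:

```python
from urllib.parse import urlparse, parse_qs

# Query parameters that only produce duplicate views of a page.
# These names are examples from this thread; use your own site's.
NOISE_PARAMS = {"printable", "direction", "order"}

def is_noise_url(url: str) -> bool:
    """Return True if the URL carries any query parameter from NOISE_PARAMS."""
    query = parse_qs(urlparse(url).query)
    return any(name in NOISE_PARAMS for name in query)

urls = [
    "http://example.com/page",
    "http://example.com/page?printable=Y",
    "http://example.com/forums/Ghost/?direction=asc",
]
clean = [u for u in urls if not is_noise_url(u)]
print(clean)  # only http://example.com/page survives
```

Running the exported URL column through a filter like this leaves only the canonical pages, which makes duplicate-content warnings much easier to act on.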
How long does a crawl take?
A crawl of my site started on the 8th of July and is still going on. Is there something wrong?
Moz Pro | Brian_Worger
-
How can I clean up my crawl report from duplicate records?
I am viewing my Crawl Diagnostics Report. My report is filled with data which really shouldn't be there. For example, I have a page: http://www.terapvp.com/forums/Ghost/ This is a main forum page. It contains a list of many threads, and the list can be sorted on many values. The page is canonicalized, and has been since it was created. My crawl report shows this page listed 15 times:
http://www.terapvp.com/forums/Ghost/?direction=asc
http://www.terapvp.com/forums/Ghost/?direction=desc
http://www.terapvp.com/forums/Ghost/?order=post_date
and so forth. Each of those pages uses the same canonical reference shared above. I have three questions:
1. Why is this data appearing in my crawl report? These pages are properly canonicalized.
2. If these pages are supposed to appear in the report for some reason, how can I remove them? My desire is to focus on any pages which may have an issue that needs to be addressed. This site has about 50 forum pages, and when you add an extra 15 pages per forum, it becomes a lot harder to locate actionable data. To make matters worse, these forum indexes often have many pages, so if I have a "Corvette" forum that is 10 pages long, there will be 150 extra pages just for that particular forum in my crawl report.
3. Is there anything I am missing? To the best of my knowledge everything is set up according to best SEO practices. If there are any other opinions, I would like to hear them.
Moz Pro | RyanKent