Campaign. Only 1 page is crawled
-
I set up a campaign a couple of weeks ago and noticed that only one page has been crawled. Is there something I need to do to get all pages crawled?
-
Hi!
It looks like Roger is getting stuck on the 301 redirects you have set up for your campaigns. The best way to get this resolved is to archive the existing campaigns and create new ones with the updated domain. For example, if seomoz.org is redirecting to www.seomoz.org, the campaign should be set up with the subdomain (www.seomoz.org).
You can archive your existing campaigns in case you ever need to reactivate them. Hopefully this helps. Thanks!
Best,
Sam
Moz Helpster
-
So it sounds like I should set my domain to whatever the canonical is?
Edit: How do I change the domain? I can't find the area to edit it.
-
The most common reasons for this are at https://seomoz.zendesk.com/entries/409821-Why-Isn-t-My-Site-Being-Crawled-You-re-Not-Crawling-All-My-Pages- . If those don't solve it, send an email to help@seomoz.org and we'll help you figure it out.
Keri
Related Questions
-
Site Crawl Error
In the Moz crawl report this error message appears: "MOST COMMON ISSUES: 1. Search Engine Blocked by robots.txt (Error Code 612: Error response for robots.txt)". I asked the help staff, but they crawled again and nothing changed. There is only a robots.XML (not TXT) file in the root of my webpage, and it contains:
User-agent: *
Allow: /
Allow: /sitemap.htm
Can anyone please help me? Thank you.
Moz Pro | nopsts
-
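The likely fix implied by the question above is the filename itself: crawlers only request a file named exactly robots.txt from the site root and ignore a robots.xml. A minimal sketch of the renamed file, keeping the same rules the question quotes:

```text
# Must be served as /robots.txt; a robots.xml file is never read by crawlers.
User-agent: *
Allow: /
# "Allow: /sitemap.htm" is redundant once "Allow: /" permits everything.
```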
Aren't domain.com/page and domain.com/page/ the same thing?
Hi All, A recent Moz scan has turned up quite a few duplicate content notifications, all of which have the same issue. For instance: domain.com/page and domain.com/page/ are listed as duplicates, but I was under the impression that these pages would, in fact, be the same page. Is this even something to bother fixing or a fluke scan? If I should fix it does anyone know of an .htaccess modification that might be used? Thanks!
Moz Pro | G2W
-
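On the question above: /page and /page/ are technically distinct URLs, and they only resolve to one page if the server redirects one form to the other. A hedged .htaccess sketch for Apache mod_rewrite, assuming the trailing-slash version is the one to keep:

```apache
RewriteEngine On
# Only rewrite requests that are not existing files (e.g. /style.css).
RewriteCond %{REQUEST_FILENAME} !-f
# 301-redirect /page to /page/ so a single URL serves the content.
RewriteRule ^(.*[^/])$ /$1/ [L,R=301]
```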
Campaign link not working?
I imagine most of the initial problems with the domain migration to moz.com will be cleaned up rather quickly. I did, however, notice right away that my links to my campaigns from the dashboard were not working. I then clicked the campaigns tab up top, and my campaign links worked from there. Just thought I'd share real quick in case anyone else is having the same issue.
Moz Pro | CDUBP
-
Slowing down SEOmoz Crawl Rate
Is there a way to slow down the SEOmoz crawl rate? My site is pretty huge and I'm getting 10k pages crawled every week, which is great. However, I sometimes get multiple page requests in one second, which slows down my site a bit. If this feature exists I couldn't find it; if it doesn't, it's a great idea to have, similar to how Googlebot does it. Thanks.
Moz Pro | corwin
-
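One robots.txt-level option worth trying for the question above is the Crawl-delay directive. It is non-standard, and whether SEOmoz's crawler (rogerbot) honors it, and with what cap, is an assumption to verify with support, but a sketch looks like:

```text
# Ask Moz's crawler to wait between requests (value in seconds).
# Crawl-delay is non-standard; support varies by crawler (Googlebot ignores it).
User-agent: rogerbot
Crawl-delay: 2
```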
Why do pages with canonical urls show in my report as a "Duplicate Page Title"?
e.g.:
Page One
<title>Page one</title>
(No canonical URL)
Page Two
<title>Page one</title>
Page two is counted as being a page with a duplicate page title. Shouldn't it be excluded?
Moz Pro | DPSSeomonkey
-
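For reference, and assuming Page Two is meant to point at Page One as its canonical, the tag the question is describing would sit in Page Two's head (the URL here is hypothetical):

```html
<!-- In Page Two's <head>: declares Page One as the canonical URL. -->
<link rel="canonical" href="http://example.com/page-one" />
```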
How do I get the Page Authority of individual URLs in my exported (CSV) crawl reports?
I need to prioritize fixes somehow. It seems the best way to do this would be to filter my exported crawl report by the Page Authority of each URL with an error/issue. However, Page Authority doesn't seem to be included in the crawl report's CSV file. Am I missing something?
Moz Pro | Twilio
-
Why is Roger crawling pages that are disallowed in my robots.txt file?
I have specified the following in my robots.txt file:
Disallow: /catalog/product_compare/
Yet Roger is crawling these pages (1,357 errors so far). Is this a bug, or am I missing something in my robots.txt file? Here's one of the URLs that Roger pulled:
example.com/catalog/product_compare/add/product/19241/uenc/aHR0cDovL2ZyZXNocHJvZHVjZWNsb3RoZXMuY29tL3RvcHMvYWxsLXRvcHM_cD02/
Please let me know if my problem is in robots.txt or if Roger spaced this one. Thanks!
Moz Pro | MeltButterySpread
-
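A quick way to sanity-check a Disallow rule like the one above is Python's standard urllib.robotparser. This sketch applies the rule from the question to a hypothetical example.com URL:

```python
from urllib import robotparser

# Parse the same rules described in the question (domain is hypothetical).
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /catalog/product_compare/",
])

blocked = "http://example.com/catalog/product_compare/add/product/19241/"
allowed = "http://example.com/tops/all-tops"

print(rp.can_fetch("rogerbot", blocked))  # False: URL matches the Disallow prefix
print(rp.can_fetch("rogerbot", allowed))  # True: no rule blocks it
```

If the parser agrees the URL is blocked but Roger still fetches it, that points at the crawler rather than the file.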
On Page Optimization Reports - Huh?
I've been working hard to use this EXCELLENT tool to optimize some of what I consider my most important pages . . . But the automatic tool that pulls pages and grades them (the "summary" of the "on page" report) . . . I don't get it. It only graded three of my pages, and I don't understand how it chose which keywords to grade them for. I'm just very confused. I don't understand how it chose the pages to grade, not just the words it chose to grade them against. 😞
Moz Pro | damon12120