We have a campaign, but Moz states only 1 page has been crawled by the SEOmoz bots. What needs to be done to have all the pages crawled?
-
We have a campaign running for a client in SEOmoz, and only 1 page has been crawled according to SEOmoz's data. There are many pages on the site, and a new blog with more articles posted each month, yet Moz is not crawling anything aside from perhaps the home page. The odd thing is that Moz is nevertheless reporting data on all the other inner pages for errors, duplicate content, etc.
What should we do so that all the pages get crawled by Moz? I don't want to delete the campaign and start over, as we followed all the steps properly when setting it up.
Thank you for any tips here.
-
That's a major glitch, David, if that's what happened. I would suggest it's something that needs to be checked out. As you indicate, there was a fallback in place previously; is that not the case with the new update?
Oh, and it's a glitch that took about 6 minutes of my life responding to an issue I never assumed could have the solution you described.
-
Hello!
It looks like you set up a subdomain campaign but left out the "www" subdomain prefix. A warning should have popped up when you attempted to submit the setting; let me know if it never appeared. Unfortunately, you will have to delete the campaign and start over.
Here's an example of how you could set it up: http://www.screencast.com/t/Buu05s3EkiU
This also applies to your other question for the other domain.
Hope this helps!
David Lee
Moz Help Team
-
Ensure you have no pages or directories blocked by your robots.txt file. Has Google indexed your pages yet? Is it a WordPress website?
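To rule out robots.txt as the cause, here is a minimal sketch using Python's standard-library parser. The domain and rules below are placeholders, not the site's real file; Moz's crawler identifies itself as "rogerbot".

```python
# Sketch: check whether specific URLs are fetchable for Moz's crawler.
# The rules and domain below are placeholders, not the real site's file.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# Moz's crawler identifies itself as "rogerbot"; a blanket Disallow
# for all user agents blocks it along with everyone else.
print(rp.can_fetch("rogerbot", "https://example.com/blog/post-1"))   # True
print(rp.can_fetch("rogerbot", "https://example.com/private/page"))  # False
```

If the live robots.txt disallows everything (`Disallow: /`) or the blog directory, that alone could explain a one-page crawl.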
Related Questions
-
Filter Pages
Howdy Moz Forum! I have a headache of a job over here in the UK and I'd welcome any advice! (It's sunny today, only 1 of 5 days in the year, and I'm stuck on this!) I have a client that currently has 22,000 pages indexed in Google, with almost 4,000 showing as duplicate content. The site has a "jobs" and a "candidates" list, which can produce all sorts of variations such as job title, language, location, etc. The filter pages all seem to be indexed, and the static pages are indexed too. For example, if there were 100 jobs at Moz being advertised, they would be displayed on the following URL structure:
/moz
/moz/moz-jobs
/moz/moz-jobs/page/2
/moz/moz-jobs/page/3
/moz/moz-jobs/page/4
/moz/moz-jobs/page/5
etc., with some going up to /page/250. I have checked GA data and can see that although there are tons of pages indexed this way, none of them past the "/moz/moz-jobs" URL get any organic traffic.

So, my first question: should I use rel=canonical tags on all the /page/2, /page/3, etc. results and point them all at the /moz/moz-jobs parent page? The reason is that these pages have the same title and content and come very close to "duplicate" content, even though they do pull in different jobs. I hope I'm making sense?

There are also a lot of pages indexed in a form such as https://www.examplesite.co.uk/moz-jobs/search/page/9/?candidate_search_type=seo-consulant&candidate_search_language=blank-language. These are filter pages and, as far as I'm concerned, shouldn't really be indexed.

Second question: should I "nofollow" everything after /page in this instance, to keep things tidy? I don't want all the variations indexed! Any help or general thoughts would be much appreciated! Thanks.
-
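On the pagination question above: a rel=canonical tag pointing /page/N listings back at the parent listing can be derived mechanically from the URL. A minimal sketch in Python with illustrative URLs; whether canonicalising paginated pages to the first page is right for a given site is a separate judgment call.

```python
# Sketch: derive a canonical tag that points /moz/moz-jobs/page/N
# listings back at the parent /moz/moz-jobs page. URLs are illustrative.
from urllib.parse import urlparse

def canonical_for(url: str) -> str:
    """Strip a trailing /page/N segment, then emit the canonical tag."""
    parts = urlparse(url)
    segments = [s for s in parts.path.split("/") if s]
    if len(segments) >= 2 and segments[-2] == "page" and segments[-1].isdigit():
        segments = segments[:-2]
    path = "/" + "/".join(segments)
    return f'<link rel="canonical" href="{parts.scheme}://{parts.netloc}{path}" />'

print(canonical_for("https://www.examplesite.co.uk/moz/moz-jobs/page/3"))
# <link rel="canonical" href="https://www.examplesite.co.uk/moz/moz-jobs" />
```

A URL without a trailing /page/N segment passes through unchanged, so the same helper can safely run on every listing page.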
Crawl Diagnostics says a page is linking but I can't find the link on the page.
Hi, I have just got my first Crawl Diagnostics report and I have a question. It says that this page: http://goo.gl/8py9wj links to http://goo.gl/Uc7qKq, which is a 404. I can't recognize the 404 URL anywhere on the page, and when searching in the code I can't find the %7Blink%7D part of the URL that causes the problem. I hope you can help me understand what triggers it 🙂
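One clue: %7B and %7D are the percent-encoded forms of "{" and "}", so the broken URL contains a literal {link} placeholder that was never substituted. A quick decode with Python's standard library shows it:

```python
# %7B and %7D are the percent-encoded forms of "{" and "}", so the
# 404 URL contains a literal, unsubstituted "{link}" template variable.
from urllib.parse import unquote

print(unquote("%7Blink%7D"))  # {link}
```

A plausible cause is a template on the page emitting an unfilled {link} variable in an href, which the crawler then resolves and URL-encodes; searching the template source for "{link}" rather than "%7Blink%7D" may find it.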
-
'Duplicate Page Content' for dissimilar pages
I'm using Moz's Crawl Diagnostics to try and clean up some SEO priorities for our website (http://www.craftcompany.co.uk). However, virtually all of the pages being categorised as duplicate content are not the same, or indeed similar. For instance, these three pages have been deemed duplicates:
http://www.craftcompany.co.uk/pme-rose-leaf-veined-plunger.html
http://www.craftcompany.co.uk/double-faced-satin-ribbon-black-25mm-wide.html
http://www.craftcompany.co.uk/double-faced-satin-maroon-10mm-wide-25mt.html
Can anyone give me an insight into why this is? Many thanks!
-
Where has the old SEOmoz crawl tool gone? I can't seem to find it
I'm looking for the (SEO)moz crawl tool, but I can't find it. Where has it gone?
-
How to Properly Set Up My Campaign
My URL www.pcf.org redirects to http://www.pcf.org/site/c.leJRIROrEpH/b.5699537/k.BEF4/Home.htm because of my old CMS, and this cannot be changed. Should I set it up as a subdomain, root domain, or subfolder campaign? For example, using the homepage URL http://www.pcf.org/site/c.leJRIROrEpH/b.5699537/k.BEF4/Home.htm I get the following results:

Subdomain: "You've decided to set up a subdomain campaign, but entered the subfolder path: www.pcf.org/site/c.leJRIROrEpH/b.5699537/k.BEF4. Don't worry, we'll switch that for you and crawl everything on your subdomain: www.pcf.org. If you meant to set this up to only crawl pages in the subfolder /site/c.leJRIROrEpH/b.5699537/k.BEF4, click 'Go back and change' and choose the subfolder option in step 1."

Root domain: "You've decided to set up a root domain campaign, but entered the subfolder path: www.pcf.org/site/c.leJRIROrEpH/b.5699537/k.BEF4. Don't worry, we'll switch that for you and crawl everything in the subfolder /site/c.leJRIROrEpH/b.5699537/k.BEF4."

Subfolder: "It looks like you submitted a URL that relates to an individual file: www.pcf.org/site/c.leJRIROrEpH/b.5699537/k.BEF4/Home.htm rather than a folder: www.pcf.org/site/c.leJRIROrEpH/b.5699537/k.BEF4. Don't worry, we'll switch that for you and crawl everything in that folder instead. If you meant to set this up for a different folder, click 'Go back and change' and enter a different path."

To complicate things further, as you move through the site, the only part of the URL that stays consistent 100% of the time is http://www.pcf.org/site/c.leJRIROrEpH/. Depending on where you are on the site, it appends the URL. For example, one page is http://www.pcf.org/site/c.leJRIROrEpH/b.6185669/k.9792/Sherry_Galloway.htm while another is http://www.pcf.org/site/c.leJRIROrEpH/b.5800843/k.94DE/Everyday_Heroes.htm.
-
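For the question above, the three campaign scopes slice the same URL differently; a small illustration with Python's urlparse. The splitting here is an assumption made for illustration, not Moz's actual logic.

```python
# Illustration of what each campaign scope would cover for the URL in
# question. The splitting is an assumption for illustration, not
# Moz's actual logic.
from urllib.parse import urlparse

url = "http://www.pcf.org/site/c.leJRIROrEpH/b.5699537/k.BEF4/Home.htm"
parts = urlparse(url)

subdomain_scope = parts.netloc                       # everything on www.pcf.org
root_scope = ".".join(parts.netloc.split(".")[-2:])  # everything on pcf.org
folder_scope = parts.netloc + parts.path.rsplit("/", 1)[0]  # just that folder

print(subdomain_scope)  # www.pcf.org
print(root_scope)       # pcf.org
print(folder_scope)     # www.pcf.org/site/c.leJRIROrEpH/b.5699537/k.BEF4
```

Since only http://www.pcf.org/site/c.leJRIROrEpH/ stays constant across the site, a subfolder campaign scoped to the deeper /b…/k… folder would miss most pages, which suggests the subdomain option is the safer choice here.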
Campaign web crawl has failed the last 4 times
I have 4 websites set up in my Pro dashboard. The only site that isn't getting crawled is an HTTPS site. It worked for over a year, but the past 4 crawls (an entire month now) have returned only one page crawled. Is there something going on with the crawler? I really need to be able to see these stats. Has anyone else experienced this issue?
-
"Issue: Duplicate Page Content" in Crawl Diagnostics - but the sample pages are not related to the page flagged with duplicate content
In the Crawl Diagnostics for my campaign, the duplicate content warnings have been increasing, but when I look at the sample pages that SEOmoz says have duplicate content, they are completely different pages from the page identified. They have different titles, meta descriptions, and HTML content, and are often different types of pages, e.g. a product page flagged as duplicate content vs. a category page. Anyone know what could be causing this?
-
Status of 404 pages
Hi all, One of my websites was crawled by SEOmoz this week. The crawl showed me 3 errors: 1 missing title and 2 client errors (4XX). One of these client errors is the 404 page itself! What's your suggestion about this error? Should a 404 page return the 404 HTTP status? I'd like to hear your opinion on this one! Thanks all!
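A quick way to verify the status code yourself, sketched with Python's standard library; "example.com" is a placeholder for the site in question. A custom error page served with status 200 is a "soft 404" and looks like a normal page to crawlers.

```python
# Sketch: report the HTTP status a URL actually returns, so you can
# confirm the custom 404 page is served with status 404 rather than 200.
# "example.com" is a placeholder.
import urllib.error
import urllib.request

def status_of(url: str) -> int:
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        # urlopen raises for 4xx/5xx responses; the code is on the exception.
        return e.code

# e.g. status_of("https://example.com/definitely-not-a-page")
# should return 404 if the error page is configured correctly.
```

An error page should indeed return the 404 status; that is also why it shows up in the crawl's 4XX bucket, which in this case is expected behaviour rather than a problem to fix.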