Why does my crawl report show just one page result?
-
I just ran a crawl report on my site, http://dozoco.com. The report shows results for just one page - the home page - and no others. It doesn't indicate any errors or "do not follows," so I'm unclear on the issue, although I suspect user error - mine.
-
Thanks Sha. The content is "ours" - at least insofar as we've pulled it from retailer sites and/or affiliate networks and modified it to fit our needs... so not entirely ours, but not a pure duplicate either. We do operate a fundraising site that shares the content, which is something I hadn't considered until now... we'll have to decide how to handle the duplication across the two sites. That said, the rest of your points are well taken and appreciated. We'll do some further research into the JavaScript points and determine how best to handle them.
-
Thanks Keri - very helpful.
-
Hi William,
As indicated on the help page Keri provided, the problem is that the page is rendered entirely in JavaScript, and the SEOmoz crawler does not follow JavaScript links or redirects.
The reason the SEOmoz crawler doesn't attempt this is most likely that Google's (and other search engines') stated position is only that they are "getting better" at handling JavaScript; the likelihood of trouble-free crawling by Googlebot is still low, or at the very least unknown.
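To illustrate the distinction (a generic sketch, not dozoco.com's actual markup, and the loadStore helper below is hypothetical): a crawler can queue the destination of a plain HTML anchor, but a "link" whose destination exists only inside a JavaScript handler gives it nothing to follow.

```html
<!-- Crawlable: the destination is declared in the href attribute -->
<a href="/store/us-pets">US Pets</a>

<!-- Not crawlable by crawlers that don't execute scripts: there is no
     href to discover; the navigation happens entirely in JavaScript -->
<span class="nav-item" onclick="loadStore('us-pets')">US Pets</span>

<script>
  // Hypothetical helper: updates the hash and renders content client-side
  function loadStore(storeId) {
    window.location.hash = '#!/store/' + storeId;
    // ...fetch and render the store page without a full page load...
  }
</script>
```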
Bing now has an option in its Webmaster Tools that lets you indicate that JavaScript crawling is required for a site. I have not seen any information on how effective this is as yet, but you could investigate that by asking in their help forum.
Even if search engines manage to crawl the JavaScript without issue, there are other significant problems with the content on the site. It appears that the site is a multi-affiliate white label? All of the text is actually being pulled in from an external page, and that page contains content that is duplicated across many other websites. This is the case with every "page".
Unfortunately, all of these things add up to a fairly bad SEO situation. Your best option for generating traffic would be to become massively popular through social channels and use them to feed traffic to the site. That is assuming the white label platform does not give you the option to create your own content (which would be much better).
Another alternative would be to create a site on a new domain with awesome, unique, shareable content and links that feed traffic to this one. But if you are going that route, making people take an extra click through a second domain on the way to the retailer's site would not be optimal for conversions, so it would be better to add direct affiliate links within those pages.
So, on the whole, I would say that ramping up your social activity is your best approach.
Hope this helps,
Sha
-
Here's a post from the help desk with a couple of reasons for that. http://seomoz.zendesk.com/entries/409821-why-isn-t-my-site-being-crawled-you-only-crawled-one-page. If that doesn't take care of the problem for you, email help@seomoz.org and they'll work with you on getting the rest of the site crawled.
I'm looking at a site:dozoco.com search in Google and all the URLs I see look like http://dozoco.com/#!/store/us-pets. The #! may be the cause of the problem; I'm not exactly sure how Roger deals with crawling that.
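For context on those #! URLs (an aside on the general scheme, not a confirmation of how Roger or dozoco.com handles it): under Google's AJAX crawling scheme of the time, a hashbang signalled that a crawler could request an HTML snapshot of the page at an equivalent _escaped_fragment_ URL, which the server was expected to serve. Roughly:

```html
<!-- Pretty URL the browser sees (everything after # stays client-side):
       http://dozoco.com/#!/store/us-pets

     URL a crawler supporting the AJAX crawling scheme would request,
     expecting a pre-rendered HTML snapshot in response:
       http://dozoco.com/?_escaped_fragment_=/store/us-pets

     Pages without #! in the URL opted in with this tag in the <head>: -->
<meta name="fragment" content="!">
```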
Related Questions
-
What to include in my report?
Hi everybody, I'm not an SEO expert, just an e-marketing trainee, and I have to create a smart SEO report template for a real estate company. I don't know what to include in this personal monthly report: what are the key pieces of information? Should I include Google Analytics data, and if so, which data? Sorry for my English; it's not my mother tongue. Thanks for your answers.
Moz Pro | grafmiville
-
Too many on-page links
I received a warning in my most recent report for too many on-page links on the following page: http://www.fateyes.com/blog/. I can't figure out why this would be: I count between 60 and 70, including all pull-downs, "read mores", archive, category, and a few additional miscellaneous links. Any ideas or suggestions on this, or on what I might do to rectify it? Perhaps it's just an SEOmoz report blip... We currently don't have the post list rolling over to additional pages, so it's kind of passively set up to be endless, but that's in the works.
Moz Pro | gfiedel
-
Erroneous "Weekly Keyword Ranking & On-page Optimization Report" For Campaign
Hi, I just received an email alert from SEOmoz telling me my "Weekly Keyword Ranking & On-page Optimization Report" for the period 11/06/12 - 11/13/12 is ready. It is just a copy of the previous report, though; all rankings and ranking changes are the same. What is up with that? Best regards, Martin
Moz Pro | TalkInThePark
-
Why does Crawl Diagnostics report this as duplicate content?
Hi guys, we've been addressing a duplicate content problem on our site over the past few weeks. We've implemented rel=canonical tags in various parts of our ecommerce store over time, observing the effects by tracking changes in both SEOmoz and Webmaster Tools. Although our duplicate content errors are definitely decreasing, I can't help but wonder why some URLs are still being flagged with duplicate content by the SEOmoz crawler. Here's an example, taken directly from our Crawl Diagnostics report.
URL with 4 duplicate content errors: /safety-lights.html
Duplicate content URLs:
/safety-lights.html?cat=78&price=-100
/safety-lights.html?cat=78&dir=desc&order=position
/safety-lights.html?cat=78
/safety-lights.html?manufacturer=514
What I don't understand is that all of the URLs with URL parameters have a rel=canonical tag pointing to the 'real' URL, /safety-lights.html (a sketch of such a tag follows this question). So why is the SEOmoz crawler still flagging this as duplicate content?
Moz Pro | yacpro13
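A minimal sketch of the kind of canonical tag described in the question above (the store's domain isn't shown, so example.com stands in):

```html
<!-- In the <head> of a parameterized variant such as
     /safety-lights.html?cat=78&dir=desc&order=position -->
<link rel="canonical" href="http://www.example.com/safety-lights.html">
```
-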
Duplicate content pages
The Crawl Diagnostics Summary shows around 15,000 duplicate content errors for one of my projects. It lists each page along with how many duplicate pages there are for it, but I don't have a way of seeing what the duplicate page URLs for a specific page are without clicking each page link and checking manually, which is going to take forever to sort. When I export the list as CSV, the duplicate_page_content column doesn't show any data. Can anyone please advise on this? Thanks.
Moz Pro | nam2
-
Too Many On-Page Links: Crawl Diag vs On-Page
I've got a site I'm optimizing that has thousands of "too many links on-page" warnings from the SEOmoz crawl diagnostics. I've been in there and realized that there are indeed too many - the rent is too damned high - and it's due to a header/left/footer category menu that's repeating itself. So I changed these links to nofollow, cutting my total links by about 50 per page. I was too impatient to wait for a new crawl, so I used the On-Page Reports to see if anything would come up on the Internal Link Count/External Link Count factors, and nothing did. However, the crawl (eventually) came back with the same warning. I looked at the link count in the crawl details and realized that it's basically counting every single <a href> element on the page. Because of this, I guess my questions are twofold:
1. Is nofollow a valid strategy to reduce link count for a page? (Obviously not for the SEOmoz crawler, but for Google.) See the markup sketch after this question.
2. What metric does the On-Page Report use to determine if there are too many internal/external links?
Apologies if this has been asked; the search didn't seem to come up with anything specific to this.
Moz Pro | icecarats
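As a generic illustration for question 1 above (not the asker's actual menu markup): a nofollow attribute changes what the link asks engines to do with it, but the anchor element is still present in the HTML, which is why link-counting tools can still see it.

```html
<!-- A normal menu link -->
<a href="/category/widgets">Widgets</a>

<!-- The same link marked nofollow: still an <a href> on the page, so
     counting tools still find it; the attribute only asks engines not
     to pass endorsement/equity through it -->
<a href="/category/widgets" rel="nofollow">Widgets</a>
```
-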
Why are inbound links not showing up?
I'm new to SEOmoz but have a question regarding inbound links that I don't see posted in the forum. In order to become more familiar with the SEOmoz tools, I've been checking out sites that friends and family members have created, as practice. Things have been going really smoothly until I came across a 2+ year-old page that should show an inbound link from wsj.com, but said link is not appearing in OSE for this page. Background: a friend of mine has a (basically) defunct blog that had a pretty well trafficked posting in 2009. However, when I use OSE to check out both the domain and page inbound links, I don't see the aforementioned inbound link from wsj.com. Why is that? Or - it's insanely late - am I missing something? Friend's blog posting: http://bcclist.com/2009/04/21/craigslist-killer-megan-philipcom-removed/ WSJ posting with a link to my friend's blog (4th paragraph... anchor text = "taken down"): http://blogs.wsj.com/digits/2009/04/21/who-is-megan-mcallister/ No rush. Again, I'm doing this as practice and, being new to the site, I figure I'm overlooking something. Any feedback would be greatly appreciated. Thanks!
Moz Pro | ICM
-
SEOmoz says I have errors but Google Webmaster doesn't show them - which one is right?
I have about 350 websites, all created on the FarCry 4.0 CMS platform. When I do a site crawl using any SEO tool (SEOmoz, Raven, Screaming Frog), it comes back telling me I have duplicate titles, descriptions, and content for a bunch of my pages. The pages are the same page; it's just that the crawl is showing the object ID URL and the friendly URL auto-created in the CMS as different pages. For example, these are the same page but are recognised as different in the SEOmoz crawl test, and are therefore flagged as having duplicate title tags and content:
www.westendautos.com.au/go/latest-news-and-specials
www.westendautos.com.au/index.cfm?objectid=9CF82BBD-9B98-B545-33BC644C0FA74C8E
Google Webmaster Tools, however, does not show me these errors - it shows no errors at all. Now, I believe I can fix this by chucking a rel=canonical at the top of each page (a big job over 350 sites). But even so, my problem is that the website developers are telling me that SEOmoz and all the other tools are wrong - that Google will see these the way it should, and that the object IDs would not get indexed (although I have seen at least one object ID show up in the SERPs). Do I believe the developers and trust that Google has it sorted, or go through the process of hassling the developers to get a rel=canonical added to all the pages? (The issue sees my homepage as about four different pages: www.domain.com/, www.domain.com/home, /index, and the object ID.)
Moz Pro | cassi
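For what it's worth, the rel=canonical approach described in the question above would look roughly like this on the object-ID version of the example page (a sketch only; whether it's worth rolling out across 350 sites, versus trusting Google to sort it out, is exactly the question being asked):

```html
<!-- In the <head> of the object-ID URL:
     www.westendautos.com.au/index.cfm?objectid=9CF82BBD-9B98-B545-33BC644C0FA74C8E -->
<link rel="canonical"
      href="http://www.westendautos.com.au/go/latest-news-and-specials">
```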