Crawl Rate for Lower Page Authority Websites
Hi,

At thumbtack.com we get tons of links from low (or no) Page Authority websites, and I'm wondering what the crawl rate of those links looks like. I know Google pulls in the web at an astonishing rate, but I'd imagine they aren't re-crawling lower-PA pages very frequently. Are they discovering these links a week after they're posted? A month? More?

I spent a while looking around for histograms of actual crawl rates and found surprisingly little. I'd love to see average crawl rate by Domain or Page Authority if that exists anywhere.
Thanks!

-Michael

P.S. Here are some random examples of the types of pages with inbound links I'm talking about. Normally we wouldn't spend much time thinking about these, but there are just so many of them that we can't ignore them!

- http://www.majestic-cleaners.webs.com/
- http://domchieraphotography.blogspot.com/
- http://charlottepiano.musicteachershelper.com/
- http://pin-upgirlphotography.vpweb.com/default.html
- http://jfaithful.weebly.com/
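Since published crawl-rate histograms by Domain or Page Authority are hard to come by, one option is to build one for your own site from server access logs. Here's a minimal sketch, assuming Apache/nginx combined-format logs (the sample lines below are hypothetical; in production you'd also want to verify Googlebot by reverse DNS rather than trusting the user-agent string):

```python
import re
from collections import defaultdict
from datetime import datetime

# Matches the combined log format: IP, timestamp, request line, status, size, referrer, user agent.
LOG_RE = re.compile(
    r'\S+ \S+ \S+ \[(?P<ts>[^\]]+)\] "(?:GET|HEAD) (?P<path>\S+)[^"]*" \d+ \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def googlebot_hits(lines):
    """Yield (path, datetime) for each request claiming to be Googlebot."""
    for line in lines:
        m = LOG_RE.search(line)
        if m and "Googlebot" in m.group("ua"):
            ts = datetime.strptime(m.group("ts"), "%d/%b/%Y:%H:%M:%S %z")
            yield m.group("path"), ts

def crawl_intervals(lines):
    """Days between successive Googlebot fetches of the same URL."""
    seen = defaultdict(list)
    for path, ts in googlebot_hits(lines):
        seen[path].append(ts)
    intervals = []
    for times in seen.values():
        times.sort()
        intervals += [(b - a).days for a, b in zip(times, times[1:])]
    return intervals

# Hypothetical sample log lines for illustration.
sample = [
    '66.249.66.1 - - [01/Mar/2014:10:00:00 +0000] "GET /page-a HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '66.249.66.1 - - [08/Mar/2014:10:00:00 +0000] "GET /page-a HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '10.0.0.5 - - [02/Mar/2014:10:00:00 +0000] "GET /page-a HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
]
print(crawl_intervals(sample))  # → [7]
```

Feeding a few weeks of logs through `crawl_intervals` and plotting the result gives exactly the kind of histogram you're asking about, at least for your own pages.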
I have a site that is four months old. Prior to today, the site had a Domain Authority of 0 and the home page had a PA of 1. I submitted a sitemap daily, and the site was crawled daily. Whenever I shared an important article, I would submit an extra sitemap and would see the content in the search results within a couple of hours. This is an active, forum-based site.
I have heard others complain that their sites are crawled very infrequently. I'm not sure whether Google treated my site well because it was newer, had good content, or had decent activity. I can only share my experience that the site was crawled quite frequently.
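For what it's worth, the "submit an extra sitemap" step above can be scripted. A minimal sketch, assuming a hypothetical example.com site and Google's sitemap ping endpoint (`http://www.google.com/ping?sitemap=...`); you'd fire a GET at the returned ping URL with `urllib` or similar after uploading the sitemap:

```python
import urllib.parse
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(urls):
    """Return a minimal sitemap XML document listing the given URLs."""
    urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
    for url in urls:
        loc = ET.SubElement(ET.SubElement(urlset, "url"), "loc")
        loc.text = url
    return ET.tostring(urlset, encoding="unicode")

def ping_url(sitemap_url):
    """URL to GET so Google re-fetches the sitemap."""
    return "http://www.google.com/ping?sitemap=" + urllib.parse.quote(sitemap_url, safe="")

# Hypothetical URLs for illustration.
xml_doc = build_sitemap(["http://example.com/", "http://example.com/new-article"])
print(ping_url("http://example.com/sitemap.xml"))
```

Submitting the sitemap through Webmaster Tools accomplishes the same thing; the ping endpoint just makes it easy to automate whenever new content goes live.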
Just checking the first site: it has only a few pages. It was built with basic site-creation software and seems crawlable. Small sites (around 10 pages) just don't change frequently, so they don't get crawled often. If the site owner doesn't submit a sitemap letting Google know a change has been made, it may be some time before Google decides to crawl the site or finds a link to it.