Unsolved URL crawl reports showing drastically different results: is there something wrong?
I'm a bit at a loss here. I ran a URL crawl report at the end of January on a website (https://www.welchforbes.com/). There were no major critical issues at the time. No updates have been made to the website (that I'm aware of), but when I ran another crawl on March 14, the report was short about 90 pages and suddenly showed a ton of 403 errors. I ran the crawl again on March 15 to check whether there was a discrepancy, and that report crawled even fewer pages and returned completely different results again.
Is there a reason the results differ from report to report? Is there something about the reports that I'm not understanding, or is there a serious issue with the website that needs to be addressed?
Jan. 28 results:
March 14 results:
March 15 results:
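One way to narrow this down before digging into the site itself: intermittent 403s that change from crawl to crawl are frequently the signature of a firewall, CDN, or rate limiter challenging the crawler rather than of genuinely broken pages. Below is a minimal sketch (assuming Python with the requests library; the rogerbot user-agent string is approximate, so check Moz's crawler documentation for the exact value) that re-requests a sample of flagged URLs under different user agents:

```python
# Re-request a few URLs the crawl flagged as 403 under a browser-like and a
# crawler-like User-Agent, to see whether the 403s depend on who is asking.
import requests

URLS = [
    "https://www.welchforbes.com/",
    # ...add pages that the crawl report flagged as 403...
]

USER_AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    # Approximate rogerbot string; verify against Moz's documentation.
    "crawler": "rogerbot/1.2 (+https://moz.com/help/moz-procedures/crawlers/rogerbot)",
}

for url in URLS:
    for label, ua in USER_AGENTS.items():
        resp = requests.get(url, headers={"User-Agent": ua}, timeout=10)
        print(f"{resp.status_code}  {label:8}  {url}")
```

If the crawler-like user agent consistently gets 403s while the browser-like one gets 200s, the host is blocking or throttling bots, which would also explain why each crawl covers a different subset of pages.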
Related Questions
Unsolved 403 errors for assets which work fine
Hi,
I am facing an issue with our Moz Pro account. We have images stored in S3 buckets, e.g. https://assets2.hangrr.com/v7/s3/product/151/beige-derby-cotton-suit-mb-2.jpg. Hundreds of such images show up as 403s in the Link Opportunities Top Pages tool, but all of these images work fine and return status 200. I can't seem to solve this. Thanks.
Moz Tools | | Skites2
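A quick way to reproduce what a crawler may be seeing: a 403-for-bots but 200-for-browsers split usually comes from the bucket or CDN treating request methods or user agents differently. A minimal sketch (Python with the requests library assumed; the bot user-agent string is hypothetical, used only for comparison):

```python
# Compare how the S3 asset responds to GET vs HEAD and to a browser-like vs
# bot-like User-Agent. Some S3/CDN configurations return 403 for HEAD requests
# or for unfamiliar user agents even though a normal browser GET returns 200.
import requests

url = "https://assets2.hangrr.com/v7/s3/product/151/beige-derby-cotton-suit-mb-2.jpg"
agents = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "generic-bot": "MyCrawler/1.0",  # hypothetical bot UA for comparison
}

for label, ua in agents.items():
    for method in ("HEAD", "GET"):
        r = requests.request(method, url, headers={"User-Agent": ua}, timeout=10)
        print(f"{r.status_code}  {method:4}  {label}")
```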
Google Search Console - Excluded Pages and Multiple Properties
I have used Moz to identify keywords that are ideal for my website and optimized different pages for them, but unfortunately rankings for some of those pages have declined. Since I am working with an ecommerce site, I read that having a lot of Excluded pages in Google Search Console was to be expected, so I initially ignored them. However, some of the pages I was trying to optimize are listed there, especially under the 'Crawled - currently not indexed' and 'Discovered - currently not indexed' sections. I have read this page (link: https://moz.com/blog/crawled-currently-not-indexed-coverage-status) and plan on focusing on Steps 5 & 7, but wanted to ask if anyone else has had experience with these issues. Also, does anyone know if having multiple properties (https vs http, www vs no www) can negatively affect a site? For example, could a sitemap from one property overwrite another? Would removing one property from the Console have any negative impact on the site? I plan on asking these questions on a Google forum, but I wanted to add them here in case anyone has any insights. Thank you very much for your time,
Forest
SEO Tactics | | ForestGT
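On the multiple-properties question, a quick check is whether all four host variants 301 to a single canonical origin; if they do, the duplicate properties are mostly a reporting artifact rather than a ranking problem. A minimal sketch (Python with the requests library assumed; example.com stands in for the real domain):

```python
# Fetch all four host variants and confirm they redirect to one canonical
# origin. Only that origin actually serves content; the rest should 301 to it.
import requests

VARIANTS = [
    "http://example.com/",
    "http://www.example.com/",
    "https://example.com/",
    "https://www.example.com/",
]

for url in VARIANTS:
    r = requests.get(url, allow_redirects=True, timeout=10)
    hops = " -> ".join(h.headers.get("Location", "?") for h in r.history) or "(no redirect)"
    print(f"{url}\n  final: {r.url}  status: {r.status_code}\n  hops: {hops}")
```

If any variant serves content directly instead of redirecting, that is where a stray sitemap or conflicting signals could come from.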
I have over 3000 4xx errors on my site for pages that don't exist! Please help!
Hello! I have a new blog that is only 1 month old, and I already have over 3,000 4xx errors, which I've never had on my previous blogs. I ran a crawl on my site, and it's showing my social media links as if they were pages. For example, my blog post link is:
https://www.thebloggersincentive.com/blogging/get-past-a-creative-block-in-blogging/
My site is then creating a link like the one below:
https://www.thebloggersincentive.com/blogging/get-past-a-creative-block-in-blogging/twitter.com/aliciajthomps0n
But these are not real pages, and I have no idea how they got created. I then paid someone to index the links because I was advised to by Moz, but it's still not working. All the errors are the same: it's indexing my Twitter account and my Pinterest. Can someone please help? I'm really at a loss with it.
Technical SEO | | thebloggersi
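For what it's worth, URLs of this shape are the classic symptom of a social link written without its scheme (href="twitter.com/..." instead of href="https://twitter.com/..."): browsers and crawlers treat the scheme-less value as a relative path and resolve it against the current page. A short illustration using Python's standard library:

```python
# Demonstrate how a scheme-less href resolves relative to the page URL,
# producing exactly the phantom 4xx URLs seen in the crawl report.
from urllib.parse import urljoin

page = "https://www.thebloggersincentive.com/blogging/get-past-a-creative-block-in-blogging/"

print(urljoin(page, "twitter.com/aliciajthomps0n"))
# -> https://www.thebloggersincentive.com/blogging/get-past-a-creative-block-in-blogging/twitter.com/aliciajthomps0n

print(urljoin(page, "https://twitter.com/aliciajthomps0n"))
# -> https://twitter.com/aliciajthomps0n  (scheme present, resolves correctly)
```

Searching the theme or widget settings for hrefs that start with twitter.com or pinterest.com rather than https:// would confirm the cause.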
Are these Search Console crawl errors a major concern for a new client site?
We recently (4/1) went live with a new site for a client of ours. The client site was originally on Point2 before they made the switch to a template site with Real Estate Webmasters. Now when I look in Search Console I see the following crawl errors: 111 server errors (photos), 104 soft 404s (blogs, archives, tags), and 6,229 not found (listings). I have a few questions. I don't know much about the server errors, so I generally ignore them; my main concerns are the soft 404s and the not-found errors. The soft 404s are mostly tags and blog archives, and I wonder if I should leave them alone or 301 each to /blog. The not-found errors are all previous listings from the IDX. My assumption is these will naturally fall away after some time, as the new ones have already been indexed, but I wonder what I should be doing here and which errors are actually affecting me. When we launched the new site there was a large spike in clicks (a 250% increase), which has now tapered off to an average of ~85 clicks versus ~160 at launch. Not sure if the crawl errors have any effect; I'm guessing not much right now. I'd appreciate your insights, Mozzers!
Reporting & Analytics | | localwork
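On the soft 404s, it can help to confirm which of the flagged archive/tag URLs actually return a 404 status versus a 200 with "not found" content, since the fix differs (a 301 to /blog versus leaving a genuine 404 to drop out on its own). A minimal sketch (Python with the requests library assumed; the URLs and the "not found" phrases are placeholders for the real Search Console export and the site's own wording):

```python
# Distinguish real 404s from "soft 404s": pages that say "not found" in the
# body but still return HTTP 200, which is what Search Console flags.
import requests

urls = [
    # placeholder old archive/tag/listing URLs from the Search Console export
    "https://example.com/blog/tag/waterfront/",
    "https://example.com/listings/12345/",
]

for url in urls:
    r = requests.get(url, timeout=10)
    looks_missing = ("not found" in r.text.lower()
                     or "no longer available" in r.text.lower())
    if r.status_code == 200 and looks_missing:
        verdict = "soft 404 (consider a real 404/410 or a 301)"
    else:
        verdict = str(r.status_code)
    print(f"{verdict:45}  {url}")
```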
How can I remove parameters from the GSC URL blocking tool?
Hello Mozzers! My client's previous SEO company blindly blocked a number of parameters using the GSC URL blocking tool. This has now caused Google to stop crawling many pages on my client's website, and I am not sure how to remove these blocked parameters so the pages can be crawled and reindexed by Google. The crawl setting is set to "Let Googlebot decide," but there has still been a drop in the number of pages being crawled. Can someone please share their experience and help me delete these blocked parameters from GSC's URL blocking tool? Thank you Mozzers!
Reporting & Analytics | | Vsood
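While waiting for the parameter settings to take effect, one way to verify whether Googlebot has resumed crawling the affected URLs is to count its requests per query parameter in the server access logs. A minimal sketch (Python standard library only; assumes a common/combined Apache-style log format and a placeholder log path):

```python
# Count which query parameters Googlebot is actually requesting, so you can
# see crawling resume (or not) after the parameter settings are cleared.
import re
from collections import Counter
from urllib.parse import urlsplit, parse_qs

param_counts = Counter()
with open("access.log") as fh:  # placeholder path to the server access log
    for line in fh:
        if "Googlebot" not in line:
            continue
        m = re.search(r'"(?:GET|HEAD) (\S+)', line)
        if not m:
            continue
        for param in parse_qs(urlsplit(m.group(1)).query):
            param_counts[param] += 1

for param, n in param_counts.most_common():
    print(f"{n:6}  {param}")
```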
Does Google encryption of keyword data impact SEO revenue reporting in Google Analytics?
Hi there, I know Google has been encrypting keyword data, which they rolled out in September 2013. My question is: will this impact the SEO revenue figures reported in Google Analytics? I have been monitoring SEO revenue figures for a client, and they are significantly down even though rankings have not dropped. Is this because of Google's encryption? Could there be another reason? Many thanks!
Reporting & Analytics | | CayenneRed89
The curse of (not provided) data....
Good morning from 23 degrees C Wetherby, UK 🙂 Do you ever get the impression Google doesn't like SEO practitioners? The thing is, the (not provided) entry in the keyword analytics data is a complete pain in the arse. Yes, you can go into Webmaster Tools and get a feel for organic keyword data, but the joy stops abruptly when you need a full picture of traffic acquisition for a specific keyword. So my question is, please: when a client says, "Give me traffic data acquired from an organic phrase," how on earth can you give an accurate answer? And to add salt to the wound, the reported traffic data is going to be lower, so your SEO efforts are going to take a hit. Is the answer to use another analytics service?
Many thanks,
David
Reporting & Analytics | | Nightwing
Tagging URLs, link building, and anchor links
Hi, I am going to publish a press release on a number of different websites. First and foremost, I want to build anchor links back to the website for specific keywords. Secondly, I want to measure clickthroughs from each site using parameter tracking in GA. I want to know: if I put in a URL with ?utm_source=xxx, will this have any impact on my link-building efforts? That is, will search engines attribute the keyword to the long URL with tracking or to the URL without tracking? I understand that everything from the ? mark onward is ignored; however, I just want to double-check before I publish the release. Thanks for your help. Mik
Reporting & Analytics | | increation
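As a side note on construction, building the tagged URL programmatically keeps the base URL intact before the ?, and a rel=canonical on the landing page pointing at the bare URL is the usual safeguard so that links to the tagged and untagged versions consolidate. A minimal sketch (Python standard library; example.com and the campaign values are placeholders):

```python
# Append UTM parameters to a landing-page URL without disturbing the base URL.
from urllib.parse import urlencode

base = "https://example.com/press-release/"  # placeholder landing page
params = {"utm_source": "xxx", "utm_medium": "pr", "utm_campaign": "launch"}

tagged = f"{base}?{urlencode(params)}"
print(tagged)
# -> https://example.com/press-release/?utm_source=xxx&utm_medium=pr&utm_campaign=launch
```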