Crawling issues in Google
-
Hi everyone,
I think I have crawling issues with one of my sites.
It has vanished from the Google rankings. It used to rank for all of the services I offer, but it hasn't since September 29th.
I have resubmitted the site to Google twice, and they came back with the same answer both times:
"
We reviewed your site and found no manual actions by the web spam team that might affect your site's ranking in Google. There's no need to file a reconsideration request for your site, because any ranking issues you may be experiencing are not related to a manual action taken by the webspam team.
Of course, there may be other issues with your site that affect your site's ranking. Google's computers determine the order of our search results using a series of formulas known as algorithms. We make hundreds of changes to our search algorithms each year, and we employ more than 200 different signals when ranking pages. As our algorithms change and as the web (including your site) changes, some fluctuation in ranking can happen as we make updates to present the best results to our users.
If you've experienced a change in ranking which you suspect may be more than a simple algorithm change, there are other things you may want to investigate as possible causes, such as a major change to your site's content, content management system, or server architecture. For example, a site may not rank well if your server stops serving pages to Googlebot, or if you've changed the URLs for a large portion of your site's pages. This article has a list of other potential reasons your site may not be doing well in search.
"
How I detected that it may be a crawling issue: two weeks ago I changed my meta tags, and they have been very slow to update in the search results. For some of my pages they never updated at all.
Do you know any good tools to check for bad code that could slow down crawling? I really don't know where else to look. I validated the website with the W3C validator, ran Xenu, and cleaned up the issues they found, but my site is still down.
Any ideas are appreciated.
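One rough, tool-independent way to see how pages respond to a crawler is to request them with a Googlebot-style user agent and compare status codes and download times, which is roughly what the Fetch as Googlebot report shows. The sketch below is only an illustration of that idea: the URLs are placeholders and Python's requests library is an assumption, not part of the original setup.
<code>
# Minimal sketch: report status code and download time for a few pages,
# using a Googlebot-style user agent. The URLs are placeholders.
import requests

PAGES = [
    "http://www.example.com/",           # placeholder - replace with real pages
    "http://www.example.com/services/",  # placeholder
]

HEADERS = {
    "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
}

for url in PAGES:
    try:
        resp = requests.get(url, headers=HEADERS, timeout=10)
    except requests.RequestException as exc:
        print(f"{url} -> request failed: {exc}")
        continue
    # resp.elapsed is the time between sending the request and receiving the response headers
    ms = resp.elapsed.total_seconds() * 1000
    print(f"{url} -> HTTP {resp.status_code}, {ms:.0f} ms, {len(resp.content)} bytes")
</code>
Any status other than 200, or download times far above what Fetch as Googlebot reports, would be worth investigating before worrying about on-page code.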
-
I ran another one:
Date: Saturday, November 10, 2012 4:52:04 PM PST
Googlebot Type: Web
Download Time (in milliseconds): 94
<code>
HTTP/1.1 200 OK
Date: Sun, 11 Nov 2012 00:52:04 GMT
</code>
-
Hi,
I already did that a while ago... I still think I have a JavaScript issue.
I have already done the GWMT fetch and the W3C validation, but none of that helped.
Date: Monday, October 22, 2012 5:00:59 PM PDT
Googlebot Type: Web
Download Time (in milliseconds): 115
<code>
HTTP/1.1 200 OK
Date: Tue, 23 Oct 2012 00:00:59 GMT
</code>
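On the JavaScript concern: if the titles or meta descriptions are written into the page by JavaScript rather than served in the raw HTML, Googlebot (which at the time did not reliably execute JavaScript) would never see the updated values, which would explain meta tags that never refresh. A minimal check along those lines, assuming a placeholder URL and Python's requests library, is simply to look for the tags in the un-rendered source:
<code>
# Minimal sketch: does the raw (un-rendered) HTML already contain the title
# and meta description, or are they only injected later by JavaScript?
import re
import requests

URL = "http://www.example.com/"  # placeholder URL

html = requests.get(URL, timeout=10).text

title = re.search(r"<title[^>]*>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
meta_desc = re.search(r'<meta[^>]+name=["\']description["\'][^>]*>', html, re.IGNORECASE)

print("title in raw HTML:       ", title.group(1).strip() if title else "NOT FOUND")
print("meta description in HTML:", "found" if meta_desc else "NOT FOUND")
</code>
If the tags only appear after the page runs in a browser, that alone would explain metas that never update in the search results.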
-
Hey Cary,
The first thing I would do is go into Google Webmaster Tools -> Health -> Fetch as Googlebot.
Now, under Fetch Status you should see "Success" if Google managed to fetch the page. Click on it and check the first few rows; you should see something like:
<code>
HTTP/1.1 200 OK
Date: Fri, 10 Nov 2012 00:26:17 GMT
</code>
If you see a different response, post it as a reply to this question so we can investigate further with you.
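If you want to double-check the same response outside of Webmaster Tools, something like the sketch below can help. It is only an illustration: the URL is a placeholder and Python's requests library is assumed. It fetches the page with a Googlebot user agent and also prints the header and tag that can keep a page out of the index even when it returns 200 OK:
<code>
# Minimal sketch: reproduce a Fetch as Googlebot style check and look for
# headers/tags that can hide a page even when the response is 200 OK.
import re
import requests

URL = "http://www.example.com/"  # placeholder URL

resp = requests.get(
    URL,
    headers={"User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"},
    timeout=10,
)

noindex = re.search(r'<meta[^>]+name=["\']robots["\'][^>]*noindex', resp.text, re.IGNORECASE)

print("Status:      ", resp.status_code)
print("Date:        ", resp.headers.get("Date"))
print("X-Robots-Tag:", resp.headers.get("X-Robots-Tag", "not set"))
print("meta robots noindex present:", bool(noindex))
</code>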
Gr.,
Istvan