Crawling issue
-
Hello,
I am working on a new Magento website that is 3 weeks old. In GWT, under Index Status > Advanced, I can see only 1 crawl, on the 4th day after launch, and I don't see any numbers for indexed or blocked status.
| Total indexed | Ever crawled | Blocked by robots | Removed |
| --- | --- | --- | --- |
| 0 | 1 | 0 | 0 |

I can see the traffic in Google Analytics, and I can see the website in the SERPs when I search for some of the keywords. The links appear on Google, but I don't see any numbers in GWT. As far as I can tell there is no 'noindex' or robots block issue, but Google doesn't crawl the website for some reason.
Any ideas why I cannot see any numbers for indexed or crawled status in GWT?
Thanks
Seda
-
Thanks Davenport and Everett. I've got an XML sitemap submitted already and have checked robots.txt, noindex, etc., but no stats yet. I'll wait a few more weeks, but it just doesn't make sense to not get any stats after a month. Meanwhile, if I figure out anything, I'll reply here.
-
The data in GWT is not always updated regularly. Also, for a new site that has never been indexed before and has no, or few, external links, it would not be surprising to experience infrequent crawls. The more links you earn and the more of a history of fresh content and updated pages you develop, the more often and deeply you'll be crawled.
As Davenport-Tractor mentioned, an XML sitemap submitted to GWT will also help if you haven't done that already.
If most of your pages are indexed when you do a (site:yourdomain.com) search on Google, I wouldn't worry about it too much. If they aren't indexed, you may have a problem, such as inadvertently blocking the crawlers via a robots meta tag or robots.txt file. I'd have to see the site to know that, though.
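For reference, these are the two kinds of accidental blocks worth ruling out. The snippets below are a hypothetical sketch of what a block looks like, not anything taken from Seda's site:

```
# robots.txt at the site root — "Disallow: /" tells crawlers to skip the entire site
User-agent: *
Disallow: /
```

```html
<!-- A robots meta tag like this in a page's <head> keeps that page out of the index -->
<meta name="robots" content="noindex, nofollow">
```

If neither of these appears anywhere (and Magento's "Discourage search engines"-style settings are off), the crawl-stats lag in GWT is more likely just reporting delay on a brand-new site.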
-
Seda,
Have you submitted a sitemap to GWMT?
That will greatly help the Google spiders crawl your site. It's kind of like telling someone how to find your business verbally vs. providing them a road map. They will get there a whole lot quicker if you provide a map showing how to find all the different locations.
There are quite a few different sitemap generator programs available. These programs will crawl your site and build the sitemap.xml file for you. You can then save the file to your website's root directory and point GWMT to the sitemap.
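To illustrate, a bare-bones sitemap.xml only needs a urlset with one url entry per page. The domain and paths below are placeholders, not the actual site's URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want Google to discover -->
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2015-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/some-category/some-product</loc>
  </url>
</urlset>
```

Save it as sitemap.xml in the web root, then submit its URL in the Sitemaps section of GWMT so Google knows where to find it.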
Related Questions
-
XML sitemap generator only crawling 20% of my site
Hi guys, I am trying to submit the most recent XML sitemap but the sitemap generator tools are only crawling about 20% of my site. The site carries around 150 pages and only 37 show up on tools like xml-sitemaps.com. My goal is to get all the important URLs we care about into the XML sitemap. How should I go about this? Thanks
-
Content Strategy/Duplicate Content Issue, rel=canonical question
Hi Mozzers: We have a client who regularly pays to have high-quality content produced for their company blog. When I say 'high quality' I mean 1000 - 2000 word posts written to a technical audience by a lawyer. We recently found out that, prior to the content going on their blog, they're shipping it off to two syndication sites, both of which slap rel=canonical on them. By the time the content makes it to the blog, it has probably appeared in two other places. What are some thoughts about how 'awful' a practice this is? Of course, I'm arguing to them that the ranking of the content on their blog is bound to be suffering and that, at least, they should post to their own site first and, if at all, only post to other sites several weeks out. Does anyone have deeper thinking about this?
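For anyone unfamiliar with the mechanics: the canonical the syndication sites are adding is just a link element in the page's <head>, and the usual arrangement for syndicated copies is to have that element point back at the client's own blog post rather than at the syndication site's URL. The URL below is made up for illustration:

```html
<!-- On the syndicated copy, pointing back to the original post on the client's blog -->
<link rel="canonical" href="https://www.clientblog.example.com/original-post/">
```

If the syndication sites' canonicals point to themselves instead, the client's blog version is the one most likely to be treated as the duplicate.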
-
Any issues with having a separate shop section?
I've had a bit of a dilemma over whether to go for a full ecommerce site or have a separate shop section. My main goal is to push our installation services, so I've decided to go with the latter option. The main categories will focus solely on installation services, and then I'll have a separate category which will take the customer to mydomain.com/shop, where we'll have our products for them to buy and fit themselves. The only issue I see is that I'm going to have two pages competing against each other. They'll both have different content, but one will focus on us installing a particular product and the other will focus on the customer buying it to fit themselves. Will it make things more difficult to rank, or won't it make a difference?
-
Best way to link to 1000 city landing pages from the index page in a way that Google follows/crawls these links (without building country pages)?
Currently we have direct links to the top 100 country and city landing pages on our index page of the root domain.
I would like to add, on the index page, a "more cities" link for each country which then dynamically loads (without reloading the page and without redirecting to another page) a list of links to all cities in that country.
I do not want to dilute "link juice" to my top 100 country and city landing pages on the index page.
I would still like Google to be able to crawl and follow these links to cities that I load dynamically later. In this particular case a typical site hierarchy of country pages with links to all cities is not an option. Any recommendations on how best to implement?
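One common pattern that keeps dynamically revealed links crawlable is to render the full list of city links as plain anchors in the initial HTML and only toggle their visibility with JavaScript, rather than fetching them via AJAX after a click. A rough sketch with made-up URLs and IDs:

```html
<!-- "More cities" links are in the HTML from the start, so crawlers can follow them;
     JavaScript only toggles their visibility for users -->
<a href="#" onclick="document.getElementById('more-cities-de').style.display='block'; return false;">
  More cities
</a>
<ul id="more-cities-de" style="display: none;">
  <li><a href="/germany/berlin">Berlin</a></li>
  <li><a href="/germany/munich">Munich</a></li>
  <!-- one plain link per city -->
</ul>
```

Whether initially hidden links pass as much weight as visible ones is debatable, so this sketch addresses crawlability more than the link-equity concern.
-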
Removing a massive number of noindex, follow pages that are not crawled
Hi, We have stackable filters on some of our pages (i.e., ?filter1=a&filter2=b, etc.). Those stacked filter pages are "noindex, follow". They were created in order to facilitate the indexation of the items listed on them. After analysing the logs, we know that the search engines do not crawl those stacked filter pages. Would blocking those pages (by loading their links via AJAX, for example) help our crawl rate or not? In other words, does removing links that are already not crawled help the crawl rate of the rest of our pages? My assumption here is that search engines see those links but discard them because those pages are too deep in our architecture, and that by removing them we would help the search engines focus on the rest of our pages. We don't want to waste our effort removing those links if there will be no impact. Thanks
-
I have an authority site with 90K visits per month. Now I have to change from non-www to www. Will I incur any SEO issues while doing that? Could you please advise me on the best steps to follow? Thank you very much!
Because I want to increase site speed, Siteground (my hosting provider) suggested I use Cloudflare Plus, which needs my site to use www in order to work. I'm also on cloud hosting. I'm a bit scared of doing this and thus decided to come to the community. I've used Moz for over 6 months now and love the tool. Please help me make the best possible decisions and work out what steps to follow. It would be much appreciated. Thank you!
-
Crawl budget
I am a believer in this concept: showing Google fewer pages will increase their importance. Here is my question: I manage a website with millions of pages and high organic traffic (lower than before). I believe that too many pages are crawled. There are pages that I do not need Google to crawl and follow. Noindex, follow does not save on the mentioned crawl budget, and deleting those pages is not possible. Any advice will be appreciated. If I disallow those pages, I am missing out on pages that help my important pages.
-
Working out exactly how Google is crawling my site if I have loooots of pages
I am trying to work out exactly how Google is crawling my site, including entry points and its path from there. The site has millions of pages and hundreds of thousands indexed. I have simple log files with a timestamp and the URL that Googlebot was on. Unfortunately there are hundreds of thousands of entries even for one day, and as it is a massive site I am finding it hard to work out the spider's paths. Is there any way, using the log files and Excel or other tools, to work this out simply? Also, I was expecting the bot to go through each level almost instantaneously, e.g. main page --> category page --> subcategory page (expecting the same timestamp), but this does not appear to be the case. Does the bot follow a path right through to the deepest level it can reach (or is allowed to) for that crawl and then return to the higher-level category pages at a later time? Any help would be appreciated. Cheers