Can Googlebot read the content on our homepage?
-
Just for fun I ran our homepage through this tool:
http://www.webmaster-toolkit.com/search-engine-simulator.shtml
This spider seems to detect little to no content on our homepage. Interior pages seem to be just fine. I think this tool is pretty old. Does anyone here have a take on whether or not it is reliable? Should I just ignore the fact that it can't seem to spider our home page?
Thanks!
-
Thanks all! Yes, I was familiar with the "Text-only" version and the Fetch as Googlebot, so I wasn't overly concerned. It just seemed odd that this particular spider couldn't get to the content. I think it is a very unsophisticated spider!
-
Assuming you've verified your site in Google Webmaster Tools, you can go in there and navigate to Crawl > Fetch as Googlebot. Enter that page's URL and have Googlebot fetch it. Once it's done, you can click on the "Success" link, which will show you exactly what Googlebot fetched for that page. Make sure the source code you're seeing there is what you expect.
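Outside of Webmaster Tools, you can roughly approximate this check by requesting the page with Googlebot's user-agent string and inspecting the raw HTML you get back. A minimal sketch (note it does not execute JavaScript, so it only mirrors the initial HTML response, and the URL used below is a placeholder):

```python
# Fetch a page while identifying as Googlebot, so you can compare the
# HTML served to crawlers with what you see in a browser.
import urllib.request

GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

def build_googlebot_request(url):
    """Return a Request that identifies itself as Googlebot."""
    return urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})

def fetch_as_googlebot(url):
    """Return the raw HTML the server sends to a Googlebot user-agent."""
    with urllib.request.urlopen(build_googlebot_request(url)) as resp:
        return resp.read().decode("utf-8", errors="replace")
```

If the HTML returned this way is missing your content, the page likely depends on JavaScript to render it or serves different markup to bots.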
-
Hi Dana,
We would normally check with something like Website Auditor. I've run the tool on our home page and it also seems to miss some parts of our content; I'm not sure why. We've never had an issue with other tools, though, so I'd put it down to this tool.
Hope that helps.
-
Take a look at the text-only cached version of the page. If you are unsure how to do that follow my crude instructions below.
What I do to test if Googlebot can view the content of my homepage:
Do a Google search for 'site:example.com' and find your homepage. Next to the green URL in the SERP listing for your homepage there is a green arrow. Click that and select 'cached'. Then, when viewing the cached version of the homepage, click 'Text-only version' in the bottom right corner of the grey bar that appears at the top of the browser.
If the content you are questioning shows up, Google has most likely been able to crawl and index it. If the content is not there, there is a good chance they can't. Note that if the content is in a hidden div, it will likely still not show up in the text-only cache.
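You can also jump straight to the text-only cache by URL rather than clicking through the SERP. The URL pattern below is an assumption based on how the cache links were constructed at the time (`strip=1` requests the text-only version), so treat it as a sketch:

```python
# Build a direct link to Google's text-only cached copy of a page.
from urllib.parse import quote

def text_only_cache_url(page_url):
    """Return the text-only cache URL for a page (strip=1 drops CSS/images)."""
    return ("http://webcache.googleusercontent.com/search?q=cache:"
            + quote(page_url, safe="") + "&strip=1")
```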
Related Questions
-
Duplicate Content being reported
Hi, I have a new client whose first MA crawl report is showing lots of duplicate content. The main batch of these are all the homepage URL with an 'attachment' part at the end, such as: www.domain.com/?attachment_id=4176 As far as I can tell it's some sort of slideshow just showing a different image in the main frame of each page, with no other content. Each one does have a unique meta title & H1, though. What's the best thing to do here? Leave it as is (not a problem), use the parameter handling tool in GWT, canonicalise referencing the homepage, or some other solution? Many thanks, Dan
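One way to scope the problem before choosing between parameter handling and canonicalisation is to flag exactly which crawled URLs carry the attachment parameter. A small sketch (the `attachment_id` parameter name is taken from the question above):

```python
# Flag WordPress-style attachment URLs that duplicate the homepage.
from urllib.parse import urlsplit, parse_qs

def is_attachment_url(url):
    """True if the URL carries an attachment_id query parameter."""
    return "attachment_id" in parse_qs(urlsplit(url).query)
```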
Technical SEO | Dan-Lawrence -
Homepage 301 and SEO Help
Hi All, Does redirecting alternate versions of my homepage with a 301 only improve reporting, or are there SEO benefits as well? We recently changed over our servers and this wasn't set up as before, and I've noticed a drop in our organic search traffic, i.e. there was no 301 sending mywebsite.com traffic to www.mywebsite.com. Thanks in advance for any comments or help.
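For what it's worth, the 301 does more than clean up reporting: it consolidates inbound link equity onto one hostname, and the redirect rule should preserve path and query string rather than just sending everything to the homepage. A sketch of the mapping such a rule implements (mywebsite.com is the placeholder domain from the question):

```python
# Compute the www target a bare-domain request should 301 to,
# preserving path and query so inbound links keep their value.
from urllib.parse import urlsplit, urlunsplit

def www_redirect_target(url, canonical_host="www.mywebsite.com"):
    """Return the canonical www URL, or None if already canonical."""
    parts = urlsplit(url)
    if parts.netloc == canonical_host:
        return None  # already on the canonical host; no redirect needed
    return urlunsplit((parts.scheme, canonical_host,
                       parts.path, parts.query, parts.fragment))
```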
Technical SEO | b4cab -
How to Handle Subdomains with Irrelevant Content
Hi Everyone, My company is currently doing a redesign for a website and in the process of planning their 301 redirect strategy, I ran across several subdomains that aren't set up and are pointing to content on another website. The site is on a server that has a dedicated IP address that is shared with the other site. What should we do with these subdomains? Is it okay to 301 them to the homepage of the new site, even though the content is from another site? Should we try to set them up to go to the 404 page on the new site?
Technical SEO | PapercutInteractive -
Can up a page
I do my best to optimize the on-page parameters of my page www.lkeria.com/AADL-logement-Algerie.php for the keyword "aadl", but I can't understand what I'm doing wrong (the page disappeared from the rankings two months ago). The page is optimized (title, description, h1, h2, etc.) and has a few links with different anchors, but Google puts a spammy site, www[dot]aadl[dot]biz, in the top 3 rather than my page. Can you give me some advice to fix this issue? What am I doing wrong? Thanks in advance
Technical SEO | lkeria -
How different does content need to be to avoid a duplicate content penalty?
I'm implementing landing pages that are optimized for specific keywords. Some of them are substantially the same as another page (perhaps 10-15 words different). Are the landing pages likely to be identified by search engines as duplicate content? How different do two pages need to be to avoid the duplicate penalty?
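Google doesn't publish a threshold, but you can get a feel for how similar two pages look by measuring the overlap of their k-word shingles. This is a sketch of how near-duplicate detection works in general, not Google's actual algorithm:

```python
# Rough near-duplicate score: Jaccard overlap of k-word shingles.
def shingles(text, k=3):
    """Return the set of k-word sequences in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b, k=3):
    """Return overlap in [0, 1]: 1.0 means identical shingle sets."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)
```

Two pages that differ by only 10-15 words out of several hundred will score very close to 1.0, which is exactly the kind of overlap duplicate filters are designed to catch.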
Technical SEO | WayneBlankenbeckler -
While SEOMoz currently can tell us the number of linking c-blocks, can SEOMoz tell us what the specific c-blocks are?
I know it is important to have a diverse set of c-blocks, but I don't know how it is possible to have a diverse set if I can't find out what the c-blocks are in the first place. Also, is there a standard for domain linking c-blocks? For instance, I'm not sure if a certain amount is considered "average" or "above-average."
Technical SEO | Todd_Kendrick -
The Bible and Duplicate Content
We have our complete set of scriptures online, including the Bible at http://lds.org/scriptures. Users can browse to any of the volumes of scriptures. We've improved the user experience by allowing users to link to specific verses in context which will scroll to and highlight the linked verse. However, this creates a significant amount of duplicate content. For example, these links: http://lds.org/scriptures/nt/james/1.5 http://lds.org/scriptures/nt/james/1.5-10 http://lds.org/scriptures/nt/james/1 All of those will link to the same chapter in the book of James, yet the first two will highlight the verse 5 and verses 5-10 respectively. This is a good user experience because in other sections of our site and on blogs throughout the world webmasters link to specific verses so the reader can see the verse in context of the rest of the chapter. Another bible site has separate html pages for each verse individually and tends to outrank us because of this (and possibly some other reasons) for long tail chapter/verse queries. However, our tests indicated that the current version is preferred by users. We have a sitemap ready to publish which includes a URL for every chapter/verse. We hope this will improve indexing of some of the more popular verses. However, Googlebot is going to see some duplicate content as it crawls that sitemap! So the question is: is the sitemap a good idea realizing that we can't revert back to including each chapter/verse on its own unique page? We are also going to recommend that we create unique titles for each of the verses and pass a portion of the text from the verse into the meta description. Will this perhaps be enough to satisfy Googlebot that the pages are in fact unique? They certainly are from a user perspective. Thanks all for taking the time!
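Alongside the sitemap, one option is to point every verse-highlight URL at its chapter with a rel=canonical tag; since the verse reference is just a numeric suffix, the canonical target can be derived mechanically. A sketch assuming the URL patterns shown in the question:

```python
# Derive the chapter URL a verse-highlight URL should canonicalize to.
import re

def chapter_canonical(url):
    """Strip a trailing verse reference like .5 or .5-10 from the URL."""
    return re.sub(r"\.\d+(-\d+)?$", "", url)
```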
Technical SEO | LDS-SEO