Googlebot size limit
-
Hi there,
There is about 2.8 KB of JavaScript above the content of our homepage. I know it isn't desirable, but is this something I need to be concerned about?
Thanks,
Sarah
Update: It's fine. Ran a Fetch as Google and it's rendering as it should be. I would delete my question if I could figure out how!
-
Agreed. Besides, maybe someone (a newbie like me!) with the same question could see how I figured it out, then try it on their own. Or someone can see what I did and say "wait, that's not right ... ".
I think it comes from my mentality of not wanting to waste people's time on questions I've found the answer to - but, yes, we wouldn't want to punish the people putting time into answering, especially when it can help someone else. Thanks for bringing that up, Keri!
-
I would agree. A delete option is not necessary.
-
Roger is very reluctant to delete questions, and feels that in most cases it's not TAGFEE to do so. Usually by the time the original poster wants to delete a question, there are multiple responses, and deleting the question would also remove the effort the other community members have put in to answer it, as well as the opportunity for other people to learn from the experience.
-
Haven't figured that one out either :). Apparently Roger Mozbot does not like questions being deleted, only edited. :)
Related Questions
-
404 to 301 redirects: is there a limit?
Hi, we've just updated our website and have binned a lot of old thin content which has no value even if rewritten. We have a lot of 404 errors in WMT and I am in the process of setting up 301 redirects for them. Is there a limit to the number of 301s the site should have?
Technical SEO | Cocoonfxmedia
-
Does adding subcategory pages to an ecommerce site limit the link juice to the product pages?
I have a client who has an online outdoor gear company. He mostly sells high-end outdoor gear (like ski jackets, vests, boots, etc.) at a deep discount. His store currently only resides on eBay, so we're building him an online store from scratch. I'm trying to determine the best site architecture and wonder if we should include subcategory pages. My issue is that I think the subcategory pages might be good for user experience, but they'll add an additional layer between the homepage and the product pages. The problem is that I think a lot of users might be searching for the product name to see if they can find a better deal, and my client's site would be perfect for them. So I really want to rank well for the product pages, but I'm nervous that the subcategory pages will limit the link juice reaching them.
Home --> SubCategory --> Product List --> Product Detail
Home --> Men's Ski Clothing --> Men's Ski Jackets --> North Face Mt Everest Jacket
Should I keep the SubCategory page "Men's Ski Clothing" if it helps usability? On a separate note, the subcategory pages would have some head keyword terms, but I don't think that he could rank well for these terms anytime soon. However, they would be great pages / terms to rank for in the long term. Should this influence the decision?
Technical SEO | Santaur
-
Googlebot does not obey robots.txt disallow
Hi Mozzers! We are trying to get Googlebot to steer away from our internal search results pages by adding a parameter "nocrawl=1" to facet/filter links and then disallowing all URLs containing that parameter in robots.txt. We implemented this in late August, and since then the GWMT message "Googlebot found an extremely high number of URLs on your site" stopped coming. But today we received yet another. The weird thing is that Google gives many of our now robots.txt-disallowed URLs as examples of URLs that may cause us problems. What could be the reason? Best regards, Martin
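A minimal sketch of how that kind of parameter-based block is usually written, assuming the nocrawl parameter can appear anywhere in the query string (the example URL below is hypothetical). Note that a robots.txt disallow only stops crawling: URLs Google discovered before the rule was added can still appear in reports or stay indexed from links alone.

User-agent: *
# Block any URL whose query string contains the nocrawl flag
# (wildcard matching is supported by Googlebot; check other crawlers before relying on it)
Disallow: /*nocrawl=1
# Hypothetical example of a facet URL this rule is meant to cover:
# /search?color=red&nocrawl=1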
Technical SEO | TalkInThePark
-
Googlebot cannot access your site
"At the end of July I received a message in my Google webmaster tools saying "Googlebot can't access your site" We checked our robots.txt file and removed a line break in it, and then I had Google Fetch the file again. I have not received any more messages since then. When we created the website I wrote all of the content and optimized each page for about 1 local keyword. A few weeks after I checked my keywords and did have a few on the first page of google. Since then almost all of them have completely disappeared. Because we had not link building effort I would not expect to still be on the first page, but I should definitely be seeing them before the 5th or even 10th page of Google. The address is http://www.tile-pompanobeach.com I'm not sure if these horrible results have something to do with the message from Google or something else. The problem is this client now wants to sign a contract with us for SEO and I really have no Idea what happened and if I will be able to figure it out. The main keyword for my home page is tile pompano beach and I aslo was using Pompano Beach Tile store for the About page which was previously on the first page of Google. Does anyone have some input?
Technical SEO | DTOSI
-
Why is either Rogerbot or (if it is the case) Googlebot not recognizing keyword usage in my body text?
I have a client that does liposuction as one of their main services. They have been ranked in the top 1-5 for their keyword "sarasota liposuction", with different variations of the words, for a long time, and suddenly have dropped about 10-12 places down to #15 in the engine. I went to investigate this and came to the on-page analysis tool in SEOmoz Pro, where oddly enough it says that there is no mention of the target keyword in the body content (on-page analysis tool screenshot attached). I didn't quite understand why it would not recognize the obvious keywords in the body text, so I went back to the page and inspected further. The keywords have an odd featured link that points to an internally hosted keyword glossary with definitions of terms that people might not know. These definitions pop up in a lightbox upon clicking the keyword (liposuction lightbox screenshots attached). I have no idea why Google would not recognize these words, as the text sits right inside the link; but if there is something wrong with the code syntax etc., might it hinder the engine from seeing the body text of the link? Any help would be greatly appreciated. Thank you so much!
Technical SEO | jbster13
-
Images on page appear as 404s to Googlebot
When I fetch my website as Googlebot it returns 404s for all the images on the page. This despite the fact that each image is hyperlinked! What could be causing this issue? Thanks!
Technical SEO | Netpace
-
Why is Googlebot indexing one page, but not the other?
Why is Googlebot indexing one page but not the other under the same conditions (in an HTML sitemap, for example)? We have 6 new pages with unique content. Googlebot immediately indexes only 2 pages, and then after some time the remaining 4. What parameters does the crawler use to decide whether or not to crawl a given page?
Technical SEO | ATCnik
-
Trying to reduce pages crawled to within 10K limit via robots.txt
Our site has far too many pages for our 10K-page PRO account, and most of them are not SEO-worthy. In fact, only about 2000 pages qualify for SEO value. Limitations of the store software only permit me to use robots.txt to sculpt the rogerbot site crawl. However, I am having trouble getting this to work. Our biggest problem is the 35K individual product pages and the related shopping cart links (at least another 35K); these aren't needed, as they duplicate the SEO-worthy content in the product category pages. The signature of a product page is that it is contained within a folder ending in -p. So I made the following addition to robots.txt:
User-agent: rogerbot
Disallow: /-p/
However, the latest crawl results show the 10K limit is still being exceeded. I went to Crawl Diagnostics and clicked on Export Latest Crawl to CSV. To my dismay I saw the report was overflowing with product page links, e.g. www.aspenfasteners.com/3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/rv006-316x039354-coan.htm. The value for the column "Search Engine blocked by robots.txt" is FALSE; does this mean blocked for all search engines? Then it's correct. If it means blocked for rogerbot, then it shouldn't even be in the report, as the report seems to only contain 10K pages. Any thoughts or hints on trying to attain my goal would REALLY be appreciated, I've been trying for weeks now. Honestly - virtual beers for everyone! Carlo
Technical SEO | AspenFasteners
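One possible explanation for why the rule above is not matching: robots.txt Disallow values are prefix matches anchored at the start of the URL path, so "Disallow: /-p/" only blocks paths that literally begin with "/-p/", not folders such as /3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/ that merely end in -p. A sketch of a broader pattern, assuming the crawler honors the * wildcard (Googlebot does; confirm Rogerbot's wildcard support before relying on it):

User-agent: rogerbot
# Match any folder whose name ends in -p, wherever it sits in the path
Disallow: /*-p/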