Questions created by friendoffood
Home page links -- Ajax When Too Many?
My home page has links to major cities. If someone chooses a specific city, I want to give them the option to pick a suburb within that city. With, say, 50 cities and 50 suburbs per city, that's 2,500 links on the home page. To avoid that many links on the home page (or any page), I would like to list just the 50 cities and pull up the suburbs with an Ajax call that search engines would not read or crawl. This would be better than clicking a main city, landing on the city page, and only then choosing a suburb -- better to do it all at once. Is it a bad idea to load the subregions via Ajax on the home page and to code it so Google, Bing, and other search engines don't crawl or even see anything on the home page related to the suburbs? The search engines will still find the suburb links because they are listed on each main city page.
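To make the idea concrete, here is a minimal sketch of the click-triggered load I have in mind (the /suburbs endpoint, the markup, and the URL pattern are made-up examples, not my actual code); because the request only fires on a user click, a crawler that merely renders the home page never sees the suburb links:

```typescript
// Hypothetical sketch: suburbs are fetched only after a user clicks a city,
// so they never appear in the home page's crawlable HTML.
async function showSuburbs(citySlug: string): Promise<void> {
  // /suburbs is an assumed endpoint returning e.g. ["clayton", "ferguson", ...]
  const response = await fetch(`/suburbs?city=${encodeURIComponent(citySlug)}`);
  const suburbs: string[] = await response.json();

  const list = document.getElementById('suburb-list');
  if (!list) return;
  list.innerHTML = suburbs
    .map(s => `<a href="/coupons/${s}">${s}</a>`) // example link pattern only
    .join(' ');
}

// Wire each city link to load its suburbs instead of navigating away.
document.querySelectorAll<HTMLAnchorElement>('a.city-link').forEach(link => {
  link.addEventListener('click', event => {
    event.preventDefault();
    void showSuburbs(link.dataset.city ?? '');
  });
});
```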
Local Website Optimization | friendoffood
Upper to Lower Case and Endless Redirects
I have a site that first got indexed about a year ago for several months. I shut it down and opened it up again about 5 months ago. Two months ago I discovered that the uppercase letters in pages like www.site.com/some-detail/Bizname/product were a no-no. There are no backlinks to these pages yet, so for the search engines I put in 301 redirects to the lowercase versions, thinking that after a few weeks Bing and Google would figure it out and no longer try to crawl them. FYI, there are thousands of these pages, and they are dynamically created.

Well, two months later Google is still crawling the uppercase URLs, even though it appears that only the lowercase ones are in the index (based on a site:www.site.com/some-detail search). Bing is also crawling the uppercase URLs, although I'm not seeing any of the uppercase pages, and only a small percentage of the lowercase ones show up using a site:www.site.com/details.... command.

Assuming there are no backlinks, will they eventually stop crawling those uppercase pages? If not, again assuming there are no backlinks, should I 410 the uppercase pages, or will that remove any credit I'm getting for the page having existed for over a year prior to changing upper to lower case?
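To illustrate the kind of 301 I put in -- this is just a sketch, not my actual code, and it assumes an Express-style stack:

```typescript
import express from 'express';

const app = express();

// Illustrative middleware: 301 any URL containing uppercase letters
// to its all-lowercase equivalent, preserving the query string.
app.use((req, res, next) => {
  const lower = req.path.toLowerCase();
  if (req.path !== lower) {
    const query = req.url.slice(req.path.length); // e.g. "?page=2"
    res.redirect(301, lower + query);
    return;
  }
  next();
});
```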
Intermediate & Advanced SEO | friendoffood
Is it OK to have Search Engines Skip Ajax Content Execution?
I recently added some Ajax calls that automatically fill in small areas of my site when a page loads -- the user doesn't have to click anything. That means when Google and Bing crawl the site, the Ajax is executed too. However, my understanding is that this does not mean Google and Bing are also crawling the Ajax content. I would actually prefer that the content be neither executed NOR crawled by them.

In Bing's case I would prefer that the content not even be executed, because indications are that the program exits the Ajax page for Bing -- Bing isn't retaining the session variables that the page uses -- which makes me concerned that when that happens Bing may not even be able to crawl the main content. I don't know for sure, but Ajax execution seems potentially risky for normal crawling in this case.

I would like to simply have my program skip the Ajax execution for Google and Bing by recognizing them in the user agent and using an 'if robot, skip ajax' approach. I assume I could put the Ajax program in the robots.txt file, but that wouldn't keep Bing from executing it (and hitting the exit problem mentioned above). It would be simpler to just have them skip the Ajax execution altogether.

Is that OK, or is there a chance the search engines will penalize my site if they find out (somehow) that I have different logic for them than for actual users? In the past this surely was not a concern, but I understand that Google is increasingly trying to behave like a browser, so it may increasingly have a problem with this approach. Thoughts?
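To make the question concrete, here is a minimal sketch of the 'if robot, skip ajax' idea -- the bot patterns and the page-rendering function are illustrative assumptions, not my production code:

```typescript
// Illustrative user-agent check; real bot detection lists are longer.
function isKnownBot(userAgent: string | undefined): boolean {
  return /googlebot|bingbot|bingpreview/i.test(userAgent ?? '');
}

// Hypothetical server-side rendering step: only emit the auto-running
// Ajax snippet for real visitors, so crawlers never execute it.
function renderPage(userAgent: string | undefined, bodyHtml: string): string {
  const autoAjax = isKnownBot(userAgent)
    ? '' // crawlers: skip the script entirely
    : '<script>fetch("/fill-in-panel").then(r => r.text())' +
      '.then(html => { document.getElementById("panel").innerHTML = html; });</script>';
  return `<html><body>${bodyHtml}<div id="panel"></div>${autoAjax}</body></html>`;
}
```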
Intermediate & Advanced SEO | friendoffood
Pages are Indexed but not Cached by Google. Why?
Here's an example. I get a 404 error for this:

http://webcache.googleusercontent.com/search?q=cache:http://www.qjamba.com/restaurants-coupons/ferguson/mo/all

But a search for "qjamba restaurant coupons" gives a clear result, as does this:

site:http://www.qjamba.com/restaurants-coupons/ferguson/mo/all

What is going on? How can this page be indexed but not in the Google cache? I should make clear that the page is not showing any kind of error in Webmaster Tools, and Google has been crawling my pages just fine. This particular page was fetched by Google yesterday with no problems, and was even crawled twice more today. Yet, no cache.
Intermediate & Advanced SEO | friendoffood
When does Google index a fetched page?
I have seen Google index one of my pages within 5 minutes of fetching, but I have also read that it can take a day. I'm on day 2 and it appears that it still has not re-indexed 15 pages that I fetched. I changed the meta description on all of them and added content to nearly all of them, but none of those changes show up when I do a site:www.site/page search. I'm trying to test changes this way, so it is important for me to know WHEN a fetched page has been indexed, or at least IF it has. How can I tell what is going on?
Intermediate & Advanced SEO | friendoffood
BingPreview/1.0b User Agent Adding Trailing Slash to All URLs
The BingPreview crawler, which I think exists to take snapshots of mobile-friendly pages, crawled my pages last night for the first time. However, it is adding a trailing slash to the end of each of my dynamic URLs. The result is that my program serves the wrong page -- it is not expecting a trailing slash at the end of the URL. It was 160 pages this time, but I have thousands of pages it could do this to. I could try a mod_rewrite rule, but that seems like it should be unnecessary: ALL the other crawlers are crawling the proper URLs, and none of my hyperlinks have a slash on the end. I have written to Bing to tell them about the problem. Is anyone else having this issue? Any other suggestions for what to do?

The user agent is: Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465 Safari/9537.53 BingPreview/1.0b
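If I do end up handling it in code instead of a rewrite rule, I'm picturing something like this sketch (Express-style for illustration only; the stack is an assumption):

```typescript
import express from 'express';

const app = express();

// Illustrative: 301 "/some-page/" to "/some-page" so a crawler that
// appends a trailing slash still lands on the canonical URL.
app.use((req, res, next) => {
  if (req.path.length > 1 && req.path.endsWith('/')) {
    const query = req.url.slice(req.path.length);
    res.redirect(301, req.path.slice(0, -1) + query);
    return;
  }
  next();
});
```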
Intermediate & Advanced SEO | friendoffood
Why won't rogerbot crawl my page?
How can I find out why rogerbot won't crawl an individual page I give it for page-grader? Google, Bing, and Yahoo all crawl my pages just fine, but when I put one of my internal pages into page-grader to check for keywords, it gave me an F. It isn't really crawling the page, because the keyword IS in the title and it says it isn't. How do I diagnose the problem?
Getting Started | friendoffood
Attack of the dummy URLs -- what to do?
It occurs to me that a malicious program could set up thousands of links to dummy pages on a website:

www.mysite.com/dynamicpage/dummy123
www.mysite.com/dynamicpage/dummy456
etc.

How is this normally handled? Does a developer have to check all the parameters to see if they are valid and, if not, automatically return a 301 redirect or a 404 Not Found? That requires a table lookup of acceptable URL parameters for every new visitor. I was thinking that bad URL names would be rare, so it would be OK to just stop the program with a message -- until I realized someone could intentionally set up links to nonexistent pages on a site.
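A minimal sketch of the lookup-then-404 idea I'm describing (Express-style for illustration; the slug list and route are made up):

```typescript
import express from 'express';

const app = express();

// Hypothetical: the set of valid slugs would really come from the database.
const validSlugs = new Set(['ferguson', 'clayton', 'kirkwood']);

app.get('/dynamicpage/:slug', (req, res) => {
  if (!validSlugs.has(req.params.slug)) {
    // Unknown parameter: return 404 so dummy URLs never get indexed.
    res.status(404).send('Page not found');
    return;
  }
  res.send(`Real content for ${req.params.slug}`);
});
```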
Intermediate & Advanced SEO | friendoffood
Do 404s really 'lose' link juice?
It doesn't make sense to me that a 404 causes a loss of link juice, although that is what I've read. What if you have a page that is legitimate -- think of a merchant-oriented page where you sell an item for a given merchant -- and then the merchant closes its doors? It makes little sense five years later to still have that merchant page, so why would removing it from your site hurt the site in any way? I could redirect forever, but that makes little sense. What makes sense to me is keeping the page for a while with an explanation and options for 'similar' products, and then eventually returning a 404. I would think the page eventually dropping out of the index actually reduces overall link juice anyway (i.e., fewer pages), so there is no harm in using a 404 this way. It is also a way to keep the site from getting bigger and bigger and delivering more and more 'bad' user experiences over time. Am I looking at it wrong?

P.S. I've included this in 'link building' because it is related in a sense -- link 'paring'.
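To be concrete, the staged approach I have in mind might look roughly like this sketch (the merchant list, grace period, and routes are all made up for illustration):

```typescript
import express from 'express';

const app = express();

// Hypothetical data: when each merchant closed, in milliseconds since epoch.
const retiredMerchants = new Map<string, number>([
  ['acme-widgets', Date.parse('2014-06-01')],
]);
const GRACE_PERIOD_MS = 1000 * 60 * 60 * 24 * 365; // keep the explanation page for a year

app.get('/merchant/:name', (req, res) => {
  const closedAt = retiredMerchants.get(req.params.name);
  if (closedAt === undefined) {
    res.send(`Live merchant page for ${req.params.name}`);
  } else if (Date.now() - closedAt < GRACE_PERIOD_MS) {
    // Recently closed: keep a useful page with links to similar products.
    res.send('This merchant has closed. Here are similar offers: ...');
  } else {
    // Long gone: tell crawlers the page is permanently removed.
    res.status(410).send('Gone');
  }
});
```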
Intermediate & Advanced SEO | friendoffood
Is it better to find a page without the desired content, or not find the page?
Are there any studies that show which is best? If visitors find my page but not the specific thing they want on it, they may still find something of value. But if they don't, they may associate my site with poor results, which can be worse than finding what they want on a competitor's site. In other words, maybe it is best to have pages that ONLY and ALWAYS contain the desired content. What do the studies suggest?

I'm asking because I have content that exists maybe 1/3 of the time and doesn't the other 2/3 -- think 'out of stock' products. So I'm wondering whether I should look into removing the page from the index during that 2/3, or keep it. If I remove it, my concern is whether I lose the history/age factor that I've read Google considers important for credibility. Your thoughts?
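If I did go the de-indexing route, I assume it would amount to conditionally emitting a robots noindex tag, something like this sketch (the in-stock flag and the markup are illustrative, not my actual templates):

```typescript
// Illustrative: emit noindex only while the product is unavailable.
function robotsMetaTag(inStock: boolean): string {
  return inStock ? '' : '<meta name="robots" content="noindex">';
}

function renderProductPage(name: string, inStock: boolean): string {
  return `<html><head>${robotsMetaTag(inStock)}<title>${name}</title></head>` +
         `<body>${inStock ? 'Buy now' : 'Currently out of stock'}</body></html>`;
}
```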
Search Behavior | friendoffood
Alt tag for src='blank.gif' on lazy load images
I didn't find an answer searching on this, so maybe someone here has faced it before. I am loading the 20 images that are in or just below the viewport, and I want to 'lazy-load' the next 80. The crawler therefore sees them as a blank.gif file. However, I would like to get some credit for them by giving each a description in its alt attribute. Is that a no-no? If not, do they all have to have the same alt description, since the src name is the same? I don't want to mess things up with Google by being too aggressive, but at the same time those are valid images once they are lazy-loaded, so I would like to get some credit for them. Thanks! Ted
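For context, the swap I'm doing looks roughly like this sketch -- written here with IntersectionObserver for brevity, which may differ from my actual implementation; the file names and class are made up, and each image keeps its own descriptive alt even while src still points at the placeholder:

```typescript
// Markup sketch (one per image, each with its own descriptive alt):
//   <img src="blank.gif" data-src="pizza-coupon.jpg"
//        alt="Pizza coupon from a local restaurant" class="lazy">

// Swap in the real file once the image scrolls near the viewport.
const observer = new IntersectionObserver(entries => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    img.src = img.dataset.src ?? img.src; // replace the blank.gif placeholder
    observer.unobserve(img);
  }
}, { rootMargin: '200px' });

document.querySelectorAll<HTMLImageElement>('img.lazy').forEach(img => observer.observe(img));
```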
Intermediate & Advanced SEO | friendoffood
Removal tool - no option to choose mobile vs desktop. Why?
Google's removal tool doesn't give you the option to say which index -- mobile-friendly or desktop/laptop -- a URL should be removed from. Why?

I may have a fundamental misunderstanding. The way I thought it works is that when you have a page that is dynamically generated based on the user agent (i.e., the SAME URL but different formatting for smartphones than for desktop/laptop), the Google mobile bot indexes the mobile-friendly version and the desktop bot indexes the desktop version -- so Google has two different indexed results for the same URL. That SEEMS to be supported by the words 'mobile-friendly' appearing next to some of my mobile-friendly page descriptions on mobile devices.

HOWEVER, if that's how it works, why would Google not allow you to remove one of the URLs and keep the other? Is it because Google assumes the mobile version of a website must have all the same pages as the desktop version? What if it doesn't? What if a website is designed so that some of the slower pages simply aren't given a mobile version?

Is it possible that Google doesn't really store separate results for a mobile-friendly page when there is a corresponding desktop page, but only checks whether it renders OK? That is, it keeps only one indexed copy of each URL and basically assumes the mobile title and content are the same and only the formatting differs? That assumption isn't always true -- mobile devices lend themselves to different interactions with the user -- but it certainly could save Google billions of dollars in storage. Thoughts?
Intermediate & Advanced SEO | friendoffood
Indexing and Resolving to One www.domain.com format
People can reach a site www.domain.com in these 6 different ways:

http://www.domain.com, www.domain.com, http://domain.com, domain.com, https://www.domain.com, https://domain.com

Obviously we don't want Google to maintain an index for more than one of these. What is the right way to handle it? 301 redirects so that everything resolves to www.domain.com? Or is that overkill? Or 302 redirects? It seems like a pretty basic issue, but I'm not finding simple answers.
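A minimal sketch of the 301 approach, written as Express-style middleware for illustration; the canonical host and protocol here are just examples, and it assumes the app sits behind a proxy that reports the original scheme:

```typescript
import express from 'express';

const app = express();
app.set('trust proxy', true); // so req.protocol reflects the original scheme

const CANONICAL_HOST = 'www.domain.com';  // example canonical host
const CANONICAL_PROTOCOL = 'http';        // or 'https', whichever is canonical

// 301 every non-canonical host/protocol combination to the canonical one.
app.use((req, res, next) => {
  if (req.hostname !== CANONICAL_HOST || req.protocol !== CANONICAL_PROTOCOL) {
    res.redirect(301, `${CANONICAL_PROTOCOL}://${CANONICAL_HOST}${req.originalUrl}`);
    return;
  }
  next();
});
```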
Intermediate & Advanced SEO | friendoffood
Mobile Search Results Include Pages Meant Only for Desktops/Laptops
When I put site:www.qjamba.com into a mobile device, it comes back with some of my mobile-friendly pages for that site (same URL for mobile and desktop, just different formatting), and that's great. HOWEVER, it also shows a whole bunch of pages (not identified by Google as mobile-friendly) that are fine for desktop users but are not supposed to exist for mobile users, because they are too slow. Until a few days ago those pages were being redirected to the home page for mobile users; I have since changed that to 404 Not Founds.

Do we know whether Google keeps a mobile index separate from the desktop index? If so, I would think the 404s should work. How can I test whether the 404 Not Founds will remove a URL so it DOESN'T appear on a mobile device when I put in site:www.qjamba.com (or a user searches), but DOES still appear on a desktop for the same command?
Intermediate & Advanced SEO | friendoffood
HTTP Pages Indexed as HTTPS
My site used to be entirely HTTPS. I switched months ago, so all links on the pages the public has access to are now HTTP only. But I now see that when I do a site:www.qjamba.com search, the results include many pages that begin with https -- including the home page -- which is not what I want. I can redirect to HTTP, but that doesn't remove the https versions from the index, right? How do I solve this problem?

A sample of the results:

Qjamba: Free Local and Online Coupons, coupon codes ...
https://www.qjamba.com/
One and Done savings. Printable coupons and coupon codes for thousands of local and online merchants. No signups, just click and save.

Chicnova online coupons and shopping - Qjamba
https://www.qjamba.com/online-savings/Chicnova
Online Coupons and Shopping Savings for Chicnova. Coupon codes for online discounts on Apparel & Accessories products.

Singlehop online coupons and shopping - Qjamba
https://www.qjamba.com/online-savings/singlehop
Online Coupons and Shopping Savings for Singlehop. Coupon codes for online discounts on Business & Industrial, Service products.

Automotix online coupons and shopping - Qjamba
https://www.qjamba.com/online-savings/automotix
Online Coupons and Shopping Savings for Automotix. Coupon codes for online discounts on Vehicles & Parts products.

Online Hockey Savings: Free Local Fast | Qjamba
www.qjamba.com/online-shopping/hockey
Find big online savings at popular and specialty stores on Hockey, and more.

Hitcase online coupons and shopping - Qjamba
www.qjamba.com/online-savings/hitcase
Online Coupons and Shopping Savings for Hitcase. Coupon codes for online discounts on Electronics, Cameras & Optics products.

Avanquest online coupons and shopping - Qjamba
https://www.qjamba.com/online-savings/avanquest
Online Coupons and Shopping Savings for Avanquest. Coupon codes for online discounts on Software products.
Intermediate & Advanced SEO | friendoffood
Removing UpperCase URLs from Indexing
This search - site:www.qjamba.com/online-savings/automotix - gives me this result from Google:

Automotix online coupons and shopping - Qjamba
https://www.qjamba.com/online-savings/automotix
Online Coupons and Shopping Savings for Automotix. Coupon codes for online discounts on Vehicles & Parts products.

Google also tells me there is another result that is 'very similar'. When I click to see it, I get:

Automotix online coupons and shopping - Qjamba
https://www.qjamba.com/online-savings/Automotix
Online Coupons and Shopping Savings for Automotix. Coupon codes for online discounts on Vehicles & Parts products.

The duplicate exists because I recently changed my program to redirect all URLs containing uppercase letters to lowercase, since all-lowercase URLs appear to be strongly recommended. I assume that having two indexed URLs for the same content dilutes link juice. Can I safely remove all of my uppercase indexed pages from Google without affecting the indexing of the lowercase URLs? And if so, what is the best way -- there are thousands of them.
Intermediate & Advanced SEO | friendoffood
Website with only a portion being 'mobile friendly' -- what to tell Google?
I have a website for desktop that does a lot of things, and I have converted part of it to show pages in a mobile-friendly format based on the user's device. It's not responsive design, but actual different code with different formatting for mobile vs. desktop -- yet each version shares the same page URL. Google allows this approach.

The mobile-friendly part of the site is not as extensive as the desktop part, so there are pages that apply to desktop but not to mobile. The functionality is somewhat limited on mobile devices, and therefore some pages should only be indexed for desktop users. How should such a page be handled for Google's crawlers? If it returns a 404 Not Found for the mobile bot, will Google still properly crawl it for desktop, or will Google see that the URL was flagged as 'not found' and stop crawling it for desktop too? I asked a similar question yesterday, but it was not stated clearly. Thanks, Ted
Intermediate & Advanced SEO | friendoffood
Google crawling different content -- ever OK?
Here are a couple of scenarios I'm encountering where Google will crawl different content than my users see on their initial visit to the site -- and which I think should be OK. Of course, it normally is NOT OK; I'm here to find out whether Google is flexible enough to allow these situations:

1. My mobile-friendly site has users select a city, and then it displays a location-options div that includes an explanation of why they may want to let the program use their GPS location. The user must choose GPS, the entire city, a zip code, or a suburb of the city, and is then taken to the chosen link. On the other hand, the program is written so that a Google bot doesn't get just a meaningless 'choose further' page; instead the crawler sees the page of results for the entire city (as you would expect from the URL). So basically the program defaults to the entire-city results for Googlebot, but for the user it first offers the choice of using GPS (a rough sketch of the logic is below).

2. A user comes to mysite.com/gps-loc/city/results. Seeing the literal words 'gps-loc' in the URL, the site fetches the user's GPS location and returns results that depend on it. If Googlebot comes to that URL, there is no way the program will return the same results, because it can't get the same longitude and latitude as that user.

So, what do you think? Are these scenarios a concern for getting penalized by Google? Thanks, Ted
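A rough sketch of the scenario-1 logic -- the bot check and the rendering functions are illustrative stand-ins, not my actual code:

```typescript
// Illustrative stand-ins; the real site renders far more than this.
const looksLikeBot = (userAgent: string | undefined): boolean =>
  /googlebot|bingbot/i.test(userAgent ?? '');

function cityPage(city: string, userAgent: string | undefined): string {
  if (looksLikeBot(userAgent)) {
    // Crawler: skip the GPS/suburb chooser and show the whole-city results
    // that the URL implies.
    return `<h1>All ${city} coupons</h1> ...full city results...`;
  }
  // Real visitor: offer GPS, zip code, or suburb before showing results.
  return `<h1>${city}</h1><div id="location-options">Use my GPS location, enter a zip, or pick a suburb</div>`;
}
```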
Intermediate & Advanced SEO | friendoffood
What if a page exists for desktop but not mobile?
I have a domain (no subdomains) that serves different dynamic content for mobile and desktop, with each page having the exact same URL -- a kind of semi-responsive design -- and I will be using "Vary: User-Agent" to give Google a heads-up about this setup. However, some of the pages are only valid for mobile or only valid for desktop.

When a page is valid only for mobile (call it mysite.com/mobile-page-only), Google Webmaster Tools gives me a soft 404 error under Desktop, saying the page does not exist. Apparently it does that because my program is actually redirecting the user/crawler to the home page. From the information on soft 404 errors, Google seems to be saying that since the page "doesn't exist" I should serve a real 404 page -- which I can customize to give the user the option of going to the home page, choosing links from a menu, etc.

My concern is that if I tell the desktop bot that mysite.com/mobile-page-only is basically a 404 (i.e., doesn't exist), it could mess up the mobile bot's indexing of that page -- since it definitely DOES exist for mobile users. Does anyone here know for sure that Google will index a page for mobile that is a 404 Not Found for desktop, and vice versa? Obviously it is important not to remove something from an index where it belongs, so whether Google carefully differentiates the two is a very important issue. Has anybody here dealt with this, or seen anything from Google that addresses it? Might one be better off leaving it as a soft 404 error?

EDIT: Also, what about Bing and Yahoo? Can we assume they will handle it the same way?

EDIT: A closely related question -- in a case like mine, does Google need a separate sitemap for the valid mobile pages and the valid desktop pages, even though most links will be in both? I can't tell from reading several Q&As on this. Thanks, Ted
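To make the setup concrete, here is a rough sketch of what I'm describing -- the Vary header plus a real 404 (instead of a redirect home) when a desktop user agent requests a mobile-only URL; the mobile check and the route are illustrative assumptions:

```typescript
import express from 'express';

const app = express();

// Illustrative mobile check; real detection is more involved.
const isMobile = (userAgent: string | undefined): boolean =>
  /iphone|android|mobile/i.test(userAgent ?? '');

app.get('/mobile-page-only', (req, res) => {
  // Tell caches and crawlers that the response depends on the user agent.
  res.vary('User-Agent');
  if (!isMobile(req.get('User-Agent'))) {
    // Desktop: a real 404 with helpful links, rather than a redirect home.
    res.status(404).send('Page not found. <a href="/">Home</a>');
    return;
  }
  res.send('Mobile-only content');
});
```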
Intermediate & Advanced SEO | friendoffood