
Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.

Category: Intermediate & Advanced SEO

Looking to level up your SEO techniques? Chat through more advanced approaches.


  • Hello All, On my ecommerce landing pages, I currently have links to my products as H3 tags. I also have useful guides displayed on the page, with links to useful articles we have written (they currently go to my news section). I am wondering if I should put those article links as additional H3 tags for added SEO benefit, or do I have too many tags as it is? A link to the landing page I am talking about: http://goo.gl/h838RW Screenshot of my H1-H6 tags: http://imgur.com/hLtX0n7 I enclose a screenshot of my guides and also of my H1-H6 tags. Any advice would be greatly appreciated. Thanks, Peter

    | PeteC12
    0

  • Hi, I had to edit some of the URLs. But Google is still showing my old URLs in search results for certain keywords, which of course return 404s. Crawling with Screaming Frog, I get 301 and 'page not found' responses, and it is still reporting the old URLs. Why is that? And do I need to re-index the pages with the new URLs? Is 'Fetch as Google' enough to do that, or is there any other advice? Thanks a lot; hope the topic will help someone else too. Dusan

    | Chemometec
    0

  • Hello fellow Mozzers, We have two websites for two similar brands at my place of employment. The two brands currently serve slightly different products but could be held quite happily under one branded site. As part of a potential group merger into one sole brand, we will have to create one joined-up website which will then feature all our products. The newly merged site will also have more scope to allow us to expand our product range, whereas currently one brand is somewhat specific to a particular market due to its name. So as part of the merge, I have to consider the potential implications for our search traffic, as this is an integral part of our business. Brand A: older, more authoritative, great content, good organic positions (top 10 for pretty much all terms we favour). Brand B: younger, but has more marketing scope due to its name; still a good site with lots of content. Unfortunately Brand B has more in terms of potential lifespan but is currently the less authoritative of the two sites we run: it has lower DA and PR according to my Moz Analytics, a lower number of quality links and less content. In order to give the Brand B website the boost that is needed, and in effect replace Brand A in the SERPs where it has great organic positions, I need to make sure all bases are ticked for an action plan. So far this is what I have:
    - Transfer all existing Brand A web pages to the Brand B website.
    - Rel canonical all Brand A pages to now point to the Brand B website's new pages.
    - 301 redirect all pages on Brand A to Brand B during the transfer.
    - Once 301 redirects are in place, request that external sites repoint any links to the Brand B website.
    - Update XML sitemaps.
    - Update any content that mentions Brand A to now be Brand B.
    - Resubmit sitemaps to Webmaster Tools.
    - Update all social profiles.
    - Update all local search profiles and listings.
    - Update all review sites with the new brand name / merge any with both brands.
    On a supplementary note, for customer information we are looking to keep the older Brand A home page up for a short time to help people understand the transition, rather than a complete redirect, which to our demographic could confuse and alienate people. We will also look to send a mass email to roughly 400K people informing them of the move and how it affects them. I have no doubt there will be some glaringly obvious additions; any further advice would be much appreciated. Hope you are all well. Tim

    | TimHolmes
    1
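
For the 301 step in a plan like this, a domain-level catch-all in Apache might look like the following (a sketch with placeholder hostnames, not the actual brand domains):

      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^(www\.)?brand-a\.example$ [NC]
      RewriteRule ^(.*)$ http://www.brand-b.example/$1 [R=301,L]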

  • Hi, I am having some issues with country targeting of our sites. Just to give a brief background of our setup and web domains: we use Magento and have 7 connected ecommerce sites on that installation.
    1. www.tidy-books.co.uk (UK) - main site
    2. www.tidy-books.com (US) - variations in copy but basically a duplicate of the UK site
    3. www.tidy-books.it (Italy) - fully translated by a native speaker, with its own country-based social media and content regularly updated/created
    4. www.tidy-books.fr (France) - fully translated by a native speaker, with its own country-based social media and content regularly updated/created
    5. www.tidy-books.de (Germany) - fully translated by a native speaker, with its own country-based social media and content regularly updated/created
    6. www.tidy-books.com.au (Australia) - duplicate of the UK site
    7. www.tidy-books.eu (rest of Europe) - duplicate of the UK site
    I've added the country and language hreflang tags to all sites. We use cross-domain canonical URLs, and in Google Webmaster Tools' international targeting I've targeted the correct country where appropriate. Yet we are getting a number of issues which are driving me crazy trying to work out why. The major one, for example: if you search with an Italian IP in google.it for our brand name Tidy Books, the .com site is shown first, then .co.uk, then all the other sites, followed on page 3 by the correct site, www.tidy-books.it. The Italian site is the most extreme example, but the French and German sites still appear below the .com site. This surely shouldn't be the case? The same problem happens with the .co.uk and .com sites: when searching google.co.uk for our keywords, the .com often comes up before the .co.uk. So it seems we have sites competing against each other, which again can't be right or good. The next problem lies in the errors we are getting in Google Webmaster Tools on all sites: no return tags in the international targeting section. Any advice or help would be very much appreciated. I've added some screenshots to help illustrate and am happy to provide extra details. Thanks (Attachments: UK hreflang errors.png, de search.png, fr search.png, it search.png)

    | tidybooks
    1
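
"No return tags" means the hreflang annotations are not reciprocal: every site in the set must list every other one, and each page's set must include itself. As a rough sketch (the locale-to-domain mapping below is an assumption based on the question), each home page would carry the same block:

      <link rel="alternate" hreflang="en-gb" href="http://www.tidy-books.co.uk/" />
      <link rel="alternate" hreflang="en-us" href="http://www.tidy-books.com/" />
      <link rel="alternate" hreflang="it" href="http://www.tidy-books.it/" />
      <link rel="alternate" hreflang="fr" href="http://www.tidy-books.fr/" />
      <link rel="alternate" hreflang="de" href="http://www.tidy-books.de/" />
      <link rel="alternate" hreflang="en-au" href="http://www.tidy-books.com.au/" />
      <link rel="alternate" hreflang="en" href="http://www.tidy-books.eu/" />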

  • Does Google count YouTube links in the video description?

    | bondhoward
    0

  • Howdy Moz, We've recently bought a new domain and we're looking to change over to it. We're also wanting to change our permalink structure. Right now, it's a WordPress site that uses the post date in the URL. As an example: http://blog.mydomain.com/2015/01/09/my-blog-post/ We'd like to use mod_rewrite to change this, using regular expressions, to: http://newdomain.com/blog/my-blog-post/ Would this be an appropriate solution? RedirectMatch 301 /.*/.*/.*/(.*) /blog/$1

    | IanOBrien
    0
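
Note that the rule as written redirects within the same host only. A tighter pattern that matches just the dated post URLs and also switches domains might look like this (a sketch, assuming Apache on the old blog's server):

      RedirectMatch 301 ^/([0-9]{4})/([0-9]{2})/([0-9]{2})/(.*)$ http://newdomain.com/blog/$4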

  • My client has an old eCommerce website that ranks high in Google. The website is not responsive for mobile devices. The client wants to create a responsive mobile version of the website and put it on a different URL, with a link on the current page pointing to the external mobile website. Is this approach OK or not? The reason the client does not want to change the design of the current website is that he does not have the budget to do so, and there are a lot of pages that would need to be moved to the new design. Any advice would be appreciated.

    | andypatalak
    0
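
If the mobile version does live on its own URLs, the usual markup is a bidirectional annotation between the two versions (a sketch with placeholder domains, not the client's actual URLs):

      <!-- on the desktop page, e.g. http://www.example.com/page -->
      <link rel="alternate" media="only screen and (max-width: 640px)"
            href="http://m.example.com/page" />

      <!-- on the mobile page, e.g. http://m.example.com/page -->
      <link rel="canonical" href="http://www.example.com/page" />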

  • Hi, I actually asked this a year and a half ago (with a slight variation) but didn't get any real response, and things do change over time. On my eCommerce website I have the main category pages with client-side filtering and sorting. As a result, the number of page views is lower than might be expected. Do you think having more page views is still a ranking factor? And if so, is it more important than user experience? Thanks

    | BeytzNet
    1

  • Here's an example: I get a 404 error for this: http://webcache.googleusercontent.com/search?q=cache:http://www.qjamba.com/restaurants-coupons/ferguson/mo/all But a search for qjamba restaurant coupons gives a clear result, as does this: site:http://www.qjamba.com/restaurants-coupons/ferguson/mo/all What is going on? How can this page be indexed but not in the Google cache? I should make clear that the page is not showing up with any kind of error in Webmaster Tools, and Google has been crawling pages just fine. This particular page was fetched by Google yesterday with no problems, and was even crawled again twice today by Google. Yet, no cache.

    | friendoffood
    2

  • Do they crawl backlinks from an iframe, for example from a YouTube video embedded in a blog post? TIA!

    | zpm2014
    0

  • I was referred to this plugin and have found it to be the most irritating and poorly designed plugin in the world. I want to be able to set my titles without it changing my page headers as well. For instance, if I set my title to be "This is my article name | site name", it will make my H1 tag read the same. I do not want or desire this nonsense. Why would they think this is wise? Why would I want my site name on every single H1 tag on my site? How can I fix this? I only want my title to be my title, and my H1 tag to remain the post/page name that I define in WordPress.

    | Atomicx
    0
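
In a standard WordPress theme, the title tag and the H1 come from different template calls, so they can be decoupled; whether a plugin touches the H1 depends on the theme. A sketch of the usual template functions (exact theme files vary):

      <?php // header.php - SEO plugins typically filter this value ?>
      <title><?php wp_title(); ?></title>

      <?php // single.php / page.php - the H1 normally comes straight from the post title ?>
      <h1><?php the_title(); ?></h1>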

  • I noticed in a recent Q&A response that, along with Bing and Google webmaster tools, there was a reference to Yandex. What is the relevance of this webmaster tool set? Is there a cost associated with it? If so, is it worth it? I would love to hear what the community thinks.

    | Ron_McCabe
    0

  • I'm not so sure that disavowing links also discounts the anchor texts from those links, because nofollow links absolutely still pass anchor text values, and disavowing links is supposed to be akin to nofollowing them. I wonder because there's a potential client I'm working on an RFP for: they have tons of spammy directory links, all using keyword-rich anchor texts, and they lost 98% of their traffic in Penguin 1.0 and haven't recovered. I want to know what I'm getting into. And if I just disavow those links, I'm thinking that it won't help the anchor text ratio issues. Can anyone confirm?

    | MiguelSalcido
    0
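
For reference, the disavow file itself is a plain-text list, one entry per line, uploaded through the Disavow Links tool in Webmaster Tools (the domains below are made up):

      # spammy directories found in the link audit
      domain:spammy-directory-example.com
      domain:another-bad-directory-example.net
      # or individual URLs
      http://some-directory-example.org/listing/123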

  • I'm working on a very conventional type of site with a home page (why come to us), methods we use, pricing, reviews, FAQs and contact us. After reading the Moz case study (http://www.conversion-rate-experts.com/seomoz-case-study/), I have been working on a conversion-optimised home page that consolidates much of the content from all these pages. At the bottom of the home page, I then plan to add a list of blog posts ("Want to read more? We have a lot of useful information on our blog. Here are the most popular articles:") with articles that explain more about, for example, the methods we use (content that was formerly on our methods page). Obviously this new blog will also have more interesting information (though a lot of it could actually be made into pages). This radically changes the site into just a home page full of selling points and calls-to-action, plus a blog. I have some questions about this strategy: How do we keep our search engine rankings for keywords such as "[our service] prices" or "[a particular method] London"? We rank quite well on Google for these, and they go straight to the relevant page. Shall we keep the pages active somewhere even though the information is also on the home page? Is a blog actually necessary here (SEO-wise)? The things I'm planning to write could easily be made into more pages. Am I going about this completely wrong by trying to use the CRO guide? Should this sort of page be reserved for landing pages? The reason I'm considering a conversion-generating home page is that we pretty much sell only one service (although there are differences in how we do it on children vs. adults) and we are quite niche, so most of our traffic comes from organic sources. Thank you

    | LondonAli
    0

  • Hi all, I help run a website for a history-themed podcast and we just moved it to its second domain in 7 years. We had very good SEO up until last week, and I'm wondering if I screwed up the way I redirected the domains. It's like this: originally the site was hosted at "first.com", where it acquired inbound links. However, we then started to host the site on Blogger, so we redirected the site to "second.blogspot.com" (thus, 1 --> 2). It stayed there for about 7 years and got lots of traffic. Two weeks ago we moved it off Blogger and onto WordPress, so we 301 redirected everything to third.com (thus, 1 --> 2 --> 3). The redirects worked, and when we Google individual posts, we now see them in Google's index at the new URLs. My question: what about the 1 --> 2 redirect? There are still lots of links pointing to "first.com". Last week I went into my GoDaddy settings and changed the first redirect, so that first.com now points directly to third.com (thus 1 --> 3, and 2 --> 3). I was correct in doing that, right? The drop in Google traffic I've seen this past week makes me think that maybe I screwed something up. Should we have kept 1 --> 2 --> 3? (Again, now we have 1 --> 3 and 2 --> 3.) Thanks for any insights on this! Tom

    | TomNYC
    1

  • One of our clients is in the cosmetic surgery business (bodevolve.com), and individuals most likely to purchase a cosmetic procedure only search for two things: 'before & after photos' and 'cost'. That being said, we've worked extremely hard to optimize all 500+ before-and-after photos, and to our great disappointment they still aren't being indexed. We are testing a few things, but any feedback would be greatly appreciated! All photos are in the 'attachment' sitemap: http://bodevolve.com/sitemap_index.xml I'm also testing a few squeeze pages like this one: http://bodevolve.com/tummy-tuck-before-and-after-photos/ Thanks so much, Brit

    | BritneyMuller
    0
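
For comparison, a dedicated image sitemap entry looks like this (the image file name and caption below are invented for illustration):

      <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
              xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
        <url>
          <loc>http://bodevolve.com/tummy-tuck-before-and-after-photos/</loc>
          <image:image>
            <image:loc>http://bodevolve.com/images/tummy-tuck-example-01.jpg</image:loc>
            <image:caption>Tummy tuck, front view, before and after</image:caption>
          </image:image>
        </url>
      </urlset>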

  • Hi, hope you're doing well. I am sure you guys are aware that Google has updated their webmaster technical guidelines, saying that users should allow access to their CSS and JavaScript files where possible. It used to be that Google would render web pages only text-based; now it claims that it can read the CSS and JavaScript. According to their own terms, not allowing access to the CSS files can result in sub-optimal rankings: "Disallowing crawling of Javascript or CSS files in your site's robots.txt directly harms how well our algorithms render and index your content and can result in suboptimal rankings." http://googlewebmastercentral.blogspot.com/2014/10/updating-our-technical-webmaster.html We have allowed access to our CSS files, and Googlebot is seeing our webpages more like a normal user would (tested it in GWT). Anyhow, this is my dilemma, and I am sure a lot of other users might be facing the same situation. Like any other ecommerce company/website, we have a lot of images. Our CSS files used to be inside our images folder, so I have allowed access to that. Here's the robots.txt: http://www.modbargains.com/robots.txt Right now we are blocking the images folder, as it is very large, very heavy, and some of the images are very high-res. The reason we are blocking it is that we feel Googlebot might spend almost all of its time trying to crawl that "images" folder alone and not have enough time to crawl other important pages - not to mention a very heavy load on Google's servers and ours. We do have good, high-quality, original pictures, and we feel that we are losing potential rankings since we are blocking images. I was thinking to allow ONLY the Google image bot access to it, but I still feel that Google might spend a lot of time doing that. I was wondering: does Google make a decision along the lines of "spend 10 minutes for the Google image bot, and 20 minutes for the Google mobile bot", or does it have separate "time spending" allocations for each of its bot types? I want to unblock the images folder for only the Google image bot, but at the same time I fear that it might drastically hamper indexing of our important pages, as mentioned before, because we have tons of images and Google would spend plenty of time just crawling that folder. Any advice? Recommendations? Suggestions? Technical guidance? Plan of action? I'm pretty sure I answered my own question, but I need confirmation from an expert that I am right in saying: allow only the Google image bot access to my images folder. Sincerely, Shaleen Shah

    | Modbargains
    1
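
In robots.txt terms, the idea in the question would look something like this (a sketch, assuming the folder is /images/; the real paths are in the site's own robots.txt):

      User-agent: *
      Disallow: /images/

      User-agent: Googlebot-Image
      Allow: /images/

Because a crawler obeys the most specific user-agent group that matches it, Googlebot-Image would follow its own group and ignore the general Disallow.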

  • Hello Everyone - I maintain a site with over a million distinct pages of content. Each piece of content can be thought of like a node in a graph database, or an entity. While there is a bit of natural hierarchy, every single entity can be related to one or more other entities. The conceptual structure of the entities is like so: Agency - A top-level business unit (~100 pages/urls) Office - A lower-level business unit, part of an Agency (~5,000 pages/urls) Person - Someone who works in one or more Offices (~80,000 pages/urls) Project - A thing one or more People are managing (~750,000 pages/urls) Vendor - A company that is working on one or more Projects (~250,000 pages/urls) Category - A descriptive entity, defining one or more Projects (~1,000 pages/urls) Each of these six entities has a unique URL and content. For each page/url, there are internal links to each of the related entity pages. For example, if a user is looking at a Project page/url, there will be an internal link to one or more Agencies, Offices, People, Vendors, and Categories. Also, a Project will have links to similar Projects. This same theory holds true for all other entities as well: People pages link to their related Agencies, Offices, Projects, Vendors, etc. If you start to do the math, there are tons of internal links leading to pages with tons of internal links leading to pages with tons of internal links. While our users enjoy the ability to navigate this world according to these relationships, I am curious if we should force a more strict hierarchy for SEO purposes. Essentially, does it make sense to "nofollow" all of the horizontal internal links for a given entity page/url? For search engine indexing purposes, we have legit sitemaps that give a simple vertical hierarchy... but I am curious if all of this internal linking should be hidden via nofollow...? Thanks in advance!

    | jhariani
    2

  • Hi Mozzers, What is your view on the following, with the objective being decreasing page load times?
    - Should you paginate comments to increase page speed? If yes, at what number of comments would you begin pagination?
    - Apply rel="canonical" back to the main article URL? E.g. url/comment-page-1 => url
    - Noindex the comment pages?
    - Create a "View all" comments page?
    Thanks in advance for your help! 🙂
    J

    | jeremycabral
    0
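
One common pattern combines two of the options above: point each comment page back at the article and keep it out of the index while still letting equity flow (the URLs below are placeholders):

      <!-- on http://example.com/article/comment-page-2/ -->
      <link rel="canonical" href="http://example.com/article/" />
      <meta name="robots" content="noindex, follow" />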

  • Hello Moz folks! For the very first time I'm dealing with a massive community which relies on UGC (user-generated content). Their forum has a great deal of duplicate content, broken links, duplicate titles and other on-site issues. I have advanced SEO knowledge related to ecommerce and blogging, but I'm new to forums and UGC. I would really love to learn, or get resource links that would allow me to see/understand, the best practices in terms of SEO. Any help is greatly appreciated. Best, Yan

    | ydesjardins200
    0

  • Howdy Mozzers! We would like to use nofollow, noindex on our Magento layered navigation pages once any two filters are selected. (We are using single-filter pages as landing pages, so we would like them indexed.) Is it OK to use nofollow, noindex on these filter pages? Are there disadvantages to using nofollow on internal pages? Matt mentioned refraining from using nofollow internally: https://www.youtube.com/watch?v=4SAPUx4Beh8 But we would like to conserve crawl bandwidth and PR flow on potentially hundreds of thousands of irrelevant/duplicate filter pages.

    | MozAddict
    0
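
The tag in question would sit in the head of each two-filter combination page (the URL is invented for illustration):

      <!-- e.g. on /shoes?color=red&size=9 -->
      <meta name="robots" content="noindex, nofollow" />

A softer variant often discussed is "noindex, follow", which drops the page from the index but still lets PageRank flow through its links.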

  • I am looking to condense a features list on my pricing page. It is currently a static list; however, I want the user to click a button and have a full list of standard features pop up in a lightbox. How will this affect my SEO? Can Google read content in a lightbox?

    | ParkerSoftware
    0

  • It doesn't make sense to me that a 404 causes a loss in link juice, although that is what I've read. What if you have a page that is legitimate - think of a merchant-oriented page where you sell items for a given merchant - and then the merchant closes its doors? It makes little sense 5 years later to still have their merchant page, so why would removing it from your site in any way hurt your site? I could redirect forever, but that makes little sense. What makes sense to me is keeping the page for a while with an explanation and options for 'similar' products, and then eventually serving a 404. I would think the eventual dropping out of the index actually REDUCES the overall link juice (i.e. fewer pages), so there is no harm in using a 404 in this way. It also is a way to avoid the site just getting bigger and bigger and having more and more 'bad' user experiences over time. Am I looking at it wrong? PS: I've included this in 'link building' because it is related in a sense - link 'paring'.

    | friendoffood
    0

  • Hi all, A query has recently been raised internally with regard to the use of canonical links. Due to CMS limitations with a client whose CMS is managed by a third-party agency, canonical links are currently output with the port number attributed, e.g. example.com/page:80, as opposed to the correct absolute URL: example.com/page. Note that port numbers are not attributed to the actual page URLs. We have been advised that this canonical link functionality cannot be amended at present. My personal interpretation of canonical link requirements is that such a link should exactly match the absolute URL of the intended destination page; my query is, does this extend to the attribution of port numbers to URLs? Is the likely impact of the inclusion of such potentially incorrect URLs the same as for purely incorrect canonical links? Thanks

    | 26ryan
    0
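
Side by side, the difference under discussion (using the placeholder domain from the question):

      <!-- currently output by the CMS -->
      <link rel="canonical" href="http://example.com/page:80" />
      <!-- the intended absolute URL -->
      <link rel="canonical" href="http://example.com/page" />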

  • We are a large catalog company with thousands of products across 2 different domains. Google clearly knows that the sites are connected. Both domains are fairly well known brands - thousands of branded searches for each site per month. Roughly half of our products overlap - they appear on both sites. We have a known duplicate content issue - both sites having exactly the same product descriptions, and we are working on it. We've seen that when a product has different content on the 2 sites, frequently, both pages get to page 2 of the SERPs, but that's as far as it goes, despite aggressive white hat link building tactics. 1. Is it possible to get the same product pages on page 1 of the SERPs for both sites? (I think I know the answer...) 2. Should we be canonicalizing (is that a word?) products across the sites? This would get tricky - both sites have roughly the same domain authority, but in different niches. Certain products and keywords naturally rank better on 1 site or the other depending on the niche.

    | AMHC
    0

  • I didn't find an answer in a search on this, so maybe someone here has faced this before. I am loading 20 images that are in or just below the viewport. The next 80 images I want to 'lazy-load'. They are therefore seen by the bot as a blank.gif file. However, I would like to get some credit for them by giving a description in the alt tag. Is that a no-no? If not, do they all have to have the same alt description, since the src name is the same? I don't want to mess things up with Google by being too aggressive, but at the same time those are valid images once they are lazy-loaded, so I would like to get some credit for them. Thanks! Ted

    | friendoffood
    0
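
A common shape for this (attribute names and paths below are illustrative, not from the question) keeps the real URL and a per-image alt on each tag, with a noscript fallback for crawlers:

      <img src="blank.gif" data-src="/images/red-widget-42.jpg"
           alt="Red widget, model 42" class="lazy">
      <noscript>
        <img src="/images/red-widget-42.jpg" alt="Red widget, model 42">
      </noscript>

The script then copies data-src into src as each image nears the viewport, so every placeholder can carry its own alt text even though the initial src is shared.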

  • I have recently set up a men's style blog - the site is made up of articles pulled in from a CMS, and I am wanting to keep the design as clean as possible - so no text other than the articles. This makes it hard to get an H1 tag into the page - are there any solutions/alternatives that would be good for SEO? The site is http://www.iamtheconnoisseur.com/ Thanks

    | SWD.Advertising
    0
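
One frequently used compromise is to wrap the site logo in the H1 and let the image's alt text carry the heading (a sketch; the logo path and alt text are invented):

      <h1><img src="/img/logo.png" alt="I Am The Connoisseur - men's style blog"></h1>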

  • One of my clients has about 40 franchisees who are going to have their own sites on a WordPress multisite installation. The question they're wondering about: should these be in subdirectories (thesite.com/this-franchise) or on subdomains (this-franchise.thesite.com)? Which is best for SEO? Thanks!

    | ideasandpixels
    0

  • I know there has been some mention of .uk.com on Moz Q&A, but not for at least 3 years. So I wanted to see if any Mozzers out there know whether having a .uk.com domain would hinder our SEO long-term. Our company is finally taking SEO seriously and we're planning some great stuff for the year ahead, but I have a feeling that our .uk.com domain may prevent us from out-ranking some of the bigger companies out there. Does anyone have any thoughts on this? Thanks 🙂

    | JamesPearce
    0

  • Hi, I've heard silos mentioned in the past as helping with rankings - does this still apply? And what about breadcrumbs: do I use them with the silo technique, or instead of it? Which do you think is better, or should I not be using these anymore with the recent Google updates?

    | juun
    0
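
For reference, breadcrumbs that expose a site's hierarchy are usually annotated along these lines (schema.org BreadcrumbList; the URLs and names are placeholders):

      <script type="application/ld+json">
      {
        "@context": "http://schema.org",
        "@type": "BreadcrumbList",
        "itemListElement": [
          { "@type": "ListItem", "position": 1, "name": "Home", "item": "http://example.com/" },
          { "@type": "ListItem", "position": 2, "name": "Silo topic", "item": "http://example.com/topic/" },
          { "@type": "ListItem", "position": 3, "name": "Current article" }
        ]
      }
      </script>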

  • I want to put a blog on my site. The IT department is asking that I use a subdomain (myblog.mysite.com) instead of a subfolder (mysite.com/myblog). I am worried because it was my understanding that any links I get to my blog posts (if on the subdomain) will not count toward the main site (search engines would view it almost as another website). The main purpose of this blog is to attract backlinks, which is why I prefer the subfolder location for the blog. Can anyone tell me if I am thinking about this right? Another solution I am being offered is to use a reverse proxy. Thoughts? Thank you for your time.

    | ecerbone
    0
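
The reverse-proxy option would let the blog run on separate infrastructure while appearing under the subfolder. A minimal sketch (assuming Apache with mod_proxy; hostnames from the question):

      # on the server for www.mysite.com
      ProxyPass /myblog/ http://myblog.mysite.com/
      ProxyPassReverse /myblog/ http://myblog.mysite.com/

Requests for mysite.com/myblog/... are then served by the blog backend, but search engines only ever see the subfolder URLs.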

  • Hello Moz Community, I had a conversation with someone who claimed that implementing a DMCA protection badge, such as those offered at http://www.dmca.com/ for $10/mo, will improve a site's Google rankings.  Is this true? I know that if my content is stolen it can hurt my rankings (or the stolen content can replace mine), but I'm asking if merely implementing the badge will help my rankings. Thanks! Bill

    | Bill_at_Common_Form
    0

  • This search - site:www.qjamba.com/online-savings/automotix - gives me this result from Google:
    Automotix online coupons and shopping - Qjamba
    https://www.qjamba.com/online-savings/automotix
    Online Coupons and Shopping Savings for Automotix. Coupon codes for online discounts on Vehicles & Parts products.
    And Google tells me there is another one, which is 'very similar'. When I click to see it I get:
    Automotix online coupons and shopping - Qjamba
    https://www.qjamba.com/online-savings/Automotix
    Online Coupons and Shopping Savings for Automotix. Coupon codes for online discounts on Vehicles & Parts products.
    This is because I recently changed my program to redirect all URLs with uppercase in them to lowercase, as it appears that all-lowercase is strongly recommended. I assume that having 2 indexed URLs for the same content dilutes link juice. Can I safely remove all of my uppercase indexed pages from Google without it affecting the indexing of the lowercase URLs? And if so, what is the best way - there are thousands.

    | friendoffood
    0
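
For reference, the server-side lowercase redirect that produces this situation is typically done with a RewriteMap in Apache (a sketch; RewriteMap must live in the server/vhost config, not .htaccess):

      RewriteMap lc int:tolower
      RewriteCond %{REQUEST_URI} [A-Z]
      RewriteRule (.*) ${lc:$1} [R=301,L]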

  • Is it legit to show different content for HTTP requests having different referrers? Case A: a user views one page of the site with plenty of information about one brand, and clicks a link on that page to see a product detail page of that brand; here I don't want to repeat information about the brand itself. Case B: a user arrives directly at the product detail page by clicking a SERP result; in this case I would like to show him a few paragraphs about the brand. Is it bad? Does anyone have experience doing it? My main concern is Google's crawler. It should not be considered cloaking, because I am not differentiating on user-agent (bot or no bot). But when Google is crawling the site, which referrer will it use? I have no idea; does anyone know? When going from one link to another on the website, does Google's crawler leave the referrer empty?

    | max.favilli
    0

  • Hey All, Was just looking through some google pages on best practices for meta descriptions and came across this little tidbit. "Include clearly tagged facts in the description. The meta description doesn't just have to be in sentence format; it's also a great place to include structured data about the page. For example, news or blog postings can list the author, date of publication, or byline information. This can give potential visitors very relevant information that might not be displayed in the snippet otherwise. Similarly, product pages might have the key bits of information—price, age, manufacturer—scattered throughout a page. A good meta description can bring all this data together. For example, the following meta description provides detailed information about a book. " This is the first time I have seen suggested use of structured data in meta descriptions.  Does this totally replace a regular meta description or will it work in conjunction with the regular meta description? If I provide both structured data and text, will the SERP display text and the structured data the way it was previously displayed? Or will the 150 -160 character limit take precedence and just cut off all info after that?

    | Whebb
    0

  • Due to the way most auto dealership websites populate inventory pages, should you allow inventory to be indexed at all? The main benefit is more content. The problem is that it creates duplicate, or near-duplicate, content. It also creates a ton of crawl errors, since the turnover is so short and fast. I would love some help on this. Thanks!

    | Gauge123
    0

  • Here are a couple of scenarios I'm encountering where Google will crawl different content than my users on an initial visit to the site - and which I think should be OK. Of course, it is normally NOT OK; I'm here to find out if Google is flexible enough to allow these situations: 1. My mobile-friendly site has users select a city, and then it displays the location-options div, which includes an explanation of why they may want to have the program use their GPS location. The user must choose the GPS option or the entire city, or he can enter a zip code or choose a suburb of the city, which then goes to the link chosen. OTOH, it is programmed so that a Google bot doesn't get a meaningless 'choose further' page; rather, the crawler sees the page of results for the entire city (as you would expect from the URL). So basically the program defaults to the entire city's results for Googlebot, but the user first gets the ability to choose GPS. 2. A user comes to mysite.com/gps-loc/city/results. The site, seeing the literal words 'gps-loc' in the URL, goes out and fetches the GPS coordinates for his location and returns results dependent on his location. If Googlebot comes to that URL, then there is no way the program will return the same results, because the program wouldn't be able to get the same longitude/latitude as that user. So, what do you think? Are these scenarios a concern for getting penalized by Google? Thanks, Ted

    | friendoffood
    0

  • I have a domain (no subdomains) that serves up different dynamic content for mobile and desktop pages - each having the exact same page URL, a kind of semi-responsive design - and I will be using "Vary: User-Agent" to give Google a heads-up on this setup. However, some of the pages are only valid for mobile, or only valid for desktop. In the case where a page is valid only for mobile (call it mysite.com/mobile-page-only), Google Webmaster Tools is giving me a soft 404 error under Desktop, saying that the page does not exist. Apparently it is doing that because my program is actually redirecting the user/crawler to the home page. It appears from the info about soft 404 errors that Google is saying that since the page "doesn't exist" I should give the user a 404 page - which I can customize, giving the user an option to go to the home page, or choose links from a menu, etc. My concern is that if I tell the desktop bot that mysite.com/mobile-page-only is basically a 404 error (i.e. doesn't exist), it could mess up the mobile bot's indexing of that page - since it definitely DOES exist for mobile users. Does anyone here know for sure that Google will index a page for mobile that is a 404 Not Found for desktop, and vice versa? Obviously it is important not to remove something from an index in which it belongs, so whether Google carefully differentiates the two is a very important issue. Has anybody here dealt with this, or seen anything from Google that addresses it? Might one be better off leaving it as a soft 404 error? EDIT: Also, what about Bing and Yahoo? Can we assume they will handle it the same way? EDIT: A closely related question - in a case like mine, does Google need a separate sitemap for the valid mobile pages and the valid desktop pages, even though most links will be in both? I can't tell from reading several Q&As on this. Thanks, Ted

    | friendoffood
    0
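
For reference, the header in question is set server-side; in Apache (a sketch, assuming mod_headers is enabled) it would be:

      Header append Vary User-Agent

which tells caches and Googlebot that the same URL can return different HTML depending on the requesting user agent.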

  • Hi, I cannot seem to find good documentation on the use of hreflang on paginated pages when using rel=next, rel=prev. Does anyone know where to find decent documentation? I could only find documentation about pagination and hreflang when using canonicals on the paginated pages. I have doubts about which is the best option. The way TripAdvisor does it:
    http://www.tripadvisor.nl/Hotels-g187139-oa390-Corsica-Hotels.html
    Each paginated page refers to its hreflang-equivalent paginated page. So should the hreflang refer to the specific paginated page, or should it refer to the "1st" page? In this case:
    http://www.tripadvisor.nl/Hotels-g187139-Corsica-Hotels.html Looking forward to your suggestions.

    | TjeerdvZ
    0
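
The TripAdvisor-style option, sketched with placeholder domains (page 2 of a series, each language version pointing at its own page 2):

      <link rel="prev" href="http://www.example.nl/hotels/" />
      <link rel="next" href="http://www.example.nl/hotels/page/3/" />
      <link rel="alternate" hreflang="nl" href="http://www.example.nl/hotels/page/2/" />
      <link rel="alternate" hreflang="it" href="http://www.example.it/hotels/page/2/" />

The alternative under discussion would instead point every paginated page's hreflang at the first page of the series.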

  • Hello All, We want to split up our sitemap. Currently it's almost 10K pages in one XML sitemap, but we want to break it into smaller chunks, splitting it by category, location, or both. Ideally 100 URLs per sitemap, which I read is the best number to help improve indexation and SEO ranking. Any thoughts on this? Does anyone know any good tools out there which can assist us in doing this? Another question: should we put all of our products (1,250) in one sitemap, or should this also be split up, say into products per category, etc.? Thanks, Pete

    | PeteC12
    0
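
Multiple smaller sitemaps are tied together with a sitemap index file, along these lines (the file names are invented for illustration):

      <?xml version="1.0" encoding="UTF-8"?>
      <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
        <sitemap><loc>http://www.example.com/sitemap-category-bookcases.xml</loc></sitemap>
        <sitemap><loc>http://www.example.com/sitemap-location-london.xml</loc></sitemap>
        <sitemap><loc>http://www.example.com/sitemap-products.xml</loc></sitemap>
      </sitemapindex>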

  • My client is generating templates for his eBay listings based on content he has on his eCommerce platform. I'm 100% sure this will cause duplicate content issues. My question is this (and I'm not sure where eBay policy stands on it): adding the canonical tag to the template - will this work if it's coming from a different page, i.e. eBay? Update: I'm not finding any information regarding this in eBay's policies: http://ocs.ebay.com/ws/eBayISAPI.dll?CustomerSupport&action=0&searchstring=canonical So it does look like I can have a rel="canonical" tag in custom eBay templates, but I'm concerned this could be considered "cheating", since rel="canonical" acts like a 301; but as this says, it's legitimately duplicate content: http://googlewebmastercentral.blogspot.com/2009/12/handling-legitimate-cross-domain.html The question now is: should I add it or not? UPDATE: It seems eBay templates are embedded in an iframe, but the snapshot on Google actually shows the template, which makes me wonder how they are handling iframes now. Looking at http://www.webmaster-toolkit.com/search-engine-simulator.shtml does show the content inside the iframe. Interesting. Anyone else have feedback?

    | joseph.chambers
    1

  • For reasons I won't get into here, I need to move most of my site to a new domain (DOMAIN B) while keeping every single current detail on the old domain (DOMAIN A) as it is. Meaning, there will be 2 live websites that have mostly the same content, but I want the content to appear to search engines as though it now belongs to DOMAIN B. Weird situation. I know. I've run around in circles trying to figure out the best course of action. What do you think is the best way of going about this? Do I simply point DOMAIN A's canonical tags to the copied content on DOMAIN B and call it good? Should I ask sites that link to DOMAIN A to change their links to DOMAIN B, or start fresh and cut my losses? Should I still file a change of address with GWT, even though I'm not going to 301 redirect anything?

    | kdaniels
    0

  • We have a medium-size site that lost more than 50% of its traffic in July 2013, just before the Panda rollout. After working with an SEO agency, we were advised to clean up various items, one of them being that the 10K+ URLs were all mixed case (i.e. www.example.com/Blue-Widget). A 301 redirect was set up thereafter, forcing all these URLs to go to a lowercase version (i.e. www.example.com/blue-widget). In addition, a canonical tag was placed on all of these pages in case any parameters or other characters were incorporated into a URL. I thought this was a good setup, but when running an SEO audit through a third-party tool, it shows me the massive number of 301 redirects, and now I wonder if there should only be a canonical without the redirect, or if it's okay to have tens of thousands of 301 redirects on the site. We have not recovered from the traffic loss yet, and we are wondering if it's really more of a technical problem than a Google penalty. Guidance and advice from those experienced in the industry is appreciated.

    | ABK717
    0

  • Hi guys, I run my own photography website (www.hemeravisuals.co.uk). Going through the process of optimizing my pages for SEO, I have one question: I have a few gallery pages with no text, etc. Do I still have to optimize these? Would it rank my site lower if they weren't optimized? And how can I do this successfully with little text on these pages? (I have in-depth text on these subjects on my services & pricing pages.) Kind regards, Cam

    | hemeravisuals
    0

  • We are changing our homepage (and gradually the rest of the site) to AngularJS. In order not to lose anything in terms of SEO, we are implementing hashbangs + escaped-fragment snapshots. Are there any other SEO considerations you think we should take into account, and/or additional elements that we could add to the page to improve it in terms of SEO?

    | theLotter
    0
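
For pages that use the AJAX crawling scheme without #! in the URL, the opt-in is a meta tag (a sketch; example.com is a placeholder):

      <meta name="fragment" content="!">

The crawler then requests the pre-rendered snapshot at http://example.com/page?_escaped_fragment_= instead of executing the JavaScript.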

  • Hi - our website's sitemap is pretty huge, and I'm trying to generate it with the hreflang information in it, because we have 11 different language sites all under the .com. I used the Media Flow generator for this purpose, but it returned a lot of entries with a blank tag. Our U.S. website by far has the most pages, so an example of what I'm getting is: Does this look correct? It doesn't to me, but I'm unsure.

    | Jenny1
    0
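
For comparison, a sitemap-based hreflang entry normally looks like this (placeholder URLs; note the extra xhtml namespace on the urlset element):

      <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
              xmlns:xhtml="http://www.w3.org/1999/xhtml">
        <url>
          <loc>http://www.example.com/page/</loc>
          <xhtml:link rel="alternate" hreflang="en-us" href="http://www.example.com/page/" />
          <xhtml:link rel="alternate" hreflang="de" href="http://www.example.com/de/page/" />
        </url>
      </urlset>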

  • I need help with some SEO strategy that needs to be implemented on my website: http://goo.gl/AiOgu1 My website is a leading live chat product; daily it receives around 2,000 unique visitors. Initially the website was impacted by a manual link penalty. I cleaned up a lot of backlinks, and the penalty was revoked somewhere around June '14. Most of the secondary and long-tail keywords started ranking in Google, but unfortunately it does not rank well for the primary keywords (live chat, live chat software, helpdesk, etc.). I have since made a lot of on-site changes and even revamped the content, but so far I don't see any improvement. I am unable to understand where I have got stuck.
    Can anyone help me out?

    | sandeep.clickdesk
    0

  • I have a site that has around 5,000 pages now. Are there any recommended online free/paid tools to generate a sitemap for me?

    | rhysmaster
    0

  • Hey Mozers! I've noticed that on www.Zappos.com they have a canonical tag on each page referencing itself. I have heard that this is a popular method, but I don't see the point in canonical-tagging a page to itself. Any thoughts?

    | rpaiva
    0

  • We have a client whose events category section is filled to the brim with past-events webpages. Another issue is that these old events webpages all contain duplicate meta description tags, so we are concerned that Google might be penalizing our client's website for this issue. Our client does not want to create specialized meta description tags for these old events pages. Would it be a good idea to 301 redirect these old events landing pages to the main events category page, to pass on link equity and remove the duplicate meta description issue? This seems drastic (we even noticed that searchmarketingexpo.com is keeping their old events pages). However, these old events webpages seem to offer little value to our website visitors. Any feedback would be much appreciated.

    | RosemaryB
    0
