
Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Moz Q&A is closed.

After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.

Category: Intermediate & Advanced SEO

Looking to level up your SEO techniques? Chat through more advanced approaches.


  • We're working with a client who gets about 80% of their organic, inbound search traffic from links to PDF files on their site. Obviously, this isn't ideal, because someone who just downloads a PDF file directly from a Google query is unlikely to interact with the site in any other way. I'm looking to develop a plan to convert those PDF files to HTML content, and try to get at least some of those visitors to convert into subscribers. What's the best way to go about this? My plan so far is: Develop HTML landing pages for each of the popular PDFs, with the content from the PDF, as well as the option to download the PDF with an email signup. Gradually implement 301 redirects for the existing PDFs, and see what that does to our inbound SEO traffic. I don't want to create a dip in traffic, although our current "direct to inbound" traffic is largely useless. Are there things I should watch out for? Will I get penalized by Google for redirecting a PDF to HTML content? Other things I should be aware of?

    | atourgates
    0
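
    A minimal sketch of the redirect step described above, assuming the site runs on Apache (the paths are purely illustrative):

        # .htaccess: permanently send an existing PDF URL to its new HTML landing page
        Redirect 301 /downloads/buyers-guide.pdf /guides/buyers-guide/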

  • I found a lot of duplicate title tags showing in Google Webmaster Tools. When I visited the URLs that these duplicates belonged to, I found that they were just images from a gallery that we didn't particularly want Google to index. There is no benefit to the end user in these image pages being indexed in Google. Our developer has told us that these URLs are created by a module and are not "real" pages in the CMS. They would like to add the following to our robots.txt file: Disallow: /catalog/product/gallery/ QUESTION: If these pages are already indexed by Google, will this adjustment to the robots.txt file help to remove the pages from the index? We don't want these pages to be found.

    | andyheath
    0
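
    For context: a Disallow rule only stops future crawling, it does not by itself remove URLs that are already indexed. One common sequence is to let the gallery pages serve a noindex tag while they remain crawlable, and add the Disallow (path taken from the question) only once they have dropped out:

        <!-- on each gallery page, while it can still be crawled -->
        <meta name="robots" content="noindex, follow">

        # robots.txt - added only after the URLs have left the index
        User-agent: *
        Disallow: /catalog/product/gallery/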

  • Hello, Some time ago, we accidentally made changes to our site which modified the way URLs in links are generated. At once, trailing slashes were added to many URLs (only in links). Links that used to point to
    example.com/webpage.html were now linking to
    example.com/webpage.html/ URLs in the XML sitemap remained unchanged (no trailing slash). We started noticing duplicate content (because our site renders the same page with or without the trailing slash). We corrected the problematic PHP URL function so that now, all links on the site point to a URL without a trailing slash. However, Google had time to index these pages. Is implementing 301 redirects required in this case?

    | yacpro13
    1
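
    For the trailing-slash situation above, a 301 is the usual way to consolidate the indexed variants. A minimal sketch, assuming Apache with mod_rewrite:

        # .htaccess: send /webpage.html/ back to /webpage.html
        RewriteEngine On
        RewriteRule ^(.+\.html)/$ /$1 [R=301,L]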

  • We are now introducing 5 links in all our category pages for different sorting options of the category listings.
    The site has about 100,000 pages, and with this change the number of URLs may go up to over 350,000.
    Until now Google has been indexing our site well, but I would like to prevent the "sorting URLs" from leading to less complete crawling of our core pages, especially since we are planning a further huge expansion of pages soon. Apart from blocking the parameter in Search Console (which did not really work well for me in the past to prevent indexing), what do you suggest to minimize indexing of these URLs, also taking into consideration link juice optimization? On a technical level, the sorting is implemented in a way that reloads the whole page, for which there may be better options as well.

    | lcourse
    0
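
    A commonly used alternative to the Search Console parameter tool is to mark the sorted variants up directly, with one of the two tags below (not both at once; URL and parameter name are illustrative):

        <!-- option A, on a sorted variant such as /category?sort=price_asc:
             point it at the unsorted category -->
        <link rel="canonical" href="https://www.example.com/category" />

        <!-- option B: keep the variant out of the index while still letting it be crawled -->
        <meta name="robots" content="noindex, follow">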

  • Hello Mozzers, I am looking at a website with the homepage repeated several times (4 times) on the sitemap (sitemap is autogenerated via a plugin) - is this an SEO problem do you think - might it damage SEO performance, or can I ignore this issue? I am thinking I can ignore, yet it's an odd "issue" so your advice would be welcome! Thanks, Luke

    | McTaggart
    0

  • Hi, I've got a question about webpages being served via AJAX requests, as I couldn't find a definitive answer to an issue we currently face: When visitors on our site select a facet on a Listing Page, the site doesn't fully reload. As a consequence only certain tags of the content (H1, description, ...) are updated, while other tags like the canonical URL, the meta noindex,nofollow tag, or the title tag are not updated as long as you don't refresh the page. We have no information about how this will be crawled and indexed yet, but I was wondering if anyone of you knows how this will impact SEO?

    | FashionLux
    0
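
    If the facet handler can also touch the document head, a small client-side sketch like this (hypothetical helper, illustrative values) keeps the title, canonical and robots tags in step with the swapped-in content - whether and when crawlers pick up such client-side changes is exactly the open question here:

        // run whenever a facet changes the visible listing content
        function updateHead(newTitle, canonicalUrl, robotsValue) {
          document.title = newTitle;

          let canonical = document.querySelector('link[rel="canonical"]');
          if (!canonical) {
            canonical = document.createElement('link');
            canonical.rel = 'canonical';
            document.head.appendChild(canonical);
          }
          canonical.href = canonicalUrl;

          let robots = document.querySelector('meta[name="robots"]');
          if (!robots) {
            robots = document.createElement('meta');
            robots.name = 'robots';
            document.head.appendChild(robots);
          }
          robots.content = robotsValue;
        }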

  • Hi, our site prams.net has 72,000 crawled and only 2,500 indexed URLs according to DeepCrawl, mainly due to colour variations (each colour has its own URLs now). We have now created 1 page per product, e.g. http://www.prams.net/easywalker-mini, and noindexed all the other ones, which had a positive effect on our SEO. http://www.prams.net/catalogsearch/result/?q=002.030.059.0 It might still hurt our crawl budget a lot that we have so many noindexed pages. The idea is now to 301 redirect all the colour pages to this main page and make them invisible, so Google does not have to crawl them anymore; we included the variations in the product pages, so they should still be searchable for Google and the user. Does this make sense or is there a better solution out there? Does anyone have an idea if this will likely have a big or a small impact? Thanks in advance. Dieter

    | Storesco
    0

  • Hi Mozzers - I'm working with 2 businesses at the moment, at the same address - the only difference between the two is the phone number. I could ask to split the business addresses apart, so that NAP (name, address, phone number) is different for each business (only the postcode will be the same). Or simply carry on as at the moment, with the Ns and Ps different, yet with the As the same - the same address for both businesses. I've never experienced this issue before, so I'd value your input. Many thanks, Luke

    | McTaggart
    0

  • Hi all, Let's say I have two websites: www.mywebsite.com and www.mywebsite.de - they share a lot of content but the main categories and URLs are almost always different. Am I right in saying I can't just set the hreflang tag on every page of www.mywebsite.com to read: <link rel='alternate' hreflang='de' href='http://mywebsite.de' /> That just won't do anything, right? Am I also right in saying that the only way to use hreflang properly across two domains is to have a custom hreflang tag on every page that has identical content translated into German? So for this page: www.mywebsite.com/page.html my hreflang tag for the German users would be: <link rel='alternate' hreflang='de' href='http://mywebsite.de/page.html' /> Thanks for your time.

    | Bee159
    0
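
    A sketch of how the annotations pair up across the two domains in the question - each page lists itself and its translation, and the German page has to carry the matching return tags or the annotations are ignored:

        <!-- on http://www.mywebsite.com/page.html -->
        <link rel="alternate" hreflang="en" href="http://www.mywebsite.com/page.html" />
        <link rel="alternate" hreflang="de" href="http://mywebsite.de/page.html" />

        <!-- on http://mywebsite.de/page.html (the return tags) -->
        <link rel="alternate" hreflang="en" href="http://www.mywebsite.com/page.html" />
        <link rel="alternate" hreflang="de" href="http://mywebsite.de/page.html" />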

  • We have a domain www.spintadigital.com that is hosted with Dreamhost, and we also have a separate subdomain blog.spintadigital.com which is hosted on the Ghost platform; we are also using Unbounce landing pages with the subdomain get.spintadigital.com. I wanted to know whether having subdomains like this would affect the traffic metric and in effect affect the SEO and rankings of our site. I think it does not affect the increase in domain authority, but in places like SimilarWeb I get different traffic metrics for the different domains. As far as I can see, in many of the metrics these are considered separate websites. We are currently concentrating more on our blogs and wanted to make sure that it does help the overall domain. We do not have the bandwidth to promote three different websites, and hence need the community's help to understand what is the best option to take this forward.

    | vinodh-spintadigital
    0

  • Can anyone point me to the best way to implement 301 redirects on a Ruby on Rails website?

    | brianvest
    0
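
    A minimal routing-level sketch, assuming a reasonably current Rails version (the redirect helper issues a 301 by default; paths are illustrative):

        # config/routes.rb
        Rails.application.routes.draw do
          get '/old-page',          to: redirect('/new-page')
          get '/old-section/:slug', to: redirect('/new-section/%{slug}')
        end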

  • Hi, I am trying to cleanse a news website. When this website was first made, the people that set it up copied all kinds of articles they had as a newspaper, including tests, internal communication, and drafts. This site has lots of junk, but this kind of junk was on the initial backup, aka before 1st-June-2012. So, removing all mixed content prior to that date, we can have pure articles starting June 1st, 2012! Therefore: My dynamic sitemap now contains only articles with a release date between 1st-June-2012 and now. Any article that has a release date prior to 1st-June-2012 returns a custom 404 page with a "noindex" metatag, instead of the actual content of the article. The question is how I can remove from the Google index, as fast as possible, all this junk that is not on the site anymore but still appears in Google results? I know that for individual URLs I need to request removal from this link:
    https://www.google.com/webmasters/tools/removals The problem is doing this in bulk, as there are tens of thousands of URLs I want to remove. Should I put the articles back in the sitemap so the search engines crawl the sitemap and see all the 404s? I believe this is very wrong. As far as I know this will cause problems, because search engines will try to access non-existent content that is declared as existent by the sitemap, and return errors in Webmaster Tools. Should I submit a DELETED ITEMS SITEMAP using the <expires> tag? I think this is for custom search engines only, and not for the generic Google search engine:
    https://developers.google.com/custom-search/docs/indexing#on-demand-indexing The site unfortunately doesn't use any kind of "folder" hierarchy in its URLs, but instead the ugly GET params, and a kind of folder-based pattern is impossible since all articles (removed junk and actual articles) are of the form:
    http://www.example.com/docid=123456 So, how can I bulk remove all the junk from the Google index... relatively fast?

    | ioannisa
    0
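
    One way to speed this up, sketched in PHP purely because the platform isn't stated (both helpers are hypothetical): answer the pre-cut-off docids with a 410 Gone rather than a 404, which is generally treated as a more definite removal signal, and keep those URLs crawlable so the status actually gets seen:

        <?php
        // front-controller sketch for the old article URLs
        $docId       = extract_docid_from_request();   // hypothetical helper
        $releaseDate = lookup_release_date($docId);    // hypothetical helper
        if ($releaseDate !== null && $releaseDate < strtotime('2012-06-01')) {
            http_response_code(410);                   // "Gone" rather than 404
            header('X-Robots-Tag: noindex');
            exit('This article has been removed.');
        }
        // otherwise fall through and render the article as usual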

  • With the deprecation of Freebase, we're moving some of our data to Wikidata. One of the identifiers (and signals for a Knowledge Graph placement) is your Crunchbase Organization ID. However, I can't find any reference to this number on our company Crunchbase profile. There's an application ID in the source code but it seems to be a different number length than other Org. ID examples I've seen. Anybody have experience and know where I can find this?

    | MattCommonBond
    0

  • Bazaar Voice provides a pretty easy-to-use product review solution for websites (especially sites on Magento): https://www.magentocommerce.com/magento-connect/bazaarvoice-conversations-1.html If your product has over a certain number of reviews/questions, the plugin cuts off the number of reviews/questions that appear on the page. To see the reviews/questions that are cut off, you have to click the plugin's next or back function. The next/back buttons' URLs have a parameter of "bvstate....." I have noticed Google is indexing this "bvstate..." URL for hundreds of sites, even with the proper rel canonical tag in place. Here is an example with Microsoft: http://webcache.googleusercontent.com/search?q=cache:zcxT7MRHHREJ:www.microsoftstore.com/store/msusa/en_US/pdp/Surface-Book/productID.325716000%3Fbvstate%3Dpg:8/ct:r+&cd=2&hl=en&ct=clnk&gl=us My website is seeing hundreds of these "bvstate" URLs being indexed even though we have a proper rel canonical tag in place. It seems that Google is ignoring the canonical tag. In Webmaster Console, the main source of my duplicate titles/metas in the HTML improvements section is the "bvstate" URLs. I don't necessarily want to block "bvstate" in the robots.txt as it will prohibit Google from seeing the reviews that were cut off. The same goes for prohibiting Google from crawling "bvstate" in the Parameters section of Webmaster Console. Should I just keep my fingers crossed that Google honors the rel canonical tag? Home Depot is another site that has this same issue: http://webcache.googleusercontent.com/search?q=cache:k0MBLFcu2PoJ:www.homedepot.com/p/DUROCK-Next-Gen-1-2-in-x-3-ft-x-5-ft-Cement-Board-172965/202263276%23!bvstate%3Dct:r/pg:2/st:p/id:202263276+&cd=1&hl=en&ct=clnk&gl=us

    | redgatst
    1

  • Lately we have been applying structured data to the main content body of our client's websites. Our lead developer had a good question about HTML, however: in JSON-LD, what is the proper way to embed content from a data field that has HTML markup (i.e. p, ul, li, br tags) into mainContentOfPage? Should the HTML be stripped out or escaped somehow? I know that applying schema to the main body content is helpful for the Googlebot. However, should we keep the HTML? Any recommendations or best practices would be appreciated. Thanks!

    | RosemaryB
    0
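
    A sketch of the approach most often taken: strip the markup down to plain text before it goes into the JSON-LD, and let a proper JSON encoder handle the escaping of quotes and newlines (property names follow schema.org's WebPage / WebPageElement types):

        <script type="application/ld+json">
        {
          "@context": "https://schema.org",
          "@type": "WebPage",
          "mainContentOfPage": {
            "@type": "WebPageElement",
            "text": "Plain-text version of the body copy, with the p / ul / li markup stripped out and characters such as \" escaped by the JSON encoder."
          }
        }
        </script>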

  • Hi guys, I'm putting together a proposal for a new site and trying to figure out if it'd be better to (A) have the keyword split across multiple directories or (B) duplicate part of the keyword so the full phrase stays hyphenated. For example, for the topic of "Christmas decor" would you use: (A) - www.domain.com/Christmas/Decor (B) - www.domain.com/Christmas/Christmas-Decor In example B the word 'Christmas' is duplicated, which looks a little spammy, but the key term "Christmas decor" is in the URL without being broken up by directories. Which is stronger? Any advice welcome! Thanks guys!

    | JAR897
    1

  • Hi, We had a content manager request to delete a page from our site. Looking at the traffic to the page, I noticed there were a lot of inbound links from credible sites. Rather than deleting the page, we simply removed it from the navigation, so that a user could still access the page by clicking on a link to it from an external site. Questions: Is it bad for SEO to have a page that is not directly accessible from your site? If no: do we keep this page in our Sitemap, or remove it? If yes: what is a better strategy to ensure the inbound links aren't considered "broken links" and also to minimize any negative impact to our SEO? Should we delete the page and 301 redirect users to the parent page for the page we had previously hidden?

    | jnew929
    0

  • I have previously searched the forum and could not find a definitive answer on this subject so would appreciate any guidance. I have just joined a new company; we have a .co.uk site which gets lots of traffic. We have a .com site which is targeting USA and .com/de/ targeting Germany. 'hreflang' is configured on the .com (between the USA and German sites) but not on .co.uk. This means that in the eyes of search engines (and Moz Pro) the 2 domains are competitors (and the .co.uk has much more presence than the .com in the USA). I know how to fix this and I am in the process of doing so. My question is whether it would make sense to migrate the .co.uk site to .com. As previously mentioned, the .co.uk site already does very well both in the UK and around the world (as our product is well known in our niche). As .co.uk can only primarily be targeted to the UK, would our global reach increase enough to justify migrating it to .com? We have dealers/distributors in maybe 30 countries and are continuing to expand; we will at some point add additional languages, so my suggestion is that we migrate now as the authority of the .co.uk will help the emerging markets as well as increase our visibility in markets that are not currently primary targets. We are also in the process of hiring new staff specifically to focus on Content Marketing. So again this suggests having the 1 domain will make sense in the long run (as any value gained from content marketing success will be seen by all country/language focussed sites). I am also planning to rebuild the sites in the next few months as the current ones are not fit for purpose, so the migration would coincide with this (I know this is not ideal). Apologies for the lengthy question; I hope the additional background information will help in providing some feedback to help me make the decision. David

    | JamesCrossland
    0

  • Capitalization of the first letter of each word in the meta description catches more attention, but might this lead to Google ignoring the meta description more frequently? Same for an occasional capitalized FREE in the meta description. Anybody have experience with this?

    | lcourse
    1

  • I'm currently in debate with our 508 compliance team over the use of alt tags on images. For SEO, it is best practice to use alt tags so that readers can tell what the image represents. However, they are arguing that these images should NOT have alt text as it doesn't add anything for the disability screen reader - the image text would be repetitive with the text on the page. I feel they are taking the "decorative" image concept in 508 compliance too far. Its intention is for images like bullets, etc. that truly are decorative in nature and add no benefit to the reader. What are the community's thoughts on this? Have you ever run into a scenario where 508 is attempting to ruin SEO? Usually the 2 play nicely.

    | jpfleiderer
    0
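
    For reference, the dividing line both camps usually land on looks like the sketch below: a genuinely decorative image gets an empty alt attribute (so screen readers skip it) rather than no attribute at all, while an informative image gets descriptive text:

        <!-- informative image: the alt text carries what the image conveys -->
        <img src="/img/treatment-options-chart.png" alt="Chart comparing stage 2 treatment options">

        <!-- purely decorative image: empty alt, so assistive technology ignores it -->
        <img src="/img/divider-flourish.png" alt="">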

  • Looking for SEOs who have experience with resetting projects by migrating on to a new domain to shed either a manual or algorithmic penalty. My questions are: For algorithmic penalties, what is the best migration strategy to avoid inheriting any kind of baggage? 301, 302, establish no connection between the two sites? For manual penalties, what is the best migration strategy to avoid inheriting any kind of baggage? 301, 302, establish no connection between the two sites? Any other input on these kind of reset projects is appreciated.

    | spanish_socapro
    0

  • Hello, We're putting together a large piece of content that will have some interactive filtering elements. There are two types of filters, topics and object types. The architecture under the hood constrains us so that everything needs to be in URL parameters. If someone selects a single filter, this can look pretty clean: www.domain.com/project?topic=firstTopic
    or
    www.domain.com/project?object=typeOne The problems arise when people select multiple topics, potentially across two different filter types: www.domain.com/project?topic=firstTopic-secondTopic-thirdTopic&object=typeOne-typeTwo I've raised concerns around the structure in general, but it seems to be too late at this point so now I'm scratching my head thinking of how best to get these indexed. I have two main concerns: A ton of near-duplicate content and hundreds of URLs being created and indexed with various filter combinations added Over-reacting to the first point above and over-canonicalizing/no-indexing combination pages to the detriment of the content as a whole Would the best approach be to index each single topic filter individually, and canonicalize any combinations to the 'view all' page? I don't have much experience with e-commerce SEO (which this problem seems to have the most in common with) so any advice is greatly appreciated. Thanks!

    | digitalcrc
    0
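
    A sketch of the pattern floated in the question, using its example URLs: single-filter pages stay self-canonical, while multi-filter combinations point at the unfiltered view:

        <!-- on /project?topic=firstTopic (a page you want indexed) -->
        <link rel="canonical" href="https://www.domain.com/project?topic=firstTopic" />

        <!-- on /project?topic=firstTopic-secondTopic&object=typeOne (a combination page) -->
        <link rel="canonical" href="https://www.domain.com/project" />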

  • My site is at 4th place; 3 places above it is a Gumtree (similar to Yell, Yelp) listing. How can you figure out how difficult it would be to outrank those pages? I mean obviously the pages would have low PA and they are top based on the high DA of the site. This also goes back to keyword research and difficulty: when I'm doing keyword research I see a Wikipedia page in the top 5, or a yell.com page, or perhaps an article on forbes.com outranking your site. Typically the problem seems to be Google giving a lot of credit to these pages' rankings based on the high DA rather than the PA of the pages. How would you gauge the difficulty of that keyword then, if the competition are pages with very high DA (which is impossible to compete with) but low PA? Thanks

    | magusara
    2

  • Hey Mozzers, I'll be moving several sites from HTTP to HTTPS in the coming weeks (same brand, multiple ccTLDs). We'll start on a low traffic site and test it for 2-4 weeks to see the impact before rolling out across all 8 sites. Ideally, I'd like to simply 301 redirect the HTTP version page to the HTTPS version of the page (to get that potential SEO rankings boost). However, I'm concerned about the potential drop in rankings, links and traffic. I'm thinking of alternative ways and so instead of the 301 redirect approach, I would keep both sites live and accessible, and then add rel canonical on the HTTPS pages to point towards HTTP so that Google keeps the current pages/ links/ indexed as they are today (in this case, HTTPS is more UX than for SEO). Has anyone tried the rel canonical approach, and if so, what were the results? Do you recommend it? Also, for those who have implemented HTTPS, how long did it take for Google to index those pages over the older HTTP pages?

    | Steven_Macdonald
    0

  • Greetings, I just discovered that some of our content was produced with <br> tags in the title tag. Example: <title>Diabetes Symptoms <br> In Women Over 40</title> My gut says this is bad for SEO, but I couldn't find a definitive answer on the web, so I thought I would ask the community of gurus here at Moz. 🙂 Thanks in advance for any reply. Kind regards, Eric

    | Eric_Lifescript
    0

  • My company is looking at consolidating 5 websites that it has running on Magento, WordPress, Drupal and a few other platforms onto the same domain. Currently they're all on subdomains but we'd like to consolidate the subdomains to folders for UX and SEO potential. Currently they look like this: shop.example.com blog.example.com uk.example.com us.example.com After the reverse proxy they'll look like this: example.com/uk/ example.com/us/ example.com/us/shop example.com/us/blog I'm curious to know how much link juice will be lost in this switch. I've read a lot about site migration (especially the Moz example). A lot of these guides/case studies just mention using a bunch of 301s, but it seems they'd probably be using reverse proxies as well. My questions are: Is a reverse proxy equal to or worse/better than a 301? Should I combine a reverse proxy with a 301 or rel canonical tag? When implementing a reverse proxy, will I lose link juice (= ranking)? Thanks so much! Jacob

    | jacob.young.cricut
    0
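
    A minimal sketch of how the two pieces can fit together, assuming nginx sits in front (hostnames are from the question; the backend address and everything else is illustrative): the folder is reverse-proxied to the platform that used to answer on the subdomain, and the old subdomain 301s to the new folder so only one URL stays live:

        # nginx: serve the old blog platform under /us/blog/ on the main domain
        server {
            server_name example.com;

            location /us/blog/ {
                # point this at the internal backend host/port, not the public name,
                # so requests don't loop through the redirect below
                proxy_pass       http://10.0.0.5:2368/;
                proxy_set_header Host            $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

        # 301 the old subdomain into the new folder
        server {
            server_name blog.example.com;
            return 301 https://example.com/us/blog$request_uri;
        }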

  • Hi to all the SEO experts here, I am working on SEO of my 4-month-old website. For example, it's 'abz.com'. We like the brand name 'abz' for the business and we are able to SEO well for the keyword 'abz'. However, we would also like to target the keyword 'abc'. There are 2 reasons for that: 'abc' is an actual word, so there is a possibility that our users may type 'abc' instead of 'abz' to reach us. For 'abc', the top result is 'abct.us', which is a site that is adult in nature. Also our website doesn't feature at all in the results. This is hitting us hard in terms of our brand visibility. So the questions are: 1. How do we feature in the results of a keyword search for 'abc'? Will the following approach work: Buying an available domain 'abc.co.in', using it to feature in 'abc' results and 301 redirecting it to 'abz.com'; Having 'abc' in the page meta (title and description) - this is hard for us, since we need to rethink our taglines and copyrights. 2. If we search for 'abz', Google says "Do you mean abc". Is there a way to not have this suggestion? It would be helpful to have some more ideas for this problem.

    | manasag
    0

  • We're looking at expanding our robots.txt; we currently don't have the ability to noindex/nofollow. We're thinking about adding the following: Checkout, Basket. Then possibly: Price, Theme, SortBy and other misc filters. What do you include?

    | ThomasHarvey
    0
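
    A sketch of what that might look like - bearing in mind robots.txt only controls crawling, not indexing, so anything already indexed won't disappear just because it is disallowed (paths and parameter names are illustrative):

        User-agent: *
        Disallow: /checkout/
        Disallow: /basket/
        # and, if the filter/sort parameters should stay out of the crawl:
        Disallow: /*?*sort=
        Disallow: /*?*price=
        Disallow: /*?*theme=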

  • My domain is currently targeting the US, but I'm building out sub-folders that will need to geo-target France, England, and Spain. Each country will have its own sub-folder, and be professionally translated (domain.com/france). Other than the hreflang tags, what are other best practices I can implement? Can Google Webmaster Tools geo-target by subfolder? Any suggestions would be appreciated. Thanks Justin

    | Rhythm_Agency
    0

  • Hi guys, I hope to find some good answers to my questions, because some of the best SEOs in the world are here. I've been doing SEO as a hobby for a few years and had some very good results before the latest Google updates. Now I'm not able to rank any website for competitive keywords. The last project I started is this website (a man and van hire company targeting the London market).
    The problem is that I can't rank even in the Top 100 in Google UK for the main keywords like: "man and van london", "man and van service london", "london man & van"...
    The site has over 1k good backlinks (according to Ahrefs), unique content, titles and descriptions but still can't rank well. Am I missing something? A few years back that was more than enough to rank well in Google.
    I will be very grateful to hear your suggestions and opinions.

    | nasi_bg
    0

  • Hi Guys, I'm working on a project (premium-hookahs.nl) where I stumbled upon a situation I can’t address. Attached is a screenshot of the crawled pages in Search Console. History: Due to technical difficulties this webshop didn’t always noindex filter pages, resulting in thousands of duplicated pages. In reality this webshop has fewer than 1000 individual pages. At this point we took the following steps to resolve this: Noindex the filter pages. Exclude those filter pages in Search Console and robots.txt. Canonical the filter pages to the relevant category pages. This however didn’t result in Google crawling fewer pages. Although the implementation wasn’t always sound (technical problems during updates), I’m sure this setup has been the same for the last two weeks. Personally I expected a drop in crawled pages but they are still sky high. Can’t imagine Google visits this site 40 times a day. To complicate the situation: we’re running an experiment to gain positions on around 250 long-tail searches. A few filters will be indexed (size, color, number of hoses and flavors) and three of them can be combined. This results in around 250 extra pages. Meta titles, descriptions, h1s and texts are unique as well. Questions: - Excluding in robots.txt should result in Google not crawling those pages, right? - Is this number of crawled pages normal for a website with around 1000 unique pages? - What am I missing?

    | Bob_van_Biezen
    0

  • There are websites that have linked to my site. Whenever I hover over the link I see my direct website URL, and I am not seeing "nofollow" when viewing the source code, so I assume these are passing link juice. However, when I click on the link it directs briefly to shareasale (affiliate account) in the address bar, but then quickly directs back to my website URL as intended. I was curious whether these good links I am acquiring truly pass juice, or whether, since they briefly pass through an affiliate site, that cancels or dilutes the link juice. Also I am noticing when inspecting the element that after the HREF it says class="external-link". I am just not sure if my link building efforts are being ruined by having an affiliate account running. I did not tell them I had one. I guess they are searching to see that I have one and trying to make a few extra commission dollars.

    | nchachula
    0

  • There are some pages where my competitor is ranking well. Also, we have done page optimization and it is 100% for the page title keywords, as I'm going to use the same title as the competitor. Will this affect me? Please suggest what I should do.

    | Rahim119
    0

  • Hi, does anyone know the correct hreflang for the UK for this Google Webmaster error: International Targeting | Language > 'en-GB' - no return tags (sitemaps). Sitemap provided URLs and alternate URLs in 'en-GB' that do not have return tags. Thank you all

    | Taiger
    0
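
    That error usually means the en-GB URLs reference alternates that don't reference them back. In a sitemap, every URL in a language group has to carry the full set of annotations, including one pointing at itself - a sketch with illustrative URLs:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
                xmlns:xhtml="http://www.w3.org/1999/xhtml">
          <url>
            <loc>https://www.example.com/en-gb/page/</loc>
            <xhtml:link rel="alternate" hreflang="en-GB" href="https://www.example.com/en-gb/page/"/>
            <xhtml:link rel="alternate" hreflang="en-US" href="https://www.example.com/en-us/page/"/>
          </url>
          <url>
            <loc>https://www.example.com/en-us/page/</loc>
            <xhtml:link rel="alternate" hreflang="en-GB" href="https://www.example.com/en-gb/page/"/>
            <xhtml:link rel="alternate" hreflang="en-US" href="https://www.example.com/en-us/page/"/>
          </url>
        </urlset>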

  • Consider this example, because I want to be clear about what I mean. You have two websites. Let's call them www.a.com and www.b.com. On www.a.com/some/page, there is an iframe something like this:
    <iframe src="www.b.com/some/special/path"></iframe>
    The content of this iframe is a bunch of pictures, text and numbers, as well as a group of links, linking each picture to www.b.com. For example, the links might be:
    www.b.com/content/1
    www.b.com/content/2
    www.b.com/content/3 Questions: 1) When Google crawls www.a.com/some/page, does it pass link juice to www.b.com/content/*? 2) Does Google instead consider these to be internal links within b.com itself, because links to www.b.com/content/* are actually from b.com itself, since the domain of the iframe is actually www.b.com/some/special/path? 3) Is there any amount of link juice passed from www.a.com/some/page to www.b.com/some/special/path, because this is the src= element of an iframe that a.com is hosting? Consider an alternative setup, where instead of using an iframe the contents of the above described iframe is added to the page dynamically using JavaScript and a call to an API endpoint at b.com, resulting in these links being added directly to the body of a.com without being wrapped in an iframe element. Questions:
    4) Do these links that were created after page load still get crawled and credited by Google? (I have heard in the past that Google was going to start crawling JavaScript, I just don't know if this is known for a fact yet.)
    5) Do links created on the client side hold the same weight as a link that was served directly via the backend HTML generation? If both the links within the iframe and the links within the JavaScript embed method pass link juice, is one preferred over the other? Is one known to be more effective than the other? Thanks!

    | A Former User
    0

  • Hi, About 10 months I switched from HTTP to HTTPS. I then switched back (long story). I noticed that Screaming Frog is picking up the HTTP and HTTPS version of the site. Maybe this doesn't matter, but I'd like to know why SF is doing that. The URL is: www.aerlawgroup.com Any feedback, including how to remove the HTTPS version, is greatly appreciated. Thanks.

    | mrodriguez1440
    0

  • Hi all, I work for a retailer and I've crawled our website with RankTracker for optimization suggestions. The main suggestion is "Pages with excessive number of links: 4178". The page with the largest number of links has 634 links (627 internal, 7 external), the lowest 382 links (375 internal, 7 external). However, when I view the source on any one of the example pages, it becomes obvious that the site's main navigation header contains 358 links, so every new page starts with 358 links before any content. Our rivals and much larger sites like argos.co.uk appear to have just as many links in their main navigation menu. So my questions are: 1. Will these excessive links really be causing us a problem, or is it just 'good practice' to have fewer links?
    2. Can I use 'nofollow' to stop Google etc. from counting the 358 main navigation links?
    3. Is having 4000+ pages of your website all dumbly pointing to other pages a help or a hindrance?
    4. Can we 'minify' this code so it's cached on first load and therefore loads faster? Thank you.

    | Bee159
    0

  • Good evening Mozzers. Couple of questions which I hope you can help with. Here's the first. I am wondering, are we likely to see ranking changes if we remove the .html from the site's URLs? For example website.com/category/sub-category.html Change to: website.com/category/sub-category/ We will of course make sure we 301 redirect to the new, user-friendly URLs, but I am wondering if anyone has had previous experience of implementing this change and how it has affected rankings. By having the .html in the URLs, does this stop link juice being flowed back to the root category? Second question: If one page can be loaded with and without a forward slash "/" at the end, is this a duplicate page, or would Google consider this the same page? Would like to eliminate duplicate content issues if this is the case. For example: website.com/category/ and website.com/category Duplicate content/pages?

    | Jseddon92
    0
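
    For the first question, a sketch of the usual Apache approach: one external 301 from the old .html address to the extensionless form, plus an internal rewrite so the new address still serves the existing file (rule details are illustrative - test before deploying):

        RewriteEngine On

        # external 301: /category/sub-category.html -> /category/sub-category/
        RewriteCond %{THE_REQUEST} \s/([^?\s]+)\.html[\s?] [NC]
        RewriteRule ^ /%1/ [R=301,L]

        # internal rewrite: /category/sub-category/ -> the real .html file
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.+)/$ /$1.html [L]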

  • I have compared my traffic from Jan-15 to Dec-15 and found traffic increased but pageviews decreased on a few pages. Is this an issue?

    | vivekrathore
    0

  • I'm not sure if I should be including old URLs (content) that are being redirected (301) to new URLs (content) in my sitemap.xml. Does anyone know if it is best to include or leave out 301ed URLs in an XML sitemap?

    | Jonathan.Smith
    0

  • Hi Mozzers I know that it’s best practice to block Google from indexing internal search pages, but what’s best practice when “the damage is done”? I have a project where a substantial part of our visitors and income lands on an internal search page, because Google has indexed them (about 3 %). I would like to block Google from indexing the search pages via the meta noindex,follow tag because: Google Guidelines: “Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don't add much value for users coming from search engines.” http://support.google.com/webmasters/bin/answer.py?hl=en&answer=35769 Bad user experience The search pages are (probably) stealing rankings from our real landing pages Webmaster Notification: “Googlebot found an extremely high number of URLs on your site” with links to our internal search results I want to use the meta tag to keep the link juice flowing. Do you recommend using the robots.txt instead? If yes, why? Should we just go dark on the internal search pages, or how shall we proceed with blocking them? I’m looking forward to your answer! Edit: Google have currently indexed several million of our internal search pages.

    | HrThomsen
    0
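
    The crux here: once the search URLs are blocked in robots.txt, Google can no longer fetch them to see the noindex at all. So the tag-based route generally looks like the sketch below (the path is illustrative), with the Disallow held back until the pages have actually dropped out of the index:

        <!-- on every internal search results page, while it remains crawlable -->
        <meta name="robots" content="noindex, follow">

        # robots.txt - only once the search URLs are out of the index
        User-agent: *
        Disallow: /search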

  • Hi SeoMoz community! I have a software product, which our clients implement onto their websites. It is like a pop up box. I know that backlinks are very important for SEO ranking, and I really want to give our clients 2 options of product: 1. you can get the free/cheaper option if you use the code which has a keyworded backlink to our site on it 2. you can pay small fee if you don't want to use the version with a link to our site on it Now, the problem is that the product is written entirely in Javascript, and I don't think that Google crawls this, do they? Is there a way around this? Thanks for your help!

    | qdigi
    0

  • Just noticed a web developer I work with has been copying tweets into the website - and these are displayed (and saved) one page at a time across hundreds of pages (this is so they can populate a twitter feed, I am told). How would you tackle this, now that the deed's been done? This is in Drupal. Your thoughts would be welcome as this is a new one to me. Thanks, Luke

    | McTaggart
    0

  • Greetings, I have an interesting challenge for you. Well, I suppose "interesting" is an understatement, but here goes. Our company is a women's health site. However, over the years our content mix has grown to nearly 50/50 between unique health / medical content and general lifestyle/DIY/well being content (non-health). Basically, there is a "great divide" between health and non-health content. As you can imagine, this has put a serious damper on gaining ground with our medical / health organic traffic. It's my understanding that Google does not see us as an authority site with regard to medical / health content since we "have two faces" in the eyes of Google. My recommendation is to create a new domain and separate the content entirely so that one domain is focused exclusively on health / medical while the other focuses on general lifestyle/DIY/well being. Because health / medical pages undergo an additional level of scrutiny per Google - YMYL pages - it seems to me the only way to make serious ground in this hyper-competitive vertical is to be laser targeted with our health/medical content. I see no other way. Am I thinking clearly here, or have I totally gone insane? Thanks in advance for any reply. Kind regards, Eric

    | Eric_Lifescript
    0

  • Hi all, I wondered if you could help me at all please? We run a site called getinspired365.com (which is not optimised) and in the last 2 weeks have tried to optimise some new pages that we have added. For example, we have optimised this page - http://getinspired365.com/lifes-a-bit-like-mountaineering-never-look-down This page was added to Google's index via webmaster tools. When I then did a search for the full quote it came back 2nd in Google's search. If I did a search for half the quote (Life is a bit like mountaineering) it also ranked 2nd. We had another quote page that we'd optimised that displayed similar behaviour (it ranked 4th). But then for some reason when I now do the search it doesn't rank in the top 100 results. This, despite, an unoptimised "normal" page ranking 4th for a search such as: Thousands of geniuses live and die undiscovered. So our domain doesn't seem to be penalised as our "normal" pages are ranking. These pages aren't particularly well designed from an SEO standpoint. But our new pages - which are optimised - keep disappearing from Google, despite the fact they still show as indexed. I've rendered the pages and everything appears fine within Google Webmaster Tools. At a bit of a loss as to why they'd drop so significantly? A few pages I could understand but they've all but been removed. Any one seen this before, and any ideas what could be causing the issue? We have a different URL structure for our new pages in that we have the quote appear in the URL. All the content (bar the quote) that you see in the new pages are unique content that we've written ourselves. Could it be that we've over optimised and Google view these pages as spam? Many thanks in advance for all your help.

    | MichaelWhyley
    0

  • Does anyone know of any differences SEO-wise when running an adult toy site versus a normal eCommerce site? Are there any tips or suggestions worth knowing to achieve rankings faster? Thanks,

    | the-gate-films
    0

  • Hello Amazing SEO Community! Quick Q for a client with a TON of duplicate content. (yikes!) My client is currently undertaking a large SEO project around canonical tagging for their thousands of duplicate pages. Currently, one product sits on multiple URLs and they are being indexed as different pages (with the same content). The issue is found across all products and other pages, and across their international sites as well. One core challenge they face now is lack of time/resources from their developer side. The solution we see to the duplicate content is to manually add a canonical tag to each of our tens of thousands of pages. Their content management system is Magento. Has anyone ever tackled canonicalization for a large site that uses Magento? Any more efficient solutions to manual tagging is ideal. Thanks in advance for your input. -Bonnie

    | accpar
    0

  • For some reason every webpage of our website (www.nathosp.com)  has a rel=canonical tag. I'm not sure why the previous SEO manager did this, but we don't have any duplicate content that would require a canonical tag. Should I remove these tags? And if so, what's the advantage - or disadvantage of leaving them in place? Thank you in advance for your help. -Josh Fulfer

    | mhans
    1

  • Just a question (or questions) I have wondered about. What's the difference, besides the actual encoding, between the three schema.org formats (Microdata, RDFa, and JSON-LD)? Why have three? Why not just the one? Seems to me that Microdata is the easiest, but maybe I am wrong. Is there a reason to use one versus another? I have not found anything explaining this on schema.org - I suppose this is just a discussion versus getting one right or wrong answer. I am just curious about the opinions of people in the Moz community. Unless of course there is one answer. I'll take that too.

    | Brian_Dowd
    1
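
    All three carry the same schema.org vocabulary; the practical difference is where the statements live - woven into the visible HTML as attributes (Microdata and RDFa) or gathered into one detached script block (JSON-LD). The same organisation marked up two of the three ways, as an illustration:

        <!-- Microdata: interleaved with the visible markup -->
        <div itemscope itemtype="https://schema.org/Organization">
          <span itemprop="name">Acme Ltd</span>
          <a itemprop="url" href="https://www.acme.example">acme.example</a>
        </div>

        <!-- JSON-LD: the same statements in one separate script block -->
        <script type="application/ld+json">
        {
          "@context": "https://schema.org",
          "@type": "Organization",
          "name": "Acme Ltd",
          "url": "https://www.acme.example"
        }
        </script>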

  • My client owns a brand and came to me with two ecommerce websites. One website sells his specific brand product and the other sells general products in his niche (including his branded product). Question is my client wants to rank each website for basically the same set of keywords. We have two choices I'd like feedback on- Choice 1 is to rank both websites for same keyword groupings so even if they are both on page 1 of the serps then they take up more real estate and share of voice. are there any negative possibilities here? Choice 2 is to recommend a shift in the position of the general industry website to bring it further away from the industry niche by focusing on different keywords so they don't compete with each other in the serps. I'm for choice 1, what about you?

    | Rich_Coffman
    0
