
Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Intermediate & Advanced SEO

Looking to level up your SEO techniques? Chat through more advanced approaches.


  • I am getting that error on my product pages. This link is in the errors http://www.wolfautomation.com/drive-accessory-safety-sto-module-i500 but when I look at it on mobile it is fine.

    | Tylerj
    0

  • I have two domains with the same name and the same content. How do I solve that problem? Do I need to change the content on my main website? My hosting has different plans, but with the same features. Many pages have the same content, and it is not possible to change the content. What is the solution for that? Please let me know how to solve this issue.

    | Alexa.Hill
    0

  • My client is trying to achieve a global presence in select countries, and then track traffic from their international pages in Google Analytics. The content for the international pages is pretty much the same as for the USA pages, but the form and a few other details are different due to how product licensing has to be set up. I don't want to risk losing ranking for existing USA pages due to issues like duplicate content, etc. What is the best way to approach this? This is my first foray into this and I've been scanning the Moz topics, but a number of the conversations are going over my head, so suggestions will need to be pretty simple 🙂 Is it a case of adding hreflang code to each page and creating different URLs for tracking? For example:
    URL for USA: https://company.com/en-US/products/product-name/
    URL for Canada: https://company.com/en-ca/products/product-name/
    URL for German-language content: https://company.com/de/products/product-name/
    URL for the rest of the world: https://company.com/en/products/product-name/
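    For illustration, a minimal sketch of the hreflang annotations that approach implies for the URLs above, assuming each page lists every variant including itself, and that the /en/ version doubles as the x-default (that last part is an assumption, not something stated in the question):

    ```html
    <!-- Placed in the <head> of every language/region variant of the product page -->
    <link rel="alternate" hreflang="en-us" href="https://company.com/en-US/products/product-name/" />
    <link rel="alternate" hreflang="en-ca" href="https://company.com/en-ca/products/product-name/" />
    <link rel="alternate" hreflang="de" href="https://company.com/de/products/product-name/" />
    <link rel="alternate" hreflang="en" href="https://company.com/en/products/product-name/" />
    <link rel="alternate" hreflang="x-default" href="https://company.com/en/products/product-name/" />
    ```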

    | Caro-O
    1

  • Again I am facing the same problem with another WordPress blog. Google has suddenly started to cache a different domain in place of mine, and to cache my domain in place of that domain. Here is an example page of my site which is wrongly cached on Google (the same thing is happening with many other pages as well): http://goo.gl/57uluq
    That duplicate site (protestage.xyz) appears to be fully copied from my client's site, but it is now showing all pages as 404; in Google's cache, though, it shows my site's pages. A site:protestage.xyz query shows only pages of my site, but when we try to open any page it returns a 404 error. My site has been scanned by sucuri.net Senior Support for malware and there is none; they scanned all files, the database, etc., and no malware was found on my site. As per Sucuri.net Senior Support:
    "It's a known Google bug. Sometimes they incorrectly identify the original and the duplicate URLs, which results in messed-up rankings and query results. As you can see, the protestage.xyz site was hacked, not yours, and the hackers created 'copies' of your pages on that hacked site. This is why they do it: the 'copy' (doorway) redirects web searchers to a third-party site: http://www.unmaskparasites.com/security-report/?page=protestage.xyz It was not the only site they hacked, so they placed many links to that 'copy' from other sites. As a result, Google decided that the copy might actually be the original, not the duplicate, so they basically hijacked some of your pages in search results for some queries that don't include your site's domain. Nonetheless, your site still does quite well and outperforms the spammers, for example in this query: https://www.google.com/search?q=%22We+offer+personalized+sweatshirts%22%2C+every+bride#q=%22GenF20+Plus+Review+Worth+Reading+If+You+are+Planning+to+Buy+It%22 Overall, though, I think both the Google bug and the spammy duplicates have a negative effect on your site. We see such hacks every now and then (on both sides: the hacked sites and the copied sites), and here's what you can do in this situation. It's not a hack of your site, so you should focus on preventing the copying of your pages:
    1. Contact the protestage.xyz site, tell them that their site is hacked, and show them the hacked pages: https://www.google.com/search?q=%22We+offer+personalized+sweatshirts%22%2C+every+bride#q=%22GenF20+Plus+Review+Worth+Reading+If+You+are+Planning+to+Buy+It%22 Hopefully they will clean their site up and your site will have unique content again. Here's their email: flang.juliette@yandex.com
    2. You might want to send one more complaint to their hosting provider (OVH.NET) at abuse@ovh.net, explaining that the site they host stole content from your site (show the evidence) and that you suspect the site is hacked.
    3. Try blocking the IPs of the Aruba hosting on your site (real visitors don't use server IPs). This will prevent that site from copying your content (if they do it via a script on the same server). I currently see that site using this IP address: 149.202.120.102. I think it would be safe to block anything that begins with 149.202. This .htaccess snippet should help (you might want to test it):
    #--------------
    Order Deny,Allow
    Deny from 149.202.120.102
    #--------------
    4. Use rel=canonical to tell Google that your pages are the original ones: https://support.google.com/webmasters/answer/139066?hl=en It won't help much if the hackers keep copying your pages, because they usually replace your rel=canonical with their own, so Google can't decide which one is real. But without rel=canonical, hackers have a better chance of hijacking your search results, especially if they use rel=canonical and you don't.
    I should admit that this process may be quite long. Google will not return your previous rankings overnight, even if you manage to shut down the malicious copies of your pages on the hacked site. Their indexes will still have some mixed signals (side effects of the black-hat SEO campaign) and it may take weeks before things normalize. The same is true for the opposite situation: the traffic wasn't lost right after the hackers created the duplicates on other sites; the effect built up over time as Google collected more and more signals. Plus, sometimes they run scheduled spam/duplicate cleanups of their index. It's really hard to tell what the last straw was, since we don't have access to Google's internals. In practice, however, if you see significant changes in Google search results, it's usually not because of something you just did; in most cases, it's because of something that Google has observed for some period of time."
    Kindly help me figure out whether we can actually do anything to get the site indexed properly again. PS: it happened with this site earlier as well, and that time I had to change the domain to get rid of this problem after I could not find any solution for months, and now it has happened again. Looking forward to a possible solution. Ankit
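    For reference on point 4 above, the canonical is a single tag in the <head> of each original page; a minimal sketch using a placeholder URL, since the affected domain isn't named in the question:

    ```html
    <!-- Hypothetical example of a self-referencing canonical on the original page -->
    <link rel="canonical" href="https://www.example.com/original-article/" />
    ```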

    | killthebillion
    0

  • I worked with a team of developers to launch a new site back in March. I was (and still am) in charge of SEO for the site, including combining 4 sites into 1. I made sure 301 redirects were in place to combine the sites, and applied pretty much every SEO tactic I can think of to make sure the site would maintain rankings following launch. However, here we are 6 months later and YoY organic traffic is down 70% on average. Anyone mind taking a look at http://www.guestguidepublications.com and seeing if there's a glaring mistake I'm missing?!?!?! Thanks ahead of time!

    | Annapurna-Digital
    1

  • I have an ecommerce marketplace that has new items added daily. In Search Console, my pages have always gone up almost every week. It hasn't increased in 5 weeks. We haven't made any changes to the site and the sitemap looks good. Any ideas on what I should look for?

    | EcommerceSite
    0

  • We are trying to eliminate tedium when developing complexly designed responsive navigations for mobile, desktop and tablet. The changes between breakpoints in our designs are too complex to be handled with css, so we are literally grabbing individual elements with javascript and moving them around. What we'd like to do instead is have two different navigations on the page, and toggle which one is on the DOM based on breakpoint. These navigations will have the same links but different markup. Will having two navigation components on the page at page load negatively impact our Google SEO rankings or potential to rank, even if we are removing one or the other from the DOM with JavaScript?
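    For illustration, a minimal sketch of the toggle being described: both navigations are present in the initial HTML (which is what crawlers would see), and JavaScript detaches whichever one doesn't apply at the current breakpoint. The element IDs and the 768px breakpoint are assumptions for illustration only:

    ```html
    <nav id="nav-desktop"><!-- desktop markup, same links --></nav>
    <nav id="nav-mobile"><!-- mobile markup, same links --></nav>
    <script>
      // Keep references so the detached nav can be re-attached if the breakpoint changes
      var desktopNav = document.getElementById('nav-desktop');
      var mobileNav  = document.getElementById('nav-mobile');
      var mount      = desktopNav.parentNode;

      function toggleNav() {
        var isDesktop = window.matchMedia('(min-width: 768px)').matches;
        var keep = isDesktop ? desktopNav : mobileNav;
        var drop = isDesktop ? mobileNav  : desktopNav;
        if (drop.parentNode) { drop.parentNode.removeChild(drop); } // take the unused nav off the DOM
        if (!keep.parentNode) { mount.appendChild(keep); }          // restore the other one (position is simplified here)
      }
      toggleNav();
      window.addEventListener('resize', toggleNav);
    </script>
    ```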

    | CaddisInteractive
    0

  • It appears Google is moving towards Rich Cards JSON-LD for all data: https://webmasters.googleblog.com/2016/05/introducing-rich-cards.html However, on an ecommerce site, when I have schema.org microdata structured data inline for a product and then add the JSON-LD structured data as well, Google treats that as two products on the page, even though they are the same product. To make the matter more confusing, Bing doesn't appear to support JSON-LD. I can go back to the inline structured data only, but that would mean that when Rich Cards for products eventually arrive, I won't be ready. What do you recommend I do for long-term SEO: go back to the old approach, or press forward with JSON-LD?
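    For context, a minimal sketch of the JSON-LD flavour being discussed (all values are placeholders). If equivalent microdata stays inline on the same page, testing tools will indeed report two Product entities, which is the duplication described above:

    ```html
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Example Product",
      "offers": {
        "@type": "Offer",
        "price": "19.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock"
      }
    }
    </script>
    ```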

    | K-WINTER
    0

  • Hey Everyone, I'm so happy to be a part of this community and to contribute knowledge where and when I can. I joined the community for one specific reason, and I hope to employ the help of everyone here in solving my SEO problem. I have a few years' experience in SEO/SEM and have been continuously learning, while adapting to continuous changes (I think we can all relate lol). At any rate, here is what I am experiencing frustration with. I'm the SEO Analyst for a company that is trying to compete for the keyword phrase "Lyft Promo Code". We have been trying to place on page one of Google for over a year now, to no avail. I have gotten my direct domain URL to appear on pages 1 & 2, but can't seem to get permalinks or "sub-URLs" indexed. If you google this phrase you will see what I mean. The top result is: http://rideshareapps.com/lyft-promo-code-credit/
    This URL has an aggregated rating and appears on page one for the aforementioned phrase. What we have managed to do, as I mentioned, is get www.couponcodeshero.com onto page two. However, we have noticed that the page-one trend is all permalinks, and when we have tried to emulate those pages' structure and index priority, we are unable to. Our page:
    http://couponcodeshero.com/lyft-promo-code-rideshare-guide/ I have run multiple on-page graders from many resources and have not been able to get this page indexed as a permalink on any page that directly correlates with the keyword phrase. In essence, I'm looking for some direction from individuals who may have experienced this before. I have spent a good amount of time Googling and searching forum databases but cannot find any direct content that explains how to index a permalink. I hope to get some great ideas from the individuals here! If you do know of any articles or even previously answered questions here, please direct me there. It is only my intention to add value to the community! Schieler Mew
    Number One Designs

    | Number_One_Deisgns
    0

  • Hi, I'm looking for an example/use case of someone whose site has been linked to from another using a Bitly, or other generic URL shortener link. I'm specifically interested in proving/disproving the value of the backlink in terms of a boost in SEO rankings. Ideally you somehow got a juicy backlink from a reputable site, but they accidentally linked to you using a Bitly or something, yet you saw a noticeable increase in your pages' search rankings, thus proving that a Bitly link still passes all SEO value. Or alternatively, you got that juicy backlink and noticed nothing at all, or not much, and are frustrated that they used a Bitly. I'm launching a study on this soon to identify the possible value behind short links as backlinks. Yes, I know that Matt Cutts says all short links are 301 redirects, which pass something like 99.9% of link juice. I'd just like to see some use cases on this. Thanks!

    | Rebrandly
    0

  • I have built a link on behalf of a client in a long, well-written article on a reputable website that accepts contributor accounts. I therefore control the link. I have since realised that the anchor text of the link could be optimized much better than it currently is (while still only being a partial match). Would I be punished by the algorithm for going in and changing the link? I know it's not 100% "natural," but then we're SEOs, and I don't think it's too implausible that a website owner might go in and do the same... Maybe if I add some text as well, it would make things look more natural?

    | zakkyg
    1

  • Hi fellow Mozians, I have come up against a doubt today which I would appreciate your thoughts on. I have always been convinced that the disavow tool can be used at any time as part of your backlink monitoring activities: if you see a dodgy backlink coming in, you should add it to your disavow file if you can't get it removed (which you probably can't). That is to say, the disavow tool can be used pre-emptively to make sure a dodgy link doesn't do your site any harm. However, this belief of mine has taken a bit of a beating this morning, as another SEO suggested that the disavow tool only has an effect if accompanied by a reconsideration request, and that you can only file a reconsideration request if you have some kind of manual action. By this logic, you can only disavow when you have a penalty. This theory was backed up by this Moz article from May 2013:
    https://moz.com/blog/google-disavow-tool
    The comments didn't do much to settle my doubts. This Matt Cutts video from November 2013 seems to confirm my belief, however:
    https://www.youtube.com/watch?time_continue=86&v=eFJZXpnsRsc It seems perfectly reasonable that Google does allow pre-emptive disavowing, not just because of the whole negative SEO issue, but because nasty links do happen naturally. Not all SEOs spend their waking hours building links which they know they will have to disavow later should a penalty hit at some point, and it seems reasonable that an SEO should be able to say "Link XYZ is nothing to do with me!" before Google exercises retribution. If, for example, you get hired by a company that HAD a penalty due to spammy link building in the past that has since been lifted, but you see that Google periodically discovers the occasional spammy link, it seems fair that you should be able to tell Google that you want to voluntarily remove any "credit" that that link is giving you today, so as to avoid a penalty tomorrow. Your help would be much appreciated. Many thanks indeed.

    | unirmk
    0

  • I know that back in '08 Google started crawling forms that use method=get, but not method=post. What's the latest? Is this still valid?
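    For reference, a minimal sketch of the distinction being asked about (the action URL and field name are placeholders):

    ```html
    <!-- GET: parameters appear in the URL; this is the form type the question says Google began crawling in 2008 -->
    <form method="get" action="/search">
      <input type="text" name="q">
    </form>

    <!-- POST: parameters travel in the request body; described above as not crawled -->
    <form method="post" action="/search">
      <input type="text" name="q">
    </form>
    ```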

    | Turkey
    0

  • We are having an odd thing happen with our mobile-friendly status. Google has rated the pages "Mobile Friendly" for almost a year now, while Bing says we fail its mobile-friendly test. We've tried changing the two things we are failing on in the Bing test, but that breaks the page for some users. The two things we are failing on in Bing are: Viewport not configured correctly - we have tried their suggested tag, but it breaks our pages on tablets. Page content does not fit device width - the page does fit devices fine, and Google has no problem with it. What do you suggest we do?
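    For reference, the viewport tag most commonly recommended for responsive pages is the single line below; whether this is exactly the tag Bing's tester suggested is an assumption:

    ```html
    <meta name="viewport" content="width=device-width, initial-scale=1">
    ```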

    | K-WINTER
    0

  • Hello everyone. I was reading this article on semrush.com, published last year, and I'd like to know your thoughts about it: https://www.semrush.com/blog/does-google-crawl-relnofollow-at-all/ Is that really the case? I thought that Google crawls and "follows" nofollow-tagged links even though it doesn't pass any PR to the destination link. If instead Google really doesn't crawl internal links tagged as "nofollow", can that really help with crawl budget?

    | fablau
    0

  • Hi, We have blogs set up in each of our markets, for example http://blog.telefleurs.fr, http://blog.euroflorist.nl and http://blog.euroflorist.be/nl. Each blog is localized correctly so FR has fr-FR, NL has nl-NL and BE has nl-BE and fr-BE. All our content is created or translated by our Content Managers. The question is - is it safe for us to use a piece of content on Telefleurs.fr and the French translated Euroflorist.be/fr, or Dutch content on Euroflorist.nl and Euroflorist.be/nl? We want to avoid canonicalising as neither site will take preference. Is there a solution I've missed until now? Thanks,
    Sam

    | seoeuroflorist
    0

  • Hi Mozzers! I have a question on mass-uploading low-quality product pages. We have a huge catalogue of products and our product managers are looking to mass reference 17,000 new products quickly on the website. Obviously, this will mean the content will somehow have to be made unique, which would take a huge amount of resource. Apart from this issue, will adding this many new product pages in one go be bad for SEO? And if we do manage to make the content unique, but not high quality, we'll have 17,000 new low-quality product pages - will this reduce our domain authority? Becky

    | BeckyKey
    1

  • We're changing our website's URL structures, this means all our site URLs will be changed. After this is done, do we need to update the old inbound external links to point to the new URLs? Yes the old URLs will be 301 redirected to the new URLs too. Many thanks!

    | Jade
    1

  • Hi y'all. I'm transitioning our http version of the website to https. Important question: Do images have to have 301 redirects? If so, how and where? Please send me a link or explain best practices. Best, Shawn

    | Shawn124
    1

  • Hello, On a webpage I have multiple tabs, each with its own specific content. These are AJAX/JS tabs, and if Google only finds the first tab when the page loads, the content would be too thin. What do you suggest as an implementation, given that Google can crawl and render more JS nowadays but deprecated its AJAX crawling scheme a while back? I was thinking of an implementation where, when JS is disabled, the tabs collapse under each other with the content showing, and with JS enabled they render as tabs. This is quite a common implementation for tabbed-content plugins on WordPress as well. Also, Google has commented that hidden/expandable content counts for much less, even with the above JS fix. I look forward to your thoughts on this. Thanks, Conrad
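    A minimal sketch of the fallback described above, assuming all tab content ships in the initial HTML and JavaScript only layers the tab behaviour on top (class names are placeholders):

    ```html
    <div class="tabs">
      <section class="tab-panel" id="tab-1">
        <h2>Tab one heading</h2>
        <p>Full tab-one content is in the HTML at load time.</p>
      </section>
      <section class="tab-panel" id="tab-2">
        <h2>Tab two heading</h2>
        <p>Full tab-two content is also present, simply stacked when JS is off.</p>
      </section>
    </div>
    <script>
      // With JavaScript available, flag the container so CSS/JS can turn the stacked sections into tabs
      document.querySelector('.tabs').classList.add('js-tabs');
      // ...tab-switching logic would go here; without JS the sections stay stacked and readable
    </script>
    ```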

    | conalt
    1

  • Hi Guys, I am currently working on a site whose organic traffic suffered (and is still suffering) a huge drop: from a consistent 300-400 organic visits a day to almost zero. This happened as soon as the new site went live. I am now digging to find out why. 301s were put in place (over 2,500 of them) and there are still over 1,100 outstanding after reviewing Search Console this morning. Having looked at the redirect file that was put in place when the new site went live, it all looks OK, apart from the redirects looking like this: http://www.physiotherapystore.com/ to http://physiotherapystore.com/ where the new URL is missing the www. I am concerned this is causing a large duplicate issue, as both www. and non-www. work fine. Am I right to be concerned, or is this something not to worry about?

    | HappyJackJr
    0

  • Hi all, I have come across the weirdest situation ever in my SEO career. Google is displaying a description in organic results for a brand term, under the website URL, that doesn't exist ANYWHERE on the website, but this description does appear on some directory sites created back in 2002 or so. Is there a possibility that Google is pulling info from directory sites and displaying it as the description in the organic results? I am super confused! Help needed! Thanks

    | Malika1
    0

  • Good day!
    We are thinking about replacing a traditional menu on an e-commerce website with a Shop button like on Amazon, with a dropdown and expandable sub-menus. Current menu: Category 1 | Category 2 | Category 3 | ... New menu: Shop | Search bar The Shop menu would expand on mouse hover. When clicked, it would link to a directory like on Amazon: https://www.amazon.ca/gp/site-directory/. Is there anything we should be worried about (ex. link juice, engagement) or considerations to think about (CSS-based vs JS)? Thanks for your time!
    Ben
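    For illustration, a minimal CSS-only sketch of the hover-expanded Shop menu being considered, with plain links in the markup so they are present regardless of the hover behaviour (class names and categories are placeholders):

    ```html
    <nav class="main-nav">
      <ul>
        <li class="has-dropdown">
          <a href="/shop/">Shop</a>
          <ul class="dropdown"><!-- hidden by default, shown on hover via CSS below -->
            <li><a href="/category-1/">Category 1</a></li>
            <li><a href="/category-2/">Category 2</a></li>
            <li><a href="/category-3/">Category 3</a></li>
          </ul>
        </li>
      </ul>
    </nav>
    <style>
      .main-nav .dropdown { display: none; }
      .main-nav .has-dropdown:hover .dropdown { display: block; }
    </style>
    ```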

    | AxialDev
    0

  • Hi, We have rel=canonical set up on our ecommerce site but Google is still indexing pages that have rel=canonical. For example, http://www.britishbraces.co.uk/braces/novelty.html?colour=7883&p=3&size=599 http://www.britishbraces.co.uk/braces/novelty.html?p=4&size=599 http://www.britishbraces.co.uk/braces/children.html?colour=7886&mode=list These are all indexed but all have rel=canonical implemented. Can anyone explain why this has happened?

    | HappyJackJr
    0

  • Hello Everyone, I have a WordPress site which, for the last 20 days, has been generating links like these, for example: http://www.domainname.com/game/965/wiki/キャラクター図鑑_レアリティ(★★★)_【ID:675】ワッツ・ステップニー http://www.domainname.com/nkpghfu_13356_gvgjq_tfjhnkt_jsj_296_82566_673_567_245 This is a screenshot from Webmaster Tools: http://prnt.sc/ccwh0e Can any expert please check and tell me how I am getting these links, and also what steps I need to take to remove these errors, as it is harming my site's flow as well as rankings? Thanks in advance

    | innovativekrishna1
    0

  • Hello, Moz Community. My client's site hasn't been indexed by Google, although it was launched a couple of months ago. I've run down the checkpoints in this article, https://moz.com/ugc/8-reasons-why-your-site-might-not-get-indexed, without finding a reason why. Any sharp SEO eyes out there who can spot this quickly? The URL is: http://www.oldermann.no/ Thank you
    INEVO, digital agency

    | Inevo
    0

  • Hi, Moz community. For some of the category pages, Google is showing some of the brands in the SERP, like this: http://www.screencast.com/t/62wldbwc
    This is the page URL: https://www.gsport.no/sport/loep/lopeklaer/loepebukse For other category pages that seemingly are built with similar code and settings, Google doesn't show brands in the snippet: http://www.screencast.com/t/zU9cg7odf
    The page URL: https://www.gsport.no/sport/loep/lopeklaer/loepejakke This all begs the questions:
    If the two pages contain the same code/HTML in terms of schema.org / rich snippets, why is Google choosing to display the brands in the SERP for only one of them? And is there something I can do in order to make it display the brands for all my pages? Thank you
    Sigurd Bjurbeck, INEVO (digital agency)

    | Inevo
    0

  • I just ran a robots.txt file through "Google robots.txt Tester" as there was some unusual syntax in the file that didn't make any sense to me... e.g. /url/?*    
    /url/?
    /url/* and so on. I would use ? and not ? for example and what is ? for! - etc. Yet "Google robots.txt Tester" did not highlight the issues... I then fed the sitemap through http://www.searchenginepromotionhelp.com/m/robots-text-tester/robots-checker.php and that tool actually picked up my concerns. Can anybody explain why Google didn't - or perhaps it isn't supposed to pick up such errors? Thanks, Luke

    | McTaggart
    0

  • Just looking at an ecommerce website and they've hidden their product page's duplicate content behind tabs on the product pages - not on purpose, I might add. Is this a legitimate way to hide duplicate content, now that Google has lowered the importance and crawlability of content hidden behind tabs? Is this a legitimate tactic to tackle duplicate content? Your thoughts would be welcome. Thanks, Luke

    | McTaggart
    0

  • Hi Guys, Wondering if I can get some technical help here... We have our site britishbraces.co.uk, built in Magento. As per ecommerce sites, we have paginated pages throughout. These have rel=next/prev implemented, but not correctly (as it is not in is it in) - this fix is in process. Our canonicals are currently incorrect, as far as I believe, as even when content is filtered the canonical takes you back to the first page URL. For example, http://www.britishbraces.co.uk/braces/x-style.html?ajaxcatalog=true&brand=380&max=51.19&min=31.19 canonicals to... http://www.britishbraces.co.uk/braces/x-style.html which I understand to be incorrect. I want the colour-filtered pages to be indexed (due to search volume for colour-related queries), but I don't want the price-filtered pages to be indexed, and I am unsure how to implement the solution. As I understand it, because rel=next/prev is implemented (with no View All page), the rel=canonical is not necessary, as Google understands page 1 is the first page in the series. Therefore, once a user has filtered by colour, there should then be a canonical pointing to the colour-filtered URL (e.g. /product/black)? But when a user filters by price, there should be a noindex on those URLs? Or can this be blocked in robots.txt beforehand? My head is a little confused here, and I know we have an issue because our number of indexed pages is increasing day by day with no solution for the facet URLs. Can anybody help? Apologies in advance if I have confused the matter. Thanks
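    A minimal sketch of the split described above, using the price-filtered example URL's pattern and a hypothetical colour-filtered URL; whether Magento exposes the filters at URLs like these is an assumption that would need checking:

    ```html
    <!-- On a colour-filtered page you DO want indexed: a canonical pointing at that filtered URL (hypothetical path) -->
    <link rel="canonical" href="http://www.britishbraces.co.uk/braces/x-style/black.html" />

    <!-- On a price-filtered page you do NOT want indexed -->
    <meta name="robots" content="noindex, follow">
    ```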

    | HappyJackJr
    0

  • Hi there, We have a strange issue on a client website (www.rubbermagazijn.nl). Webpages are indexed by Google but images are not, and have never been since the site went live in '12 (we recently started SEO work on this client). Similar sites like www.damenrubber.nl are being indexed correctly. We have a correct robots and sitemap setup and directives. Fetch as Google (Search Console) shows all images displayed correctly (despite the scripted mouseover on the page). The client doesn't use a CDN. Search Console shows 2k images indexed (out of 18k+), but a site:rubbermagazijn.nl query shows a couple of images from PDF files and some of the thumbnails, but no product images or category images from the homepage. (Product page example: http://www.rubbermagazijn.nl/collectie/slangen/olie-benzineslangen/7703_zwart_nbr-oliebestendig-6mm-l-1000mm.html) We've changed the filenames from non-descriptive names to descriptive names, without any result. Descriptive alt texts were added. We're at a loss. Has anyone encountered a similar issue before, and do you have any advice? I'd be happy to provide more information if needed.

    | Adriaan.Multiply
    0

  • My company's site has a large set of pages (tens of thousands) that have very thin or no content. They typically target a single low-competition keyword (and typically rank very well), but the pages have a very high bounce rate and are definitely hurting our domain's overall rankings via Panda (quality ranking). I'm planning on recommending we noindex these pages temporarily, and reindex each page as resources allow us to fill in content. My question is whether an individual page will be able to accrue any page authority for its target term while noindexed. We DO want to rank for all those terms, just not until we have the content to back it up. However, we're in a pretty competitive space, up against domains that have been around a lot longer and have higher domain authorities. Like I said, these pages rank well right now, even with thin content. The worry is that if we noindex them while we slowly build out content, will our competitors get the edge on those terms (with their subpar but continually available content)? Do you think Google will give us any credit for having had the pages all along, just not always indexed?

    | THandorf
    0

  • I have a website with nearly 70,000 incoming links, since it's a somewhat large site that has been online for 19 years. The rate I was quoted for a link audit from a reputable SEO professional was $2 per link, and clearly I don't have $140,000 to spend on a link audit 🙂 !! I was thinking of asking you guys for a tutorial that is the gold standard for link-auditing checklists, and doing it myself. But then I thought maybe it's easier to shorten the list by knocking out all the "obviously good" links first. My only concern is that I be 100% certain they are good links. Is there an "easiest approach" to take for shortening this list, so I can give it to a professional to handle the rest?

    | HLTalk
    0

  • Hi Moz Community, A new client approached me yesterday for help with their site, which used to rank well for its designated keywords but is now not doing well. Actually, they are not on Google at all; it's like they were removed by Google. There is no reference to them when searching with "site:url". I investigated further and discovered the likely problem... 26,000 spam comments! All these comments have been removed now, and I cleaned up this WordPress site pretty well. However, I now want to connect it to Google Webmaster Tools. I have admin access to the WP site, but not FTP. So I tried using Yoast to connect, and Google failed to verify the site. Then I used a file-uploading console to upload the Google HTML code instead. I checked that the code is there, and Google still fails to verify the site. It is as if Google is so angry with this domain that they have wiped it completely from search and refuse to have any dealings with it at all. That said, I did run the "malware" or "dangerous content" check with them, and it did not bring back any problems. I'm leaning towards the idea that this is a "cursed" domain in Google and that my client's best course of action is to build her business around another domain instead, and then point the old domain to the new domain, hopefully without attracting any bad karma in the process (advice on that step would be appreciated). Anyone have an idea as to what is going on here?

    | AlistairC
    0

  • I'm struggling to reach the last few spots for my client's main keyword, hovering around mid-page on the first SERP. I have continuously built more links to this page but have not seen a correlation in movement, until I finally realised that I have too high a ratio of links pointing to the home page relative to those pointing to other pages on the site, which doesn't look natural (stupidly, for the last year we have mainly only been trying to rank the home page). I already have links on most UK directories - since the links I need are really just safe links (they don't need to have power), can anyone suggest the best/cheapest source of link-building that I could use to point more links to other pages on the site, to balance the site's overall profile? A press release, perhaps? Thanks in advance!

    | zakkyg
    0

  • I have recently started working at an academic publisher on their digital products. In this industry it's standard practice to use link resolvers - such as SFX from ExLibris - when updating a product as an easy way to manage URL migration. However, these link resolvers appear to use 302 redirects which makes me concerned about the potential for rankings to drop. Does anybody out there know about the use of link resolvers and their effects on search engine visibility? The main sources of information I've been able to find so far have been a Google Webmaster Central forum post and a piece on DOI news from 2005. Any information that's more up to date would be very useful, thanks!

    | BenjaminMorel
    0

  • Happy Friday, everyone! 🙂 This week's Community Discussion comes from Monday's blog post by Everett Sizemore. Everett suggests that pruning underperforming product pages and other content from your ecommerce site can provide the greatest ROI a larger site can get in 2016. Do you agree or disagree? While the "pruning" tactic here is suggested for ecommerce and for larger sites, do you think you could implement a similar protocol on your own site with positive results? What would you change? What would you test?

    | MattRoney
    2

  • I want as much traffic as possible to my main site, but right now my blog lives on a blog.brand.com URL rather than brand.com/blog. What are some good solutions for getting that traffic to count as traffic to my main site if my blog is hosted on WordPress? Can I just create a sub-directory page and add a rel canonical to the blog post?

    | johnnybgunn
    0

  • I noticed my domain authority has dropped slightly in the recent update, and it has me re-thinking a strategy for a website I just recently launched. I purchased the domain name kansasisbeautiful.com about a year ago and have been working on building it for most of that time. Earlier in August, I went ahead and launched it. However, towards the end of the development of the website, I decided to put it in a subdirectory of my parent company (my photography business) at mickeyshannon.com/kansas and redirected the kansasisbeautiful.com domain to the subdirectory. mickeyshannon.com is my photography business website. The Kansas website has its own distinct design, but is powered completely by my photography. I created it for a few purposes, including promoting tourism to the state of Kansas and publishing a book on Kansas travel next year, but one of its main goals is also to help sell my photography prints. I decided to put it in a subdirectory (mickeyshannon.com/kansas) as I had hoped it might drive more traffic into buying photo prints if it lived on my main website. However, I've been re-thinking my strategy and have been wondering if I'm competing against myself too much. Many of my photography prints have the name of a location in them and have their own URL per photo (for example, "Flint Hills Spring Sunrise" is at http://www.mickeyshannon.com/photo/flint-hills-spring-sunset/). It makes me wonder if the new Kansas travel website page for the Flint Hills (http://www.mickeyshannon.com/kansas/flint-hills/) is competing for that keyword. Would I be better off moving mickeyshannon.com/kansas to kansasisbeautiful.com? I was worried that having so many backlinks back to my photography site would send up red flags with Google, as if the kansasisbeautiful.com website were just a spammy website created to push traffic to mickeyshannon.com, when it really has its own purpose. Any thoughts on whether using the domain name or keeping it at the subdirectory level is better? Hopefully that made sense. Thanks, Mickey

    | VSphoto
    0

  • Hi Folks,
    I have a query and am looking for some opinions. Our site migrated to https://
    Somewhere along the line, between the developer and the hosting provider, a 302 redirect was implemented instead of the recommended 301 (the 301 rule was not being honoured in the .htaccess file).
    A week passed and I noticed some of our key phrases disappear from the SERPs 😞 When I investigated, I noticed the incorrect redirect had been implemented. The correct 301 redirect has now been implemented and is functioning correctly. I have created a new https property in Webmaster Tools, submitted the sitemap, provided a link in the robots.txt file to the https sitemap, and set the canonical tags to the correct https URLs. My gut feeling is that Google will take some time to recognise the fix and some time to restore the search results we lost. Has anyone experienced this before, or any further thoughts on how to rectify this ASAP?

    | Patrick_556
    0

  • Hello Moz fellows, a while ago (3-4 years ago) we set up our e-commerce website category pages to apply what Google suggested to correctly handle pagination. We added rel="canonical", rel="next" and rel="prev" as follows: On page 1: On page 2: On page 3: And so on, until the last page is reached: Do you think everything we have been doing is correct? I have doubts about the way we have handled the canonical tag, so any help to confirm that is very appreciated! Thank you in advance to everyone.
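    Since the tag examples themselves did not come through above, for reference here is a minimal sketch of the pattern Google recommended at the time, shown for page 2 of a hypothetical category: a self-referencing canonical on each paginated URL plus prev/next pointers. Whether this matches what is actually on the site is exactly what would need confirming:

    ```html
    <!-- In the <head> of page 2 of the series (URLs are placeholders) -->
    <link rel="canonical" href="https://www.example.com/category?page=2" />
    <link rel="prev" href="https://www.example.com/category" />
    <link rel="next" href="https://www.example.com/category?page=3" />
    ```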

    | fablau
    0

  • I have a WordPress website which is just using the default theme. When I post to the blog, whatever I put in the "Title" field at the top of the editor is automatically placed within the body of the blog post, like a headline, but it doesn't include any H1 tags that I can see. If I add my own headline within the blog editor, it still inserts the Title like a headline. I am using the Yoast SEO plugin and also write the meta title there. Should I just leave the WordPress title field blank so it doesn't insert into the blog post? Or is that inserted Title being recognized as an H1 even though I don't see H1 tags anywhere? Hope this isn't too confusing.
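    For reference, many of the default (Twenty-something) WordPress themes print that Title field inside an h1 on single posts, along the lines of the markup below; viewing the rendered page source (not the editor) is the way to confirm what your particular theme actually outputs. This snippet is illustrative, not taken from your site:

    ```html
    <!-- Typical single-post output from a default theme -->
    <h1 class="entry-title">Whatever you typed in the Title field</h1>
    ```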

    | SEO4leagalPA
    1

  • Hi there, We run a quotes-based site and so have hundreds of thousands of pages. We released a batch of pages (around 2,500) and they ranked really well. Encouraged by this, we released the remaining ~300,000 pages in just a couple of days. These have been indexed but are not ranking anywhere. We presume this is because we released too much too quickly. So we want to roll back what we've done and release them in smaller batches. So I wondered: 1. Can we de-index thousands of pages, and if so, what's the best way of doing this? 2. Can we then re-index these pages over a much greater time period without changing the pages at all, or would we need to change the pages/the URLs, etc.? Thanks! Steve

    | SteveW1987
    0

  • Hi there, We released around 4,000 pieces of new content, which all ranked on the first page and did well. We had a database of ~400,000 pieces and so we released the entire library in a couple of days (all remaining 396,000 pages). The pages have been indexed. The pages are not ranking, although the initial batch is still ranking, as are a handful (literally a handful) of the new 396,000. When I say not ranking, I mean not ranking anywhere (I've gone as far as page 20), yet the initial batch is ranking for competitive terms on page 1. Does Google penalise you for releasing such a volume of content in such a short space of time? If so, should we deindex all that content and re-release it in slow batches? And finally, if that is the course of action we should take, are there any good articles on deindexing content at scale? Thanks so much for any help you are able to provide. Steve

    | SteveW1987
    0

  • What I am trying to accomplish: I want what AMC has. When searching Google for a movie at an AMC near me, Google loads the movie times right at the top of the first page. When you click the movie time, it links to a pop-up window that gives you the option to purchase from MovieTickets.com, Fandango or AMC.com.
    Info about my theater: My theater hosts theater info and movie-time info on its website. Once you click the time you want, it takes you to a third-party ticket fulfillment site via a subdomain that I have little control over. Currently, Fandango tickets show up in Google like AMC's, but the option to buy on my theater's site does not.
    Questions: Generally, how do I accomplish this? Does the schema code get implemented on the third-party ticket purchasing site or on my site? How can I ensure that the Google pop-up occurs so that users have a choice to purchase via Fandango or on my theater's website?
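    For illustration only, the schema.org type that models a showtime is ScreeningEvent; a minimal JSON-LD sketch with placeholder values follows, and it would normally sit on the page you control that lists the showtimes. Whether this markup by itself triggers Google's showtime panel, as opposed to a data-feed partnership of the kind the ticketing sites use, is not something the question establishes:

    ```html
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "ScreeningEvent",
      "name": "Example Movie screening",
      "workPresented": { "@type": "Movie", "name": "Example Movie" },
      "startDate": "2016-10-01T19:30",
      "location": { "@type": "MovieTheater", "name": "Example Theater" },
      "offers": { "@type": "Offer", "url": "https://tickets.example.com/example-movie" }
    }
    </script>
    ```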

    | ColeBField
    2

  • Hi there, I have a site with a flat structure like this, and it ranks quite well for its niche: site.com/red-apples.html site.com/blue-apples.html The site is branching out into new but related lines of business. Is it OK to keep the existing site architecture as above while using a silo structure just for the two new, different but related, businesses? site.com/meat/red-meat.html site.com/fish/oceant-trout.html Thanks for any advice!

    | servetea
    0

  • Hey everyone, One of my clients is currently getting an answer box ("People also ask") result for a page that is no longer live. They migrated their site approximately 6 months ago, and the old page is for some reason still indexed in the "People also ask" results. The weird thing is that this page leads to a 404 error. Why the heck is Google showing this? Are there separate indexes for "People also ask" results and regular organic listings? Has anyone ever seen/experienced something like this before? Any insight is much appreciated.

    | HSawhney
    0

  • Hi, I have a few pages flagged for duplicate meta, e.g.: http://www.key.co.uk/en/key/workbenches?page=2
    http://www.key.co.uk/en/key/workbenches I can't see anything wrong with the pagination, and other pages have the same code but aren't flagged as duplicates: http://www.key.co.uk/en/key/coshh-cabinets http://www.key.co.uk/en/key/coshh-cabinets?page=2 I can't seem to find the issue - any ideas? Becky

    | BeckyKey
    0

  • We have recently developed a site with the structure domain.com/page-title/about/category instead of the traditional domain.com/category/page-title. We want to optimize more for each single article rather than the category it's in. However, now we are told by an SEO company that this is rather a bad idea and hurts SEO performance because Google doesn't understand the structure. The archive page of each category is domain.com/category/overview. What's your input on this?

    | Preen
    0

  • Hi guys, hope you had a fantastic bank holiday weekend. Quick question re URL parameters, I understand that links which pass through an affiliate URL parameter aren't taken into consideration when passing link juice through one site to another. However, when a link contains a tracking URL parameter (let's say gclid=), does link juice get passed through? We have a number of external links pointing to our main site, however, they are linking directly to a unique tracking parameter. I'm just curious to know about this. Thanks, Brett

    | Brett-S
    0


