Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.

Category: Intermediate & Advanced SEO

Looking to level up your SEO techniques? Chat through more advanced approaches.

  • I have a client who sells retirement homes. Their current schema for each property is LocalBusiness - should this in fact be Product schema? (There is a Product markup sketch below.)

    | Adido-105399
    0
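
    If Product does turn out to be the right fit, a minimal JSON-LD sketch for a single property page might look like the following, assuming each page describes one purchasable property. All names, URLs, and prices are placeholders, not values from the client's site:

    ```html
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Two-bedroom retirement apartment",
      "description": "Placeholder description of the property.",
      "url": "https://www.example.com/properties/example-apartment",
      "offers": {
        "@type": "Offer",
        "price": "250000",
        "priceCurrency": "GBP",
        "availability": "https://schema.org/InStock"
      }
    }
    </script>
    ```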

  • Hi, I am doing link cleaning, and still a bit new to this, and would appreciate the community's help 🙂 So, I have a site which has quite a lot of low-DA (or no-DA) follow backlinks. BUT:
    the links are from my niche
    the sites are not spammy
    the anchors are okay
    they are from a good geo location for me
    The only negative thing is that these sites are a bit "dead", meaning there is no new content, and thus no traffic or clicks coming from them. Should I keep those links or disavow them? To me these links are natural, but do they help me at all... FYI, I have plenty of good-DA links. But what do you guys think: if I disavow all these low-DA backlinks (disavow file sketch below), does Google think that I am trying to manipulate my backlink structure to look better than it naturally is? Cheers guys and girls! 🙂

    | RistoM
    0
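
    For reference on the mechanics: Google's disavow file is a plain-text list, one entry per line, where `#` starts a comment and the `domain:` prefix disavows an entire domain rather than a single URL. A minimal sketch with placeholder domains:

    ```
    # Placeholder example of a disavow file
    # Disavow every link from an entire domain:
    domain:low-da-example.com
    domain:dead-niche-site.net
    # Disavow a single page only:
    http://www.example.org/spammy-directory/page.html
    ```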

  • Hi, I have a couple of fashion clients who have very active blogs and post lots of fashion content and images - 50+ images weekly. I want to check if these images have been used by other sources in bulk. Are there any good reverse image search tools which can do this? Or any recommended ways to do this efficiently for a large number of images? Cheers

    | snj_cerkez
    0

  • Hi all, I have been looking into this for about a month and haven't been able to figure out what is going on with this situation. We recently did a website re-design and moved from a separate mobile site to responsive. After the launch, I immediately noticed a decline in pages crawled per day and KB downloaded per day in the crawl stats. I expected the opposite to happen, as I figured Google would be crawling more pages for a while to figure out the new site. There was also an increase in time spent downloading a page. This has since gone back down, but the pages crawled has never gone back up. Some notes about the re-design:
    URLs did not change
    Mobile URLs were redirected
    Images were moved from a subdomain (images.sitename.com) to Amazon S3
    There was an immediate decline in both organic and paid traffic (roughly 20-30% for each channel)
    I have not been able to find any glaring issues in Search Console: indexation looks good, and there is no spike in 404s and no mobile usability issues. Just wondering if anyone has an idea or insight into what caused the drop in pages crawled? Here is the robots.txt, with a photo of the crawl stats attached below it.

    ```
    User-agent: ShopWiki
    Disallow: /

    User-agent: deepcrawl
    Disallow: /

    User-agent: Speedy
    Disallow: /

    User-agent: SLI_Systems_Indexer
    Disallow: /

    User-agent: Yandex
    Disallow: /

    User-agent: MJ12bot
    Disallow: /

    User-agent: BrightEdge Crawler/1.0 (crawler@brightedge.com)
    Disallow: /

    User-agent: *
    Crawl-delay: 5
    Disallow: /cart/
    Disallow: /compare/
    ```

    [Crawl stats screenshot](https://ibb.co/fSAOL0)

    | BandG
    0

  • Hi, I am doing the SEO for a webshop which has a lot of linking and related websites on the same root domain. The structure is, for example:
    Root domain: example.com
    Shop: shop.example.com
    Linking websites to the shop: courses.example.com, software.example.com, ...
    Do I have to check which keywords these linking websites are already ranking for and choose other keywords for my category and product pages on the webshop? The problem with this could be that the main keywords for the category pages on the webshop are largely the same as for the other subdomains. The intention is that some people come straight to the webshop instead of going first to the linking websites and then to the webshop. Thanks.

    | Mat_C
    0

  • Hello Everyone! I'm new here! My husband and I are working on creating a website: https://sacwellness.com. The site is an online therapist directory for the Sacramento, California area. Our problem is this: in WordPress, our category system is being used for blog posts, while our theme uses a custom taxonomy system to categorize different therapist specialties, therapeutic approaches, etc. We've found ourselves in a position where our custom taxonomies and categories are near duplicates. For example, we have the blog categories ADHD counseling, Anxiety therapy, and Career counseling; our corresponding custom taxonomy/therapist categories are ADHD, Anxiety, and... (oops) Career counseling.
    My understanding is that Google doesn't see a difference between identically named categories and custom taxonomies, and will just choose one to rank and disregard the other, effectively leaving you competing against yourself. Is this true in a case like this? Can Google maybe understand the difference because of the custom taxonomy and/or URL paths? If this is a problem, is it OK to have near duplicates, like ADHD vs. ADHD counseling? This has been our solution so far... but now we're questioning it... derp x_x.
    I thought about tagging the categories with noindex (sketch below), but I think the archive pages would be useful for people. Essentially we have two sets of archives for each keyword: one is for blog posts, and one is for therapists who work with that particular issue, along with the six most recent blog posts in that category. Because we are putting the six most recent blog posts at the bottom of the therapist pages, I feel like it wouldn't be as terrible a loss if we had to noindex the category pages. ...What do you think? Thank you!

    | angelamaemae
    0
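
    If you do end up noindexing one of the two near-duplicate archive types, the usual mechanism (which Yoast can also apply per taxonomy from its settings) is a robots meta tag in the head of those archive pages; `noindex, follow` keeps the archives out of the index while still letting crawlers follow the links on them. A minimal sketch:

    ```html
    <meta name="robots" content="noindex, follow">
    ```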

  • Hi everyone, I am trying to add span tags inside my H1, break it onto two lines with a break tag, and style each line of the H1 differently, so that "Line 1" sits above "Line 2" (see the markup sketch below). I might use a smaller font for line 2 as well... Is this SEO friendly? Will crawlers read the entire text, or can the extra markup interfere and block it? Thank you!

    | bgvsiteadmin
    0
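
    A minimal sketch of the markup the question describes; the class names are placeholders. Inline spans and a `<br>` don't normally stop crawlers from reading the full text content of the `<h1>`:

    ```html
    <style>
      .h1-top    { font-size: 1em; }
      .h1-bottom { font-size: 0.6em; } /* smaller second line */
    </style>
    <h1>
      <span class="h1-top">Line 1</span><br>
      <span class="h1-bottom">Line 2</span>
    </h1>
    ```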

  • Hello! I did something dumb back in the beginning of September: I updated Yoast and somehow noindexed a whole set of custom taxonomies on my site. I fixed this and then asked Google to validate the fixes on September 20. Since then they have gotten through only 5 of the 64 URLs... is this normal? Just want to make sure I'm not missing something that I should be doing. Thank you! ^_^

    | angelamaemae
    0

  • On 26 October 2018 my website had around 1 million pages indexed on Google, but an hour later when I checked, my website had been banned from Google and all pages were removed. I checked my GWT and I did not receive any message. Can anyone tell me what the possible reasons are and how I can recover my website? My website link is https://www.whoseno.com

    | WhoseNo
    0

  • Basically we get a lot of users uploading photos as part of their reviews, but many photos aren't moderated into our pages and therefore are never displayed - things like selfies rather than photos of the product, or just random Google Images results that are completely unrelated to our products or services. Is there any benefit in cleaning up the gallery, since some images we don't use are just sat there in admin?
    With our SEO hat on: when a page loads, would it be quicker if we had less content in the gallery?
    Or does it not matter, since the page isn't loading that content (the photos) anyway?

    | Fubra
    0

  • Hi All, I'm trying to find more information on what IP address Googlebot would use when arriving to crawl your site from an external backlink. I'm under the impression Googlebot uses international signals to determine the best IP address to use when crawling (US / non-US) and then carries on with that IP when it arrives at your website. E.g. Googlebot finds www.example.co.uk; due to the ccTLD, it decides to crawl the site with a UK IP address rather than a US one. As it crawls this UK site, it finds a subdirectory backlink to your website and continues to crawl your website with the aforementioned UK IP address. Is this a correct assumption, or does Googlebot look at altering the IP address as it enters a backlink / new domain? Also, are ccTLDs the main signal determining whether Google switches to an international IP address to crawl, rather than the standard US one? Am I right in saying that hreflang tags don't apply here at all, as their purpose is to be used in SERPs, helping Google determine which page to serve to users based on their location etc.? If anyone has any insight this would be great.

    | MattBassos
    0

  • Hello! Though I browse Moz resources every day, I've decided to ask you a question directly, despite the numerous questions (and answers!) about this topic, as there are a few specific variants each time: I have a site serving content (and products) to different countries, built using subfolders (1 subfolder per country). Basically, it looks like this:
    site.com/us/
    site.com/gb/
    site.com/fr/
    site.com/it/
    etc. The first problem was fairly easy to solve:
    avoid duplicate content issues across the board, considering that both the ecommerce part of the site and the blog are being replicated for each subfolder in its own language. Correct me if I'm wrong, but using our copywriters to translate the content and adding the right hreflang tags should do. But then comes the second problem: how to deal with duplicate content when it's written in the same language? E.g. /us/, /gb/, /au/ and so on.
    Given the following requirements/constraints, I can't see any positive resolution to this issue:
    1. The structure needs to be maintained (it's not possible to consolidate the same language within one single subfolder, for example),
    2. Articles from one subfolder to another can't be canonicalized, as that would mess up our internal tracking tools,
    3. The amount of content being published prevents us from getting bespoke content for each region of the world with the same spoken language. Given those constraints, I can't see a way to sort that out, and it seems I'm cursed to live with those duplicate content red flags right up my nose.
    Am I right, or can you think of anything to sort that out (see the hreflang sketch below)? Many thanks,
    Ghill

    | GhillC
    0
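
    For the same-language subfolders described above, region-qualified hreflang annotations are the standard way to signal that the pages target different markets rather than being accidental duplicates. A minimal sketch using the subfolder structure from the question, placed in the head of each version of a page; the page path and the x-default choice are assumptions:

    ```html
    <link rel="alternate" hreflang="en-us" href="https://site.com/us/example-page/" />
    <link rel="alternate" hreflang="en-gb" href="https://site.com/gb/example-page/" />
    <link rel="alternate" hreflang="en-au" href="https://site.com/au/example-page/" />
    <link rel="alternate" hreflang="fr-fr" href="https://site.com/fr/example-page/" />
    <link rel="alternate" hreflang="it-it" href="https://site.com/it/example-page/" />
    <link rel="alternate" hreflang="x-default" href="https://site.com/us/example-page/" />
    ```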