Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.

Category: Intermediate & Advanced SEO

Looking to level up your SEO techniques? Chat through more advanced approaches.

  • Hi mozzers, We have decided to migrate the blog subdomain to a subfolder on the main domain (blog.example.com to example.com/blog). To do this the most effective way and avoid impacting SEO negatively, I believe I have to follow this checklist: create a list of all 301 redirects (from blog.example.com/post-1 to example.com/post-1), make sure title tags remain the same on the main domain, and make sure internal links remain the same. Is there something else I am missing? Any other best practices? I would also like to serve all blog posts as AMP pages. Any recommendations on whether this is something we should do, since we are not a media site? Any other tips on successfully implementing those types of pages? Thanks

    | Ty1986
    1
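
    A minimal sketch of the blanket redirect such a migration normally leans on, assuming the blog is served by Apache with mod_rewrite available (the hostnames and the /blog/ target are placeholders taken from the question):

        # .htaccess on blog.example.com - send every old post to its new /blog/ URL
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^blog\.example\.com$ [NC]
        RewriteRule ^(.*)$ https://example.com/blog/$1 [R=301,L]

    One rule like this covers every post with a single permanent redirect, so the per-URL list from the checklist is mainly useful for verifying that each old post resolves to the right new page.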

  • Hi all, I have a strange issue: someone redirected the website http://bukmachers.pl to ours, https://legalnibukmacherzy.pl. We don't know exactly what to do about it. I checked the backlinks, and the website had some links which now redirect to us. I also checked this website on the Wayback Machine: back in 2017 it had some low-quality content, but in 2018 they made a similar redirect to the current one, pointing to a different website (our competitor). Can such a redirect be harmful for us? Should we do something about this or leave it, given that Google has stopped encouraging the disavowing of low-quality links?

    | Kahuna_Charles
    1
  • I'm getting the same structured data error in Search Console for most of my websites: Invalid value in field "itemtype". I took off all the structured data but I'm still having this problem; according to Search Console it is a syntax problem, but I can't find what is causing it. Any guess, suggestion or solution for this?

    | Alexanders
    0
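
    For reference, the "itemtype" field comes from microdata markup, and this error is usually triggered when an itemtype value is not a full, valid schema.org URL. A minimal valid snippet (Product is just a placeholder type) looks like this:

        <div itemscope itemtype="https://schema.org/Product">
          <span itemprop="name">Example product</span>
        </div>

    If no structured data was added deliberately, the markup is often coming from the theme or a plugin, which is worth checking before anything else.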

  • Hi Mozers, Are in-page tabs still detrimental for SEO? In-page tabs allow you to alternate between views within the same context, not to navigate to different areas - as in one long HTML page that just looks like it's divided into different pages via tabs that you can click between. Each tab has its own URL, which I guess is for analytics tracking purposes? https://XXX https://XXX?qt-staff_profile_tabs=1 https://XXX?qt-staff_profile_tabs=2 https://XXX?qt-staff_profile_tabs=3

    | yaelslater
    0
  • Hi, I am migrating an old WordPress site to a custom PHP site and the URL profiles will be different, so I want to retain all link profiles and, more importantly, make sure that if a user visits the old URLs via search they are seamlessly transferred to the new equivalent page. For example, www.domain.com/about-us will need to redirect to www.domain.com/aboutus.php, www.domain.com/furniture will need to redirect to www.domain.com/furniture-collections.php, etc. What is the best way of achieving this apart from .htaccess, as I'm not 100% confident doing it that way? Could it be done via PHP or using meta tags?

    | ocelot
    0
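
    A rough sketch of how this could be done in plain PHP, assuming the new site has a front controller or shared include that every request passes through; the two paths come from the question, everything else is a placeholder:

        <?php
        // redirects.php - map old WordPress paths to their new equivalents
        $redirects = [
            '/about-us'  => '/aboutus.php',
            '/furniture' => '/furniture-collections.php',
        ];

        $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

        if (isset($redirects[$path])) {
            // issue a permanent (301) redirect and stop further output
            header('Location: https://www.domain.com' . $redirects[$path], true, 301);
            exit;
        }

    Meta refresh tags only trigger a client-side reload, and Google's guidance is to prefer server-side redirects where possible, so a 301 from PHP or the web server is the safer choice for preserving rankings.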

  • Hi everyone! Let's use this as an example URL: https://www.example.com/marvel/avengers/hulk/ We have done a 301 redirect for the "Avengers" page to another page on the site. Sibling pages of the "Hulk" page now live off "marvel" (ex: /marvel/thor/ and /marvel/iron-man/). Is there any benefit in doing a 301 for the "Hulk" page so it lives at /marvel/hulk/ like its sibling pages? Is there any harm long-term in leaving the "Hulk" page under a permanently redirected page? Thank you! Matt

    | amag
    0
  • Hi, I would like to block all URLs with the parameter '?filter=' from being crawled by including them in the robots.txt. Which directive should I use:
    User-agent: *
    Disallow: ?filter=
    or
    User-agent: *
    Disallow: /?filter=
    In other words, is the forward slash at the beginning of the Disallow directive necessary? Thanks!

    | Mat_C
    1
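
    For comparison, robots.txt rules are prefix-matched against the URL path from its leading slash, and Google supports the * wildcard, so a pattern along these lines (a sketch, not the only valid form) blocks the parameter wherever it appears:

        User-agent: *
        Disallow: /*?filter=

    Without the leading slash the rule may simply be ignored by parsers that expect paths to start with /, while "Disallow: /?filter=" on its own only matches the parameter directly on the root URL, not on deeper pages.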

  • Google recently added related icons at the top of the image search results page. Some of the icons may be unrelated to the search. Are there any best practices to influence what is positioned in the related image icons section?  Thank you.

    | JaredBroussard
    1
  • Which is better practice, using 1/2" or ½"? The keyword research suggests people search for "1 2" with the space being the "/". How does Google handle fractions? Would ½ be the same as 1/2?

    | Choice
    2

  • Hello! We are doing an image optimization audit, and are therefore trying to find a way to get a list of all images on a site. Screaming Frog seems like a great place to start (as per this helpful article: https://moz.com/ugc/how-to-perform-an-image-optimization-audit), but unfortunately it doesn't include images referenced from CSS. 😞 Does the community have any ideas for how we could otherwise get a list of images? Thanks in advance for any tips/advice.

    | mirabile
    0
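
    One rough way to cover the CSS gap, sketched in PHP purely as an illustration (the stylesheet path and the handling of relative URLs are assumptions to adapt), is to scan the stylesheets for url() references:

        <?php
        // list-css-images.php - collect every url(...) reference from a folder of CSS files
        $cssFiles = glob('/path/to/site/css/*.css'); // adjust to the real stylesheet location

        $images = [];
        foreach ($cssFiles as $file) {
            $css = file_get_contents($file);
            // capture whatever sits inside url(...), with or without quotes
            if (preg_match_all('/url\(\s*[\'"]?([^\'")]+)[\'"]?\s*\)/i', $css, $matches)) {
                $images = array_merge($images, $matches[1]);
            }
        }

        foreach (array_unique($images) as $url) {
            echo $url . PHP_EOL;
        }

    The same pattern will also catch fonts and data URIs, so the output needs a quick manual filter before it goes into the audit.
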
  • Hi there, We are now in the process of implementing a JSON-LD mark-up solution and are building cruises as an event. Will this work, and can we get away with it without penalty? Previously the cruises have been marked up as events using the data highlighter, and this has displayed correctly in the SERP. The ideal schema would be Trip, but this is not supported by Google rich results yet; hopefully they will support it in the future. Another alternative would be Product, but this does not display rich results the way we would like. Event gives the best result in terms of how the information is displayed. For example, someone might search "Cruises to Spain" and the landing page would display the next 3 cruises that go to Spain, with dates & prices. The event location would be the cruise terminal, the offer would be the starting price, and the start & end dates would be the cruise duration; these are fixed dates. I am interested to hear the community's opinion on and experience with this problem.

    | NoWayAsh
    1
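
    For what it's worth, a stripped-down sketch of the Event mark-up being described, with every value an invented placeholder; whether a sailing genuinely qualifies as an Event is exactly the judgement call in the question:

        {
          "@context": "https://schema.org",
          "@type": "Event",
          "name": "7-Night Cruise to Spain",
          "startDate": "2020-06-01",
          "endDate": "2020-06-08",
          "location": {
            "@type": "Place",
            "name": "Southampton Cruise Terminal",
            "address": "Southampton, UK"
          },
          "offers": {
            "@type": "Offer",
            "price": "799",
            "priceCurrency": "GBP",
            "availability": "https://schema.org/InStock",
            "url": "https://www.example.com/cruises/spain"
          }
        }

    Google does issue manual actions for structured data that misrepresents page content, so there is some risk here even if the data highlighter version has displayed fine so far.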

  • Hello there,  our site is on a Flatsome Wordpress theme (which is responsive and does not support AMP), and we are currently using the AMP for Wordpress plugin on our blog and other content rich pages. My question is - is a plugin sufficient to make our pages AMP friendly? Or should we consider switching to a theme that is AMP enabled already? Thanks!
    Katie

    | tnixis
    0
  • Hi all, Most of the URLs that are created by using the internal search function of a website/web shop shouldn't be indexed, since they create duplicate content or waste crawl budget. The standard way to go is to 'noindex, follow' these pages, or sometimes to use robots.txt to disallow crawling of them. The first question I have is how these pages would actually get indexed in the first place if you didn't use one of the options above. Crawlers follow links to index a website's pages. If a random visitor comes to your site and uses the search function, this creates a URL. There are no links leading to this URL, it is not in a sitemap, it can't be found by navigating the website... so how can search engines index these URLs that were generated by using an internal search function? Second question: let's say somebody embeds a link on his website pointing to a URL from your website that was created by an internal search. Now let's assume you used robots.txt to make sure these URLs weren't indexed. This means Google won't even crawl those pages. Is it possible then that the link that was used on another website will show an empty page after a while, since Google doesn't even crawl that page? Thanks for your thoughts guys.

    | Mat_C
    0
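
    For reference, the two mechanisms being compared look roughly like this; the /search path is only an assumed example of where the internal search results live:

        <!-- on each internal search results page: allow crawling, forbid indexing -->
        <meta name="robots" content="noindex, follow">

        # robots.txt alternative: forbid crawling of the search results entirely
        User-agent: *
        Disallow: /search

    On the second question: a URL blocked in robots.txt can still end up indexed as a bare URL (without a snippet) if external links point to it, precisely because Google never fetches the page itself; the link would not show an empty page, it simply leads to whatever the page serves to visitors.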

  • Hi all, How bad is it to have a link in the breadcrumb that 301 redirects? We had to create some hidden category pages in our ecommerce platform, BigCommerce, to create a display on our category pages in a certain format. Although the category page was set to not visible in the BigCommerce admin, the URL still showed in the live site's breadcrumb. So we set a 301 redirect on it so it didn't produce a 404. However, we have lost a lot of SEO ground over the past few months. Could this be why? Is it bad to have a 301 redirect in the breadcrumb?

    | oceanstorm
    0
  • Hi all, Scenario: an ecommerce website selling a food product has their store on a subdomain (store.website.com). A good chunk of the URLs - primarily parameters - are blocked in robots.txt. When I search for the products, the main domain ranks almost exclusively, while the store only ranks on deeper SERPs (several pages deep). In the end, only one variation of the product is listed on the main domain (ex: Original Flavor 1oz 24 count), while the store itself obviously has all of them (most of which are blocked by robots.txt). Can anyone shed a little bit of insight into best practices here? The platform for the store is Shopify, if that helps. My suggestion at this point is to recommend they allow crawling in the subdomain robots.txt and canonicalize the parameter pages. As for keywords, my main concern is cannibalization, or rather forcing visitors to take extra steps to get to the store on the subdomain because hardly any of the subdomain pages rank. In a perfect world, they'd have everything on their main domain and no silly subdomain. Thanks!

    | Alces
    0
  • Hey guys and gals, I'm having a frustrating time with an issue. Our site has around 10 pages that are coming up as duplicate content / duplicate title. I'm not sure what I can do to fix this. I was going to attempt to 301 redirect the uppercase URLs to the lowercase ones, but I'm worried how this will affect our SEO. Can anyone offer some insight on what I should be doing? Update: What I'm trying to figure out is what I should do with our URLs. For example, when I run an audit I'm getting two different pages: aaa.com/BusinessAgreement.com and also aaa.com/businessagreement.com. We don't have two pages, but for some reason Google thinks we do.

    | davidmac
    1
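
    One common way to handle it, sketched in PHP on the assumption that every page runs through a shared include (the host name is the example from the question), is to compare the requested path with its lowercase form and issue a single 301 when they differ:

        <?php
        // lowercase-redirect.php - collapse mixed-case paths onto the lowercase URL with one 301
        $uri   = $_SERVER['REQUEST_URI'];
        $path  = parse_url($uri, PHP_URL_PATH);
        $query = parse_url($uri, PHP_URL_QUERY);

        $lower = strtolower($path);
        if ($path !== $lower) {
            $target = 'https://aaa.com' . $lower . ($query ? '?' . $query : '');
            header('Location: ' . $target, true, 301);
            exit;
        }

    A 301 from the uppercase to the lowercase version consolidates the duplicate signals rather than splitting them, as long as internal links are also updated to point at the lowercase URLs.
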
  • Hi there, We have an e-commerce shopping site with over 8000 products and over 100 categories. Some sub-categories belong to multiple categories - for example, a Christmas tree can be under "Gardening > Plants > Trees" and under "Gifts > Holidays > Christmas > Trees". The product itself (example: Scandinavian Xmas Tree) can naturally belong to both these categories as well. Naturally these two (or more) categories have different breadcrumbs, different navigation bars, etc. From an SEO point of view, to avoid duplicate content issues, I see the following options: 1) use the same URL and change the content of the page (breadcrumbs and menus) based on the referral path - kind of cloaking; 2) use the same URL and display only one "main" version of breadcrumbs and menus, possibly adding the other "non-main" categories as links on the category / product page; 3) use a different URL based on where we came from and do nothing (this creates essentially the same content on different URLs except for breadcrumbs and menus - there's a possibility to change the category text and page title as well); 4) use a different URL based on where we came from, with different menus and breadcrumbs, and use rel=canonical pointing to the "main" category / product pages. This is a very interesting issue and I would love to hear what you guys think, as we are finalizing plans for a new website and would like to get the most out of it. Thank you all!

    | arikbar
    0
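
    If the rel=canonical route (option 4) is taken, the hint is a single link element on each duplicate URL pointing at the preferred one; a sketch with made-up paths:

        <!-- on /gifts/holidays/christmas/trees/scandinavian-xmas-tree -->
        <link rel="canonical" href="https://www.example.com/gardening/plants/trees/scandinavian-xmas-tree">

    The canonical target should be the "main" version of the page, and internal links ideally point at that same URL so the signals stay consistent.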
