Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies. More details here.

Category: Intermediate & Advanced SEO

Looking to level up your SEO techniques? Chat through more advanced approaches.

  • Given a choice, for your #1 keyword, would you pick a .com with one or two hyphens (chicago-real-estate.com), or a .co with the full name as the URL (chicagorealestate.co)? Is there an accepted best practice regarding hyphenated URLs, and/or decent results regarding the effectiveness of the .co? Thank you in advance!

    | joechicago
    0

  • Background: My e-commerce site uses a lot of layered navigation and sorting links. While this is great for users, it results in a lot of URL variations of the same page being crawled by Google. For example, a standard category page:

    www.mysite.com/widgets.html

    ...which uses a "Price" layered navigation sidebar to filter products based on price, also produces the following URLs, which all link to the same page:

    http://www.mysite.com/widgets.html?price=1%2C250
    http://www.mysite.com/widgets.html?price=2%2C250
    http://www.mysite.com/widgets.html?price=3%2C250

    There are literally thousands of these URL variations being indexed, so I'd like to use robots.txt to disallow them.

    Question: Is this a wise thing to do? Or does Google take layered navigation links into account by default, so I don't need to worry?

    To implement, I was going to add the following to robots.txt:

    User-agent: *
    Disallow: /*?
    Disallow: /*=

    ...which would prevent any dynamic URL containing a '?' or '=' from being indexed. Is there a better way to do this, or is this a good solution? Thank you!

    | AndrewY
    1
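
For reference, a narrower robots.txt sketch than the blanket rules proposed in the question above. The price parameter name comes from the question's example URLs; treating it as the only filter parameter worth blocking is an assumption:

    User-agent: *
    # Block only the price-filter variations rather than every dynamic URL;
    # a blanket "Disallow: /*?" would also block any legitimate query-string
    # URLs, such as paginated category pages
    Disallow: /*?price=
    Disallow: /*&price=

Note that robots.txt stops crawling rather than indexing, so already-indexed variations can linger in results; a rel="canonical" tag on the filtered pages pointing back at www.mysite.com/widgets.html is a common complement.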

  • Hi All, We have a crawler problem on one of our sites, www.sneakerskoopjeonline.nl. On this site, visitors can specify criteria to filter the available products. These filters are passed as HTTP GET arguments, so the number of possible filter URLs is virtually limitless.

    In order to prevent duplicate content, or an insane number of pages in the search indices, our software automatically adds noindex, nofollow and noarchive directives to these filter result pages. However, we're unable to get crawlers (Google in particular) to ignore these URLs. We've already changed the on-page filter HTML to JavaScript, hoping this would cause the crawler to ignore it, but it seems that Googlebot executes the JavaScript and crawls the generated URLs anyway.

    What can we do to prevent Google from crawling all the filter options? Thanks in advance for the help. Kind regards, Gerwin

    | footsteps
    0
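
For reference: meta robots directives only take effect after a page has been fetched, so they cannot stop the fetching itself; that is what robots.txt is for. A minimal sketch, assuming the filter arguments share a recognisable query-string pattern (the parameter name "filter" below is hypothetical, standing in for the site's real parameter names):

    User-agent: *
    # Stop filter result pages from being crawled at all; note that a URL
    # blocked here is never fetched, so its meta robots tags are never seen
    Disallow: /*?filter=
    Disallow: /*&filter=

The trade-off is that robots.txt and noindex operate at different stages: once crawling is blocked, Google can no longer read the noindex directive on those pages, so it is generally one or the other per URL pattern. Adding rel="nofollow" to the generated filter links is another commonly paired measure.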

  • Hi, I have read numerous articles that support submitting multiple XML sitemaps for websites that have thousands of articles; in our case we have over 100,000. So, I was thinking I should submit one sitemap for each news category.

    My question is: how many page levels should each sitemap instruct the spiders to go? Would it not be enough to just submit the top-level URL for each category and then let the spiders follow the rest of the links organically? So, if I have 12 categories, the total number of URLs will be 12?

    If this is true, how do you suggest handling our home page, where the latest articles are displayed regardless of their category? I.e., the spiders will find links to a given article both on the home page and in the category it belongs to. We are using canonical tags. Thanks, Jarrett

    | jarrett.mackay
    0
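
For reference: an XML sitemap does not instruct spiders to crawl to a given depth; it simply enumerates URLs directly, and each sitemap file can list at most 50,000 of them. Per-category sitemaps are normally tied together with a sitemap index file. A minimal sketch, with hypothetical filenames and an example domain:

    <?xml version="1.0" encoding="UTF-8"?>
    <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- one child sitemap per news category; each child file lists the
           individual article URLs, not just the category landing page -->
      <sitemap>
        <loc>http://www.example.com/sitemap-category-1.xml</loc>
      </sitemap>
      <sitemap>
        <loc>http://www.example.com/sitemap-category-2.xml</loc>
      </sitemap>
    </sitemapindex>

So with 12 categories the index would reference 12 child sitemaps, but together they would enumerate all 100,000+ article URLs rather than 12. The home page only needs to appear once, and the canonical tags already in place resolve the duplicate paths to an article.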