Best posts made by LoganRay
-
RE: Why add .html to WordPress pages?
There's only one logical reason I can come up with for this: at some point in the past, the site wasn't on WordPress and had .html extensions on its URLs. When the site was moved to WP, they may have wanted to keep the URLs exactly the same, which would require finding a way to add file extensions in WP.
-
RE: [Very Urgent] More 100 "/search/adult-site-keywords" Crawl errors under Search Console
I've seen similar situations, but never in bulk and not with adult sites. Basically, what's happening is that one or more domains are linking to your site with inaccurate URLs. When bots crawling those sites find the links pointing to yours, they hit a 404 page, which triggers the error in Search Console.
Unfortunately, there's not much you can do about this, as people (or automated spam programs) can create a link to any site at any time. You could disavow links from those sites, which might help from an SEO perspective, but it won't prevent the errors from showing up in your Crawl Errors report.
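If you do go the disavow route, the file format is simple. A rough sketch with hypothetical domains:
# Hypothetical disavow file - lines starting with # are comments
# Disavow every link from these spam domains:
domain:spam-example-1.com
domain:spam-example-2.net
# Or disavow one specific linking URL:
http://spam-example-3.com/bad-links-page.html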
-
RE: Why add .html to WordPress pages?
I totally understand not wanting to rely on plugins if they're not necessary. 301 redirects generally don't impact a site's rankings much when they point to the same content, so dropping .html by way of a single redirect rule is not likely to ruin your organic traffic numbers.
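For reference, a single rule along these lines (a rough sketch, assuming Apache with mod_rewrite - adjust for your server) is all it takes:
RewriteEngine On
# 301-redirect any URL ending in .html to the same path without the extension
RewriteRule ^(.+)\.html$ /$1 [R=301,L]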
-
RE: Tracking 301 redirect traffic in Google Analytics
In your HTACCESS file (of the redirecting domain) where the existing redirect is located.
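As a sketch (hypothetical domain, assuming Apache mod_rewrite), you'd tag the redirect target with UTM parameters so the traffic shows up as its own source/medium in Google Analytics:
RewriteEngine On
# Append UTM parameters to the redirect target; QSA keeps any existing query string
RewriteRule ^(.*)$ https://www.newdomain.com/$1?utm_source=olddomain&utm_medium=301-redirect [QSA,R=301,L]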
-
RE: Measuring the size of a competitors website?
I highly recommend buying the license for Screaming Frog; at $100/year, you won't find a more valuable SEO tool for the money. And you won't find a free (and trustworthy) crawler that will handle a site that large.
-
RE: Canonical tag use for ecommerce product page detail
Even though your product titles have lower search volume, you still want to use your product detail pages as the preferred ranking URL for any product-specific query. This is where the benefit of long-tail keywords comes into play: you'll get a lot less traffic from them, but the quality (likelihood of converting/purchasing) is much higher.
Take the 'Nicolai – 8th Wonder finger bit – Granite' for example. If I've done a Google search for that, my research is already done and I know exactly what I need. If I click on a result that takes me to a category page, that's not going to be as useful to me. But if the search result is for the product detail page, I'm landing on the exact page I want. It's got all the product info & specs I need, pricing, and most importantly, an Add to Cart button.
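For reference, the product detail page would carry a self-referencing canonical so it's the version Google prefers (hypothetical URL):
<link rel="canonical" href="https://www.example.com/products/nicolai-8th-wonder-finger-bit-granite" />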
Hope that's helpful. For more info on ecomm SEO, I'd recommend looking back through some of the Moz posts on the subject: https://moz.com/blog/category/e-commerce
-
RE: Should I use noindex or robots to remove pages from the Google index?
Hi Tyler,
Yes, remove the robots.txt disallow for that section and add a noindex tag. Noindex is the only sure-fire way to de-index URLs, but the crawlers need to be allowed to crawl those pages to see the tag.
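For reference, the tag goes in the head of each page you want removed:
<!-- Crawlers must be able to reach this page (no robots.txt disallow)
     for the noindex directive to be seen and obeyed. -->
<meta name="robots" content="noindex">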
-
RE: Should I use noindex or robots to remove pages from the Google index?
Rhys,
Your web dev team is confused. You cannot de-index pages simply by disallowing them in your robots.txt file. Google will still index anything it finds from a link (that doesn't have a noindex tag); this is why you often see search results with "A description for this result is not available because of this site's robots.txt" as the description.
Here's a quote from Google regarding the subject: "You should not use robots.txt as a means to hide your web pages from Google Search results." - https://support.google.com/webmasters/answer/6062608?hl=en
-
RE: Google serp pagination issue
I took a look at the paginated versions of these pages, and it looks like canonicals and rel next/prev are all set up correctly. Have you gone into Search Console and specified the purpose of the 'start' parameter? That might help Google better interpret what's going on. Pagination is typically handled by a more descriptively named parameter, which could be causing confusion despite the canonical/prev/next tags being configured properly. Under the Crawl menu in Search Console, go to URL Parameters and configure 'start' for pagination.
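For anyone else reading, correctly configured tags on a 'start'-parameter series look something like this (hypothetical URLs, assuming 10 results per page, shown for page 2):
<link rel="prev" href="https://www.example.com/results?start=0">
<link rel="next" href="https://www.example.com/results?start=20">
<link rel="canonical" href="https://www.example.com/results?start=10">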
-
RE: What does Disallow: /french-wines/?* actually do - robots.txt
Disallow: /*?
This disallow literally says to crawlers 'if a URL starts with a slash (all URLs) and has a parameter, don't crawl it'. The * is a wildcard that says anything between / and ? is applicable to the disallow.
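To illustrate with hypothetical URLs:
User-agent: *
Disallow: /*?
# Blocked from crawling: /french-wines/?page=2 and /search?q=merlot
# Still crawlable: /french-wines/ and /search/results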
It's very easy to disallow the wrong thing, especially in regards to parameters. For this reason, I always do these two things rather than using robots.txt:
- Set the purpose of each parameter in Search Console - Go to Crawl > URL Parameters to configure for your site
- Self-referring canonicals - most people disallow URLs with parameters in robots.txt to prevent indexing, but a disallow only prevents crawling. A self-referring canonical pointing to the root version of that URL will prevent indexing of URLs with parameters.
Hope that's helpful!
-
RE: What does Disallow: /french-wines/?* actually do - robots.txt
Disallow: /?* is the same thing as Disallow: /? - since the asterisk is a wildcard, both of those disallows prevent any URL that begins with /? from being crawled.
And yes, it is incredibly easy to disallow the wrong thing! The robots.txt tester in Search Console (under the Crawl menu) is very helpful for figuring out what a disallow will catch and what it will let by. I highly recommend testing any new disallows there before releasing them into the wild.
-
RE: Too many Tags and Categories what should I do to clean this up?
Sounds like you're probably using WP; if so, I'd highly recommend this plugin to handle your category and tag pages. I made the same observation you did not too long ago and went on a mission to figure out the best solution, and noindexing these pages with that plugin is what I came up with.
-
RE: One domain - Multiple servers
You can find more details about how a reverse proxy works here. Regarding the setup, unfortunately that's outside of my wheelhouse - we had to rely on the tech support team to help out with that.
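For anyone attempting it themselves, a reverse proxy config is roughly this shape (a minimal sketch, assuming nginx and hypothetical backend addresses):
server {
    listen 80;
    server_name www.example.com;
    # Requests for /blog/ are passed to a second server...
    location /blog/ {
        proxy_pass http://10.0.0.2/;
    }
    # ...everything else goes to the main server.
    location / {
        proxy_pass http://10.0.0.1/;
    }
}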
-
RE: Looking to remove dates from URL permalink structure. What do you think of this idea?
Jeff,
Based on the traffic you say this blog gets, I'm assuming it's rather large and has hundreds, if not thousands, of posts. Which leads me to one simple question:
Why? This seems like a HUGE amount of risk and a pretty decent amount of work to go into something that's really not going to provide any benefit.
*edit: It should also be noted that just because Google has recently stated that redirects now pass full link juice doesn't mean you should needlessly add a massive number of redirects. Redirects have other implications, like load time: if you have 1,000 redirect rules, every request to your site is checked against each of them before the page loads, which adds up.
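For what it's worth, if you did proceed anyway, you'd want one pattern-based rule rather than a thousand individual redirects (a sketch assuming Apache and /YYYY/MM/DD/post-name/ permalinks):
RewriteEngine On
# Strip the date segments from old blog URLs with a single rule
RewriteRule ^([0-9]{4})/([0-9]{2})/([0-9]{2})/(.+)$ /$4 [R=301,L]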
-
RE: Conditional Noindex for Dynamic Listing Pages?
Hi,
I would advise against this for a couple of reasons. First, dropping and re-adding pages to the index isn't quick; it often takes weeks before Google will obey a noindex tag. Second, even if it were quick, it's going to cause problems at the authority level - is your site a trustworthy source of information? This week yes, next week no. It's best to have long-term content that can build trust/authority over time, rather than ephemeral pages.
An alternative approach you might take is writing content about the particular trials offered; this will help prevent thin content. You might also consider adding a call to action on empty pages that prompts users to provide an email address so you can notify them when trials of type XYZ have opened back up.
-
RE: Does content in collapsible menus negatively affect SEO or featured snippets?
Google has stated more than once that accordioned content will be crawled, indexed, and weighted the same as any other content on the page. This wasn't always the case, but as mobile usage grew and accordions became commonplace for good mobile UX, Google pivoted its stance on this type of content. In your example, I especially wouldn't be worried, since the primary content of the page is in the Overview section.
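The key is that collapsed content is still present in the HTML source; for example (hypothetical markup):
<details>
  <summary>Specifications</summary>
  <!-- This text is hidden until expanded, but it's in the source,
       so it gets crawled and indexed like any visible content. -->
  <p>Full specification details here...</p>
</details>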