Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.

Posts made by OlegKorneitchouk
-
RE: Best method for tracking true keyword ranking overtime?
Organic visits to the landing page would probably be the most useful.
-
RE: Do 403 Forbidden errors from website pages hurt rankings?
Google's official stance is that it's no big deal. They simply won't rank those URLs (so if you're getting 403s for pages you want to rank, that's an issue).
However, if those 403s have external backlinks, I'd redirect them to a similar live page on your site so you don't lose those links. I also prefer not to waste crawler time on non-existent pages, so if it's easy to do, I'd minimize the number of 403 response codes on the site.
Your sample 403 looks like an offsite CDN URL, which shouldn't hurt, but it is taking up unnecessary resources. No reason not to remove it.
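If a 403-ing URL with backlinks lives on your own server, a 301 redirect in the server config is one way to pass those links along. A minimal Apache .htaccess sketch with hypothetical paths (nginx and other servers have equivalents):

```apache
# Hypothetical paths: send the dead, 403-ing URL's link equity
# to the closest equivalent live page
Redirect 301 /old-resource/ /similar-live-page/
```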
-
RE: Move domain to new domain, for how much time should I keep forwarding?
huh? hreflang shouldn't be part of the equation here.
If you can't set up redirects, see if you can add canonical tags on the old pages pointing to the new site. That way you'll transfer ranking benefits to the new site.
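For example, a page on the old domain can point at its equivalent on the new domain with a canonical tag in the `<head>` (the URLs here are hypothetical):

```html
<!-- On the old domain's page, pointing at the new domain -->
<link rel="canonical" href="https://www.newdomain.com/equivalent-page/" />
```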
-
RE: How to remove skip links, main navigation, sidebars as h2 tags in wordpress genesis
Edit the HTML in the child theme files, likely the header and footer files specifically.
-
RE: How to prevent development website subdomain from being indexed?
So....
- If the dev site has not been indexed yet, you can block crawlers via robots.txt
- If the dev site is already indexed and you want it removed, add a meta noindex tag to all pages and allow the site to be crawled via robots.txt (reason: you want Google to recrawl and notice the noindex tag on the pages so that it removes them from search results; if the site is indexed and you block crawlers via robots.txt, Google will keep the pages indexed but won't crawl them again). Once deindexed, you can block via robots.txt again.
As long as it's blocked (and you build that into your process), having the dev site on the same domain shouldn't be an issue. We have our own dev domain + server that automatically blocks all pages from being indexed.
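The two states described above can be sketched as follows (dev.example.com is a hypothetical subdomain):

```html
<!-- Step 1: while deindexing, keep robots.txt open and add this to every dev page -->
<meta name="robots" content="noindex" />
```

```text
# Step 2: once the pages have dropped out of the index,
# dev.example.com/robots.txt can block crawling again
User-agent: *
Disallow: /
```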
-
RE: Problems with WooCommerce Product Attribute Filter URL's
If you've got canonicals set up properly, there's not much else you can do. Google should recognize what you're trying to do, but it will still crawl the filters.
If you don't want the subcategories indexed and want crawling minimized:
- nofollow the filter links
- block them via robots.txt using wildcards
- set all filtered URLs to noindex
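The robots.txt wildcard approach could look something like this; the query patterns are a guess based on typical WooCommerce attribute filters, so adjust them to your actual URLs:

```text
# Hypothetical WooCommerce attribute-filter patterns
User-agent: *
Disallow: /*?filter_*
Disallow: /*?*orderby=
```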
-
RE: Using "nofollow" internally can help with crawl budget?
I've always treated it as such, and their case study seems to confirm it. You can't sculpt link authority with it, but you can control crawl budget better.
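For internal links, the attribute just goes on the anchor tag; a minimal sketch with a hypothetical URL:

```html
<!-- Hint to crawlers not to follow this internal link -->
<a href="/internal-search-results/" rel="nofollow">Search results</a>
```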
-
RE: Q&A Page Titles
I recommend you keep the titles long, as is, and let Google truncate them based on the keywords searched. Editing those titles down to a certain length while keeping them sensible isn't scalable.
- long title tag = text gets cut off in SERPs; not worse for SEO, but worse for CTR
- a page heading and page title not matching won't hurt your SEO.
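As a rough illustration of why trimming by hand doesn't scale, here's a quick length check (my own sketch, not from the post; Google actually truncates by pixel width, so ~60 characters is only an approximation):

```python
# Flag page titles likely to be truncated in search results.
# The ~60-character cutoff is an approximation of Google's pixel-width limit.
APPROX_SERP_TITLE_LIMIT = 60

def flag_long_titles(titles, limit=APPROX_SERP_TITLE_LIMIT):
    """Return (title, length) pairs for titles exceeding the limit."""
    return [(t, len(t)) for t in titles if len(t) > limit]

# Hypothetical Q&A page titles
titles = [
    "Q&A: How do I fix 403 errors on my product pages?",
    "Q&A: What is the best way to migrate a WooCommerce store to a new domain without losing rankings?",
]
for title, length in flag_long_titles(titles):
    print(f"{length} chars, likely truncated: {title}")
```

Running a check like this over a crawl export tells you how widespread the issue is before you decide whether it's worth fixing at all.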