Best posts made by effectdigital
-
RE: Over-optimizing Internal Linking: Is this real and, if so, what's the happy medium?
Just so you know, EMA (exact-match anchor text), which is also referred to as 'over' link optimisation, is more a concern for your off-site links. In terms of your internal site structure, that's much more lenient. Obviously if it impacted UX (e.g: site nav buttons with ridiculous amounts of text that become over-chunky, annoying users) then that's bad. If you can satisfy UX and also do some light keyword optimisation of your internal site links, I honestly don't see that as a massive problem. If anything it just gives Google more context and direction
I don't think internal link over-optimisation is a myth, because there's always someone stupid enough to pick up a spoon and run with it (taking it to ridiculous extremes that would also impact UX and the readability of the site). But as long as you don't go completely mental and the links make sense for users (they end up where they would expect to end up, with concise link / button text that doesn't bloat the UI) then you're fine. Don't worry about this overly much, but don't take it to an unreasonable extreme
-
RE: Does anyone experienced to rank a KOREAN Keyword here?
"루비카지노" translates into Ruby Casino according to Google:
Your main difficulty won't be the language, it will be that you are part of the gambling neighborhood, which is one of Google's defined bad neighborhoods
That being said, most of your competitors will be in that same situation, so in a way that's not really a massive problem
There aren't really many people searching for this right now:
https://trends.google.com/trends/explore?date=all&q=%EB%A3%A8%EB%B9%84%EC%B9%B4%EC%A7%80%EB%85%B8
... so you'd have to do something to inspire people to search for the term (as of now, no one really seems to care)
Google doesn't reject the keyword, it's just not really worth anything right now. Google should at least be able to interpret the keyword
-
RE: 404 Errors flaring on nonexistent or unpublished pages – should we be concerned for SEO?
If the errors are detected by both Moz's crawler and Google Search Console (at the same time) then I'd be much more concerned. It also depends on the volume of them: if there are only three or so, it's probably not worth your time to sort out. If there are hundreds or thousands, you might want to think about that
If you have hidden links in the coding which Moz is picking up on (that's how Moz's crawler works, by following links) then you can't really say: "We've checked each page and know that we are not linking to them anywhere on our site" - the fact that the crawler found the links means they exist and are there (even if you can't see them or find them). That is, of course, unless your site is on one of the unusual architectures that Rogerbot (Moz's crawler) has difficulties with. That shouldn't be your first assumption, though - he usually knows where he's going
Where you say this:
"since we migrated our blog to Hubspot so we think it has something to do with the test pages their developers had set up" - pull them up on it! If their developers coded a load of errors into your site, that's their fault not yours and it should be their expense (not yours) to fix it
This is the page regarding their CMS:
https://www.hubspot.com/products/marketing/content-management-system
It does say "A Content Management System Built for Professional Marketers" - so migrating to it, shouldn't cause loads of SEO problems, as SEO is still the largest chunk of most site's online marketing and traffic. That should be nailed down, no problems, fewer problem than your prior system
In fact, HubSpot know that SEO is important for a CMS: https://www.hubspot.com/cms-and-seo - "Every marketer has been told that they need to consider SEO when creating content. But what makes SEO a unique marketing strategy that marketers should prioritize? And why should your CMS have tools that help you execute your SEO strategy?" - I would argue that a load of 404 errors could not be considered "tools that help you execute your SEO strategy"
Whether their developers messed up or their CMS is at fault is not really relevant. The main point is, the responsibility to sort it out should be on their side (not yours, IMO)
-
RE: Geo-location by state/store
-
"We are a Grocery co-operative retailer and have chain of stores owned by different people. We are building a new website, where we would geo-locate the closest store to the customer and direct them to a particular store (selected based on cookie and geo location). All our stores have a consistent range of products + Variation in 25% range. I have few questions" - make sure you exempt Googlebot's user-agent from your geo-based redirects otherwise the crawling of your site will end up in a big horrible mess
-
"How to build a site-map. Since it will be mandatory for a store to be selected and same flow for the bot and user, should have all products across all stores in the sitemap? we are allowing users to find any products across all stores if they search by product identifier. But, they will be able to see products available in a particular store if go through the hierarchical journey of the website." - any pages you want Google to index should be in your XML sitemap. Any pages you don't want Google ti index should not be in there (period). If a URL uses a canonical tag to point somewhere else (and thus marks itself as NON-canonical) it shouldn't be in the XML sitemap. If a URL is blocked via robots.txt or Meta no-index directives, it shouldn't be in the XML sitemap. If a URL results in an error or redirect, it shouldn't be in your XML sitemap.The main thing to concern yourself with, is creating a 'seamless' view of indexation for Google. It seems like you'll have to have the same products available in multiple stores. You will want them all indexed, but will have to work hard to differentiate them (different images, different copy, different Meta data) otherwise Google will probably pick one product from one store as 'canonical' and not index the rest, leading to unfair product purchasing (users only purchasing X product from Y store, never the others). In reality, setting out to build a site which such highly divergent duplication is never going to yield great results, you'll just have to be aware of that from the outset
-
"Will the bot crawl all pages across all the stores or since it will be geolocated to only one store, the content belonging to only one store will be indexed?" - No it won't. Every time Google crawls from a different data centre, they will think all your other pages are being redirected now and that part of the site is now closed. Exempt Googlebot's user-agent from your redirects or face Google's fiery wrath when they fail to index anything properly
-
"We are also allowing customers to search for older products which they might have bought few years and that are not part of out catalogue any more. these products will not appear on the online hierarchical journey but, customers will be able to search and find the products . Will this affect our SEO ranking?" - If the pages are orphaned except in the XML sitemap, their rankings will go down over time. It won't necessarily hurt the rest of your site, though. Sometimes crappy results are better than no results at all!
Hope that helps
-
RE: Google-selected canonical: the homepage?
That is a very weird situation and I think that adding some canonical tags may not meet your expectations. If Google have strong views about what is / is not a canonical URL, they will usually override canonical tags anyway (which is usually when you get the error you have described). I think we'd really need to see some examples of actual URLs to help you
-
RE: Does redirecting from a "bad" domain "infect" the new domain?
There is a risk of the new domain being infected with the prior domain's negative equity. Basically if you put exactly the same site up as before with the same content and the domain-name is very similar, Google will assume it's the same site. For a lot of people that can be a good thing (preserving their SEO authority) but if you have negative equity it could be a very bad thing
If Google has mislabeled your site, that's what they believe your site is. Since they believe your site IS a porn site, their algorithms will probably see the new domain (with the same site on it) and interpret that as, hey this porn-site owner is trying to evade our listing bans. Ban evaders are bad, slap it back on!
Since you're already at a point where the algorithm(s) have failed you, I wouldn't expect them to suddenly become more generous in the future. You might get lucky, but you might find the issue does creep across
The good news I guess is that usually, web-hosting and domain purchases aren't bank-breakingly expensive. As such it's worth a try I suppose. Just don't be shocked if it does well for a few days, and then suddenly declines again
I know that content similarity matters a lot for positive redirects (e.g: if you want to move SEO authority from one page to another, you'd better be sure that the content is very similar - otherwise it has little to no effect even with 301s). Does this also hold true for negative redirects? I'm not certain but I would suspect that the same logic would be at play. In this situation though, since it's exactly the same site and content it could work against you and let the negativity flow through
Maybe you could create 302 redirects instead of 301 redirects (keeps the authority, in this case negative, on the redirecting URL - not the redirect destination). Maybe you could also exempt the Googlebot user-agent from the redirects so that it doesn't see them at all
That would be some stuff I would try, but I wouldn't be amazed if Google got really cross with you for SERP ban-evasion behavior. If you could possibly re-brand slightly and re-write a load of the content that might help significantly. It's a lot of money though when in reality, it's a 50/50 gambit
-
RE: Does redirecting from a "bad" domain "infect" the new domain?
To be honest that's exactly right. In actuality you aren't ban evading, because you aren't that site and it no longer lives on that domain. But if Google's algorithms are unaware of that, more issues could arise
-
RE: Should I buy an expired domain that has already been redirected to some other website? Can I use it or redirect it to my own site if I purchase it?
If the domain has already been redirected somewhere else and the redirects were accepted by Google, much of the authority for that domain may now have moved to a new location. In modern times the practice of buying domains and 301 redirecting them for extra link juice is ineffective unless you are operating under very specific circumstances (and even then it's usually considered black-hat)
Nowadays Google often checks to see if the new content and pages are similar to the old ones. If they're not then quite often the redirect doesn't work for SEO purposes
-
RE: How do I get coupon information like retailmenot has on the SERPs?
This is the search query for anyone who is interested in reproducing:
https://www.google.com/search?q=bouqs%20coupons
Google links through to this page:
https://www.retailmenot.com/view/bouqs.com
Looking at the schema read-out for the page, it's not a schema thing. It's just that Google has begun identifying patterns in how coupon sites tend to visually and architecturally lay out their coupon codes (which it is now recognising). The way the CSS classes are marked up may be helping them (plenty of references for: "OfferItemFull"). Although it's not schema code, there are some schemas which use very similar language, e.g: OfferItemCondition
They're using Nginx, with React (JS) on the front end. They're also using this, whatever the heck it is: https://www.signal.co/ - the description seems wishy-washy to me. Doesn't seem schema-related, though
-
RE: 301 Redirect in breadcrumb. How bad is it?
Past performance is seldom a good indicator of future success. The web is so competitive now that 'good unique content' isn't really good enough any more (anyone can make it)
This video from Rand is a good illustration: https://moz.com/blog/why-good-unique-content-needs-to-die-whiteboard-friday - where you say "content is original and not bad" - maybe that's not enough any more
One solution is the 10x content initiative: https://moz.com/blog/how-to-create-10x-content-whiteboard-friday
And your site should have a unique value-proposition for end users: https://www.youtube.com/watch?v=6AmRg3p79pM (just wait for Miley to stop outlining issue #1 then stop watching)
It's possible your tech issue is a contributing factor but I'd say search engine advancements and changing standards are likely to be affecting you more
Even if you do have a strong legacy, that's not a 'meal ticket' to rank well forever. SEO is a competitive environment
Sometimes tech issues (like people accidentally no-indexing their whole site or blocking GoogleBot) can be responsible for massive drops. But these days it's usually more a comment on what Google thinks is good / bad
-
RE: Barba Plugin and SEO
A good way to check would be to load one of your client's pages (which is utilising barba.js) and turn off JavaScript using the Web Developer plugin for Chrome - then reload the page and play spot the difference
If pages look very bare or nothing really loads without JS, you could have a problem. Google can penetrate and crawl JS (by using headless browsing and script execution) - but they don't do this all the time, or for everyone
I find that sites which contain more for Google to read in the non-modified source code (before scripts execute and generated items are rendered) tend to perform better in Google's SERPs (still, yes - in 2019)
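If you want to automate the same 'spot the difference' check, here is a small Python sketch (requests + BeautifulSoup; the URL and phrases are hypothetical) that looks at the unmodified source, before any scripts run, and reports whether the content you care about is already there:

```python
import requests
from bs4 import BeautifulSoup

def raw_html_contains(url: str, phrases: list[str]) -> dict[str, bool]:
    html = requests.get(url, timeout=10).text        # no script execution here
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    return {phrase: phrase.lower() in text.lower() for phrase in phrases}

print(raw_html_contains(
    "https://example.com/portfolio",                 # hypothetical client page
    ["Our services", "Case studies"],                # content that should be crawlable
))
```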
-
RE: For FAQ Schema markup, do we need to include every FAQ that is on the page in the markup, or can we use only selected FAQs?
I would gravitate to marking everything up and letting Google decide what they want to show. When you try to 'sculpt' what Google can see in terms of structured data, it usually results in a structured data spam action. Sometimes it can take weeks, months or years for that to happen - but Google always want to be given the full picture.
Google don't take too kindly to being funneled in a certain direction. Schema and rich snippet spam have been a big headache for Google since they started utilising structured data more, and some stuff (like author avatars for posts in SERPs) has been entirely taken away in the past (though someone has told me recently that they have been seeing these again, for Google's mobile layout only).
Google do have some official guidance here:
https://developers.google.com/search/docs/data-types/faqpage
They give a microdata example of implementation: https://search.google.com/test/rich-results?utm_campaign=devsite&utm_medium=microdata&utm_source=faq-page
In their example, nothing is missing or has been left out. Since that's how Google have illustrated their example, that's what I'd aim for myself
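As a rough illustration of marking everything up, here is a small Python sketch (the FAQ list is hypothetical) that builds one FAQPage JSON-LD block covering every question on the page, leaving Google to decide what it shows:

```python
import json

faqs = [  # hypothetical: every Q&A pair that is visible on the page
    ("How long does delivery take?", "Usually 3-5 working days."),
    ("Can I return an item?", "Yes, within 30 days of purchase."),
]

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

# Embed the output in the page inside <script type="application/ld+json"> ... </script>
print(json.dumps(faq_schema, indent=2))
```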
-
RE: Do self referencing links have any SEO importance?
Self-referencing hyperlinks (<a> tags) are pretty much pointless, for SEO and in general
For specific sets of Meta links (like hreflangs or canonical tags), self-referencing URLs are often a requirement and are part of SEO best practice
-
RE: Structured data: Product vs auto rental schema?
Not 100% sure if Google even reads AutoRental schema on web-pages, though there is some evidence to suggest that Google sees valid usage of AutoRental in emails
If you go here:
https://developers.google.com/search/docs/data-types/product
On the left-hand sidebar, you can see a list of all the different schemas which Google documents that they support. AutoRental isn't present there. A Google search helps to confirm this. But they do list "LocalBusiness" schema, of which "AutomotiveBusiness" and "AutoRental" are valid subtypes, so I assume that using AutoRental would be okay and acceptable to Google
It does seem that this site: https://search.google.com/structured-data/testing-tool#url=https%3A%2F%2Fwww.kayak.co.uk%2FCheap-Leicester-Car-Hire.6700.cars.ksp (Structured Data results for a car rental site) is indeed using product schema to list all the vehicles on offer, so I think it could be a good supplementary schema to go alongside AutoRental
These guys: https://search.google.com/structured-data/testing-tool#url=https%3A%2F%2Fwww.enterprise.com%2Fen%2Fcar-rental%2Flocations%2Fus%2Fny%2Fnew-york.html - are using AutoRental, and Google's structured data tool does indeed pick it up
Check more of your competitors using Google's Structured Data testing tool, if enough of them are using product schema on the vehicular product listings then I'd see no good reason to omit it
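For illustration only, here is a rough Python sketch of that combination (the business details and vehicle list are made up): an AutoRental entry alongside Product entries for the individual vehicles, output as one JSON-LD @graph:

```python
import json

vehicles = [("Compact hatchback", 29.00), ("Family SUV", 55.00)]  # hypothetical

graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "AutoRental",  # schema.org subtype of AutomotiveBusiness / LocalBusiness
            "name": "Example Car Hire",          # made-up business
            "address": "1 Example Street, Leicester",
        },
        *[
            {
                "@type": "Product",
                "name": name,
                "offers": {
                    "@type": "Offer",
                    "price": f"{price:.2f}",
                    "priceCurrency": "GBP",
                },
            }
            for name, price in vehicles
        ],
    ],
}

print(json.dumps(graph, indent=2))
```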
-
RE: How does Google handle fractions in titles?
I personally don't think that Google handles this data exceptionally well:
https://d.pr/i/2Y562I.png (Keyword Revealer screenshot)
https://d.pr/i/El2skX.png (Ahrefs screenshot)
https://d.pr/i/Y3bQ3p.png (Google keyword planner screenshot)
... however, I do sometimes see such keywords returned from Google Search Console and / or Google Analytics under GSC's "Search Queries" (search terms) report. So it makes me wonder, if Google really has such trouble, why does it highlight and record such keywords, passing them to me for further analysis?
Maybe it's actually not a big deal, it's just that Google's keyword planner (in terms of full unicode support) is way, WAY out of date (something they should have patched and fixed 5-6 years ago IMO)
Regardless of this though, more people do seem to search by 'half' or '50%'. People 'almost' never type "½", as it's so hard to type in a web browser; you almost always have to copy and paste the symbol unless you have some kind of rich-text field entry add-in / extension
Google can process the symbol as search entry text:
https://www.google.com/search?q=%C2%BD
Google often states that actually, using unicode characters (even in URLs, in UTF-8) is ok in modern times. This is a compromise they have had to make, as many foreign characters are packaged in various unicode character sets
This is the full list of UTF-8 symbols:
http://www.fileformat.info/info/charset/UTF-8/list.htm
If you Ctrl+F for '½', it is technically in that list. As early as 2008 Google was recorded indexing UTF-8 URLs:
https://www.seroundtable.com/archives/018137.html
Much more recently, the debate has been raised again:
https://searchengineland.com/google-using-non-english-urls-non-english-websites-fine-294758
"For domain names and top-level domains non-Latin characters are represented with Unicode encoding. This can look a little bit weird at first. For example, if you take Mueller, my last name, with the dots on the U, that would be represented slightly differently as a domain name. For browsers and for Google search, both versions of the domain name are equivalent; we treat them as one and the same. The rest of the URL can use unicode utf-8 encoding for non-Latin characters. You can use either the escape version or the unicode version within your website; they’re also equivalent to Google."
Obviously Google is talking about URLs here, but usually Google becomes capable of reading characters in markup (content, Page Titles etc) first and then accepts them for valid URL usage later. I would surmise that it probably is 'ok' to use them, but it probably would not be 'optimal' or 'the best idea'
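As a quick illustration of the 'escaped version vs unicode version' equivalence Mueller describes, a couple of lines of Python show both forms of the same character:

```python
from urllib.parse import quote, unquote

fraction = "½"
escaped = quote(fraction)             # the UTF-8 escape, as seen in Google's own search URL
print(escaped)                        # %C2%BD
print(unquote(escaped) == fraction)   # True: both forms decode to the same character
```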
-
RE: Removing the Trailing Slash in Magento
You could always force trailing slashes instead of removing all trailing slashes.
What you really want to establish is which structure has been linked to more often (internally and externally). A 301 redirect, even a deeper, more complex rule, is seldom the answer in isolation. What are you going to do (for example) when you implement this, then realise that most of the internal links use the opposite structure to the one you picked, and all your internal links get pushed through 301 redirects and your page-speed scores go down?
What you have to do is crawl the site now, in advance - and work out the internal structure. Spend a lot of time on it, days if you have to, get to grips with the nuts and bolts of it. Figure out which structure most internal/external links utilise and then support it
Likely you will need a more complex rule than 'force all' or 'strip all' trailing slashes. It may be the case that most pages contain child URLs or sub-pages, so you decide to force the trailing slash (as traditionally that denotes further layers underneath). But then you'll realise you have embedded images in some pages with URLs ending in ".jpg" or ".png". Those are files (hence the file extension at the end of the URL), so for those you'd usually want to strip the slash instead of forcing it
At that point you'd have to write something that said, force trailing slash unless the URL ends with a file extension, in which case always remove the slash (or similar)
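A rough Python sketch of that rule (the file-extension list is just an assumption) might look like this:

```python
from urllib.parse import urlsplit, urlunsplit

FILE_EXTENSIONS = (".jpg", ".png", ".gif", ".pdf", ".css", ".js")  # assumed list

def normalise(url: str) -> str:
    parts = urlsplit(url)
    path = parts.path or "/"
    stripped = path.rstrip("/")
    if stripped.lower().endswith(FILE_EXTENSIONS):
        path = stripped                  # files: never a trailing slash
    elif not path.endswith("/"):
        path = path + "/"                # pages: always a trailing slash
    return urlunsplit(parts._replace(path=path))

print(normalise("https://example.com/category/widgets"))    # .../widgets/
print(normalise("https://example.com/images/widget.png/"))  # .../widget.png
```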
Picking the right structural format for any site usually takes a while and involves quite a bit of research. It's a variable answer, depending upon the build of the site in question - and how it has been linked to externally, from across the web
I certainly think that too many people use the canonical tag as a 'cop out' for not creating a unified, strong, powerful on-site architecture. I would say do stick with the 301s and consolidate your site architecture, but do some crawling and backlink audits - really do it properly, instead of just taking someone's 'one-liner' answer online. Here at Moz Q&A, there are a lot of people who really know their stuff! But there's no substitute for your own research and data
If you're aiming for a specific architecture and have been told it could break the site, ask why. Try and get exceptions worked into your recommendations which flip the opposite way - i.e: "always strip the trailing slash, except in X situation where it would break the site. In X situation always force the trailing slash instead"
Your ultimate aim is to make each page accessible from just one URL (except where parameters come into play, that's another kettle of fish to be handled separately). You don't have to have EVERYTHING on the site one way or the other in 'absolute' terms. If some URLs have to force trailing slash whilst others remove it, fine. The point is to get them all locked down to one accessible format, but you can have varied controlled architectures inside of one website