Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies.
Posts made by effectdigital
-
RE: Does anyone rate CORA SEO Software?
Never heard of it. The main site neglects to mention what any of the features are and looks a bit 'thrown up' there. Personally, without further info, I wouldn't shell out for anything like that
-
RE: Tracking PDF downloads from SERP clicks
To address the main question (sorry, we got a bit off track): you can set up virtual page-views which fire when links to these PDF URLs are clicked. In some browsers this will trigger a download; in others (like Chrome, which contains a built-in PDF viewer), a download may not actually occur at all unless the site has been coded a certain way. The PDF may simply open in a new tab and render like a web page, with a full URL
As such, I prefer to pipe virtual page-views to Google Analytics when the links to these documents are clicked, to track their views / downloads (under normal circumstances you can't distinguish between those two view types). Even when a PDF is being viewed 'as' a page on your site in a new tab, remember that PDF documents don't support the GA tracking script, so views of those PDF URLs get 'lost' from GA. You need virtual page-views to remedy that
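For anyone who wants a concrete starting point, here's a minimal sketch of that virtual page-view approach. It assumes the classic analytics.js `ga()` global is already loaded on the page; the `/virtual/pdf` path prefix is just an illustrative naming convention, not anything prescribed by Google:

```typescript
// Minimal sketch: send a virtual page-view to Universal Analytics (analytics.js)
// whenever a link to a PDF is clicked. Assumes the standard ga() global is
// already loaded; the "/virtual/pdf" prefix is an illustrative convention.
declare function ga(command: string, hitType: string, page?: string): void;

document.addEventListener('click', (event) => {
  const target = event.target as Element | null;
  const link = target?.closest('a');
  if (!link || !link.href.toLowerCase().endsWith('.pdf')) {
    return;
  }
  // Record the click as a page-view so PDF "views" show up in GA content
  // reports, even though the PDF itself can't run the tracking script.
  const pdfPath = new URL(link.href).pathname;
  ga('send', 'pageview', '/virtual/pdf' + pdfPath);
});
```

If you're on gtag.js or GA4 instead, the same idea applies - you'd just send the equivalent page_view event rather than the analytics.js call above.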
-
RE: Find archived sitemap of a website that no longer exists
You can use this site to see legacy site-maps for some websites (though they may be partial or incomplete):
For example, check these sitemap results:
For smaller sites, the results are much easier to look at.
-
RE: Tracking PDF downloads from SERP clicks
This has actually significantly changed my views on PDF optimisation; I didn't know they held so much optimisation potential. I have always agreed with allowing them to be indexed, but pushed to have them replaced with pages (which contain optional links / buttons to download the original PDF, for users who prefer that)
The sticking point is usually budget. Many clients can't afford the required redesign efforts, so it's good to know that PDFs actually hold (within their native format) some optimisation potential. Thank you EGOL
-
RE: Subdomain or subfolder?
The only real info I've seen direct from Google on a similar subject, is on this page:
If you scroll down, there's a table on the page setting out Google's supposed views on the pros and cons of different configurations (e.g. sub-folders vs sub-domains) when considering an international roll-out. Obviously your situation is slightly different
All I'd say is, your home-page is supposed to be the top of the tree - the main page from which all other sub-things (including sub-pages and sub-domains) stem
That being the case - why the heck would you have the homepage on a sub-domain sub-page? It's kind of like building an automobile with its wheels on the roof
I can't find any specific guidance on why you shouldn't do this. But my suspicion is, no one has felt the need to write much on it because it seems like sheer lunacy
If you have a developer / designer who can't work with a normal structure, **I'd probably replace them with someone more competent**. That to me sounds like very worrying whinging (and I'm usually someone who backs devs to the hilt!)
-
RE: Is there a benefit to changing .com domain to .edu?
No, none whatsoever. The old TLD bonus debates observed an accurate correlation but inferred completely inaccurate causality
People thought:
1) I see lots of EDU sites
2) They rank really well
3) If I make an EDU it will rank well
... WRONG! Google aren't that stupid. Otherwise all webmasters would now be using EDU domains and all other domains would be pointless (which would be a weird internet to live on)
The truth was actually this:
1) EDU TLDs (Top-Level Domains) tend to be chosen by educational bodies or organisations
2) Such organisations are usually run by educated people and academics
3) One thing those people are good at, is creating really strong (in-depth) accurate content
4) As such many EDU sites naturally became prominent, because of Google's normal ranking rules (not some weird EDU TLD bonus scheme)
If you're looking for quick and easy answers in SEO, you're gonna have a bad time
-
RE: Rel="prev" / "next"
I had never actually considered that. My thought is: no. I'd literally just leave canonicals entirely off ambiguous URLs like that. I've seen a lot of instances lately where over-zealous sculpting has led to a loss of traffic. In the instance of this exact comment / reply it's just my hunch, but I'd remove the tag entirely. There's always risk in adding layers of unrequired complexity, even if it's not immediately obvious
-
RE: Few pages without SSL
It may potentially affect the rankings of:
- pages without SSL
- pages linking to pages without SSL
At first not drastically, but you'll find that you fall further and further behind until you wish you had just embraced HTTPS.
The exception to this, of course, is if no one competing over the same keywords is fully embracing SSL. If the majority of the query-space's ranking sites are insecure then, even though Google frowns upon that, there's not much they can do (they can't just rank no one!)
So you need to do some legwork. See if your competitors suffer from the same issue. If they all do, maybe don't be so concerned at this point. If they're all showing signs of fully moving over to HTTPS, be more worried
-
RE: Rel="prev" / "next"
Both are directives to Google. All of the "rel=" links are directives, including hreflang, alternate/mobile, AMP and prev/next
It's not really necessary to use a canonical tag in addition to any of the other "rel=" family of links
A canonical tag says to Google: "I am not the real version of this page, I am non-canonical. For the canonical version of the page, please follow this canonical tag. Don't index me at all, index the canonical destination URL"
The pagination-based prev/next links say to Google: "I am the main version of this page, or one of the other paginated URLs. Did you know, if you follow this link, you can find and index more pages of content if you want to"
So the problem you create by using both, is creating the following dialogue to Google:
1.) "Hey Google. Follow this link to index paginated URLs if they happen to have useful content on"
*Google goes to paginated URL
2.) "WHAT ARE YOU DOING HERE Google!? I am not canonical, go back where you came from #buildawall"
*Google goes backwards to non-paginated URL
3.) "Hey Google. Follow this link to index paginated URLs if they happen to have useful content on"
*Google goes to paginated URL
4.) "WHAT ARE YOU DOING HERE Google!? I am not canonical, go back where you came from"
*Google goes backwards to non-paginated URL
... etc.
As you can see, it's confusing to tell Google to crawl and index URLs with one tag, then tell them not to with another. All your indexation factors (canonical tags, other rel links, robots tags, HTTP header X-Robots, sitemap, robots.txt files) should tell the SAME, logical story (not different stories, which contradict each other directly)
If you point to a web page via any indexation method (rel links, sitemap links) then don't turn around and say, actually no I've changed my mind I don't want this page indexed (by 'canonicalling' that URL elsewhere). If you didn't want a page to be indexed, then don't even point to it via other indexation methods
A) If you do want those URLs to be indexed by Google:
1) Keep in mind that by using rel prev/next, Google will know they are pagination URLs and won't weight them very strongly. If however, Google decides that some paginated content is very useful - it may decide to rank such URLs
2) If you want this, remove the canonical tags and leave rel=prev/next deployment as-is
B) If you don't want those URLs to be indexed by Google:
1) The canonical / no-index is only a directive which Google can disregard, but it will be much more effective because you won't be contradicting yourself
2) Remove the rel= prev / next stuff completely from paginated URLs. Leave the canonical tag in place and also add a Meta no-index tag to paginated URLs
Keep in mind that just because you block Google from indexing the paginated URLs, it doesn't necessarily mean the non-paginated URL will rank in the same place (with the same power) as the paginated URLs, which will be mostly lost from the rankings. You may get lucky in that area, you may not - depending upon the content similarity of both URLs, and upon whether Google's perceived reason to rank that URL hinged strongly on a piece of content that exists only in the paginated variant
My advice? Don't be a control freak and use option (B). Instead use option (A). Free traffic is free traffic, don't turn your nose up at it
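If you go with option (A), a rough sketch of what the head markup on a paginated URL could look like is below. This is purely illustrative and assumes a "?page=N" URL pattern (adjust for your own scheme); the point is simply prev/next links with no canonical pointing back at page 1:

```typescript
// Rough sketch, not a drop-in implementation: build the <link rel="prev"> and
// <link rel="next"> tags for page `page` of `totalPages`, with no canonical
// back to page 1 - i.e. option (A) above. The "?page=N" pattern is an assumption.
function paginationLinkTags(baseUrl: string, page: number, totalPages: number): string[] {
  const tags: string[] = [];
  if (page > 1) {
    const prev = page === 2 ? baseUrl : `${baseUrl}?page=${page - 1}`;
    tags.push(`<link rel="prev" href="${prev}">`);
  }
  if (page < totalPages) {
    tags.push(`<link rel="next" href="${baseUrl}?page=${page + 1}">`);
  }
  return tags;
}

// e.g. page 2 of 5:
// paginationLinkTags('https://example.com/blog/', 2, 5)
//   -> ['<link rel="prev" href="https://example.com/blog/">',
//       '<link rel="next" href="https://example.com/blog/?page=3">']
```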
-
RE: Few pages without SSL
Yes, that can hurt Google rankings. Insecure pages tend to rank less well, and over time that trend is only set to increase (Google is becoming less and less accepting of insecure pages; eventually they will probably be labelled a 'bad neighborhood', like gambling and porn sites). Additionally, URLs which link out to insecure pages (pages not on HTTPS) can also see adverse ranking effects, as Google knows those pages are likely to direct users to insecure areas of the web
At the moment you can probably get by with some concessions - namely, accepting that the insecure URLs probably won't rank very well compared with pages offering the same entertainment / functionality which have fully embraced secure browsing (on HTTPS, still responsive, not linking to insecure addresses)
If you're confident that the functionality you're offering fundamentally can't be offered through HTTPS, then that may be only a minor concern (as all your competitors are bound by the same restrictions). If you're wrong, though, you're gonna have a bad time. Being 'wrong' now may be more appealing than being 'dead wrong' later
Google will not remove the warnings your pages have, unless you play ball. If you think that won't bother your users, or that your competition is fundamentally incapable of a better, more secure integration - fair enough. Google is set to take more and more action on this over time
P.S: if your main, ranking pages are secure and if they don't directly link to this small subset of insecure pages, then you'll probably be ok (at least in the short term)
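If you want a quick way to spot that kind of linking, here's a small sketch you could paste into the browser console on any given page. It's purely illustrative - it only checks the rendered DOM of the page you run it on, not the whole site:

```typescript
// List any outbound links on the current page that point at plain http://
// destinations (the "pages linking to pages without SSL" case).
const insecureLinks = Array.from(document.querySelectorAll<HTMLAnchorElement>('a[href]'))
  .map((a) => a.href)
  .filter((href) => href.startsWith('http://'));

if (insecureLinks.length > 0) {
  console.warn('Links to insecure (non-HTTPS) URLs:', insecureLinks);
} else {
  console.log('No insecure outbound links found on this page.');
}
```

A proper crawler will do the same job across the whole site, but this is handy for spot-checking your main ranking pages.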
-
RE: Robots.txt & Disallow: /*? Question!
With this kind of thing, it's really better to pick the specific parameters (or parameter combinations) which you'd like to exclude, e.g:
User-agent: *
Disallow: /shop/product/&size=*
Disallow: */shop/product/*?size=*
Disallow: /stockists?product=*
^ I just took the above from a robots.txt file which I have been working on, as these particular pages don't have 'pretty' URLs with unique content on. Very soon now that will change and the blocks will be lifted
If you are really 100% sure that there's only one param which you want to let through, then you'd go with:
User-agent: *
Disallow: /*?
Allow: /*?utm_source=google_shopping
Allow: /*&utm_source=google_shopping*
(or something pretty similar to that!)
Before you set anything live, get down a list of URLs which represent the blocks (and allows) you want to achieve, and test it all with the robots.txt Tester in Search Console first!
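If it helps to sanity-check that list programmatically as well, below is a very simplified sketch of how Google evaluates Allow / Disallow rules (per their documented behaviour: the longest matching pattern wins, and Allow wins a tie). It is not a full robots.txt parser - it only understands "*" and a trailing "$" - and the example rules mirror the utm_source block above:

```typescript
// Simplified robots rule matcher: longest matching pattern wins; Allow wins ties.
// Not a full robots.txt parser - patterns support only "*" and a trailing "$".
type Rule = { type: 'allow' | 'disallow'; pattern: string };

function patternToRegExp(pattern: string): RegExp {
  // Escape regex metacharacters, then translate the robots wildcards.
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, '\\$&');
  const translated = escaped.replace(/\*/g, '.*').replace(/\\\$$/, '$');
  return new RegExp('^' + translated);
}

function isAllowed(path: string, rules: Rule[]): boolean {
  let best: { rule: Rule; length: number } | null = null;
  for (const rule of rules) {
    if (!patternToRegExp(rule.pattern).test(path)) continue;
    const length = rule.pattern.length;
    if (
      best === null ||
      length > best.length ||
      (length === best.length && rule.type === 'allow' && best.rule.type === 'disallow')
    ) {
      best = { rule, length };
    }
  }
  // No matching rule at all means the URL is crawlable by default.
  return best === null || best.rule.type === 'allow';
}

// Mirroring the example rules above:
const rules: Rule[] = [
  { type: 'disallow', pattern: '/*?' },
  { type: 'allow', pattern: '/*?utm_source=google_shopping' },
  { type: 'allow', pattern: '/*&utm_source=google_shopping' },
];
console.log(isAllowed('/page?utm_source=google_shopping', rules)); // true
console.log(isAllowed('/page?colour=red', rules));                 // false
```

Treat the Search Console tester as the source of truth though - this is just a rough pre-flight check.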
-
RE: Can an external firewall affect rankings?
Site speed impact is where I see this becoming a real problem, unless the setup is done correctly
-
RE: Correct use of schema for online store and physical stores
Google state here:
https://developers.google.com/search/docs/data-types/local-business
that "LocalBusiness" is the type they use. "Organization" does not appear in that list
Think about what you want to achieve. Utilising schema helps contact details (and many other, granular pieces of information) to jump out for brand, or entity-based queries
If you have a head office which you're working on, aren't most of the queries to HQ internal? Do you really want people calling HQ instead of going to one of the purpose-built consumer outlets? Obviously, if you're looking to attract a mixture of B2B and B2C leads, what I'm saying might not quite be accurate
In most circumstances, I wouldn't want work-offices (HQ) to be more visible in Google's search results, so I would eradicate all schema. Then I'd just go with LocalBusiness schema for all the outlets
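To make that concrete, below is a minimal sketch of what the LocalBusiness mark-up for one outlet could contain - the business name, address and other details are placeholders, and in a page template the object would be serialised into a script type="application/ld+json" tag on that outlet's own page:

```typescript
// Minimal LocalBusiness sketch for a single outlet - all details are placeholders.
const outletSchema = {
  '@context': 'https://schema.org',
  '@type': 'LocalBusiness',
  name: 'Example Store - High Street',
  url: 'https://www.example.com/stores/high-street',
  telephone: '+44 1234 567890',
  address: {
    '@type': 'PostalAddress',
    streetAddress: '1 High Street',
    addressLocality: 'Exampletown',
    postalCode: 'EX1 2MP',
    addressCountry: 'GB',
  },
  openingHours: 'Mo-Sa 09:00-17:30',
};

// Serialised into the page head (one block per outlet page):
const jsonLd = `<script type="application/ld+json">${JSON.stringify(outletSchema)}</script>`;
```

You'd repeat this (with the right details) on each outlet's own landing page rather than lumping every location onto one URL.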
-
RE: Canonical and Alternate Advice
This is the correct solution!