Posts made by ThompsonPaul
-
RE: Why is Indeed.com traffic appearing as organic in Google Analytics?
In addition to the possibility Cat mentions, it's also possible someone has configured Indeed.com as an additional search engine in GA's Admin settings (Property > Tracking Info > Organic Search Sources - see screenshot in next comment). As far as fixing the past data, you can't "change" it, but you could use a custom segment that filters it out.
Let us know what you find!
Paul
(Tried to include screenshot, but Moz keeps barfing on submit.)
-
RE: Why is our noindex tag not working?
That SF response is from the robots.txt block, not a noindex tag though. SF is also ignoring the incorrectly formatted tag (as it should).
Paul
-
RE: Why is our noindex tag not working?
The example page does have a noindex tag in place, but it's not formatted correctly, so it's being ignored. It's a very subtle issue: your tag is using "smart quotes" around the attribute values instead of the plain quotation marks that are required for code. If you look very carefully at the page source code, you'll see they're the curly quotation marks you'd see in a Word document - the ones at the beginning of robots and noindex curl a different way than the ones at the end. This usually occurs when the content was written in a word processor instead of a plain-text editor.
Because the tag's not formatted correctly, it's ignored by both the crawling tools and the search engines.
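As a rough illustration (generic markup, not your exact tag), the only difference is the quote characters:
<meta name="robots" content="noindex">   <- straight quotes, will be respected
<meta name=“robots” content=“noindex”>   <- smart quotes, will be ignored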
In addition, the site also has all pages blocked from crawling by the sitewide robots.txt file. This and noindex are conflicting instructions to search engines.
If a page is blocked in robots.txt, the search engine will not crawl the page and so is not able to discover the noindex tag, even if it were formatted correctly. Therefore, if the search engine becomes aware of the page in any way other than straight crawling (and there are a number of ways this can happen), the page will still get indexed.
If it's a dev site, the proper way to keep it from being indexed is to either noindex all pages, or to put the site behind a password so the search engines and public visitors can't access it. If using noindex, the site must not be blocked with a robots.txt directive.
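For reference, a sitewide block like the one on your site typically looks like this in robots.txt (yours may differ slightly), and it's this rule that would need to be removed if you're relying on the noindex tags instead:
User-agent: *
Disallow: /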
Does that all make sense?
Paul
-
RE: 3rd Party Reviews - Schema Implementation
Google's been quite clear that reviews on our sites from 3rd party review sources must not have review markup, SK. It's grounds for a "manipulative markup" penalty.
Review markup should "Only include critic reviews that have been directly produced by your site, not reviews from third-party sites or syndicated reviews."
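If you do publish critic reviews your own team has written, a minimal first-party markup sketch (names and values here are invented for illustration - adjust to your actual content) would look something like this in JSON-LD:
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Review",
  "itemReviewed": { "@type": "Product", "name": "Example Product" },
  "author": { "@type": "Organization", "name": "Your Site Name" },
  "reviewRating": { "@type": "Rating", "ratingValue": "4", "bestRating": "5" },
  "reviewBody": "Short summary of your own editorial review."
}
</script>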
Paul
-
RE: Does Google penalize you for reindexing multiple URLS?
Nope - not for submitting for indexing. If there have been considerable page changes though, traffic could fluctuate as search engines take their time figuring out and understanding the changes.
Paul
-
RE: How to find all broken images?
You'll probably find Xenu Link Sleuth will work for this, Red. Not quite as elegant as using an extraction in Screaming Frog, but it is free.
Lemme know if that works?
Paul
-
RE: Probably basic, but how to use image Title and Alt Text - and confusing advice from Moz!
That Moz help page is kinda half-right. In the absence of a title attribute, many browsers will display the alt text on hover instead. But if a title attribute is declared, it will be used, as you note.
Keep in mind - image title attributes are not used as ranking factors for regular search, but they are used as ranking factors for Google Image Search. So still well worth optimising them if your site benefits from image search specifically (as a good photographer's site likely would).
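For clarity, here's what the two attributes look like on an image tag (filename and text are just placeholders):
<img src="red-widget.jpg" alt="Red widget with chrome handle" title="Our best-selling red widget">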
Paul
-
RE: Htaccess - Redirecting TAG or Category pages
The regex in your RedirectMatch doesn't say what you think it says, Jes.
This part (note the bolded **(.*)** at the end of the expression):
/category/Sample-Category**(.*)**
doesn't actually say "match the URL that is specifically **/category/Sample-Category**".
That .* is a wildcard that means "and any additional characters that might occur here".
So what it's really saying is "match the URL /category/Sample-Category _as well as_ any URLs that have additional characters after the letter 'y' in Category" - which is what's catching your -1 variation of the URL (and the -size-30 in your second example).
In addition, that wildcard has been captured as a variable (that's what the parentheses do), which you're then appending to the end of the new URL (with the $1) - which I don't think is your intent.
Instead, try:
RedirectMatch 301 /category/Sample-Category https://OurDomain.com.au/New-Page/
You should get the redirect you're looking for, without it interfering with the other ones you wish to write.
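One caveat, just as a sketch (using the placeholder domain from your example): RedirectMatch patterns aren't anchored, so if you find the simpler rule above still catching the longer category URLs, anchoring the expression makes the match exact:
RedirectMatch 301 ^/category/Sample-Category/?$ https://OurDomain.com.au/New-Page/
The ^ and $ pin the pattern to the whole path, and the optional /? covers the trailing-slash variation.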
Let me know if that solves the issue? Or if I've misunderstood why you were trying to include the wildcard variable?
Paul
P.S. You'll need to be very specific about whether the origin and target URLs use trailing slashes - I've just replicated the examples you provided.
-
RE: Will changing the property from http to https in Google Analytics affect main unfiltered view?
"The wrong name in the 'Property name' field or wrong setting in the http_ or https doesn't affect the data collection in your GA account." _I know - which is why I explained that changing the protocol there to HTTPS won't have any effect on the archive View either, which was the OP primary question.
"...verify all the properties and choose the preferred one" will not have any effect on "help[ing] me avoid a common problem others have experienced" as you state. That problem (Referral visits recorded as Direct in GA) is caused by the referral data being stripped out of the request when it travels from an HTTP site to an HTTPS site. There's nothing in GSC that can have any effect on this - it is entirely controlled by the server headers of the connection request.
There's nothing about Kevin's original question that has anything to do with or can be addressed in Search Console.
P.
-
RE: FAQ page structure
I'm afraid we're in a bit of a state of limbo on this issue, Nickington.
Currently, Google's ranking is based on the desktop version of the site for both desktop and mobile results.
Google has clearly stated, and many tests have confirmed, that content which is not visible unless a user interacts with the page (such as having to click the drop-down for the FAQ result) is deemphasized in search results.
BUT! Google has also stated that they are in the midst of changing to a mobile-first index which will mean that the mobile version of websites will be used for ranking assessment. In addition, they've been quite clear that at that point, since things like accordion drop-downs are so much better UX for mobile users, that kind of hidden content will no longer be "penalised".
Unfortunately, there's been no declared date for when the switch to the mobile Index will occur. Instead, they've said that it will be rolled out gradually to individual sites as they detect that the mobile version of a site is ready for it. This means it's entirely impossible to assess when the changeover might apply to your site.
So for absolute best SEO, the solution is unfortunately a bunch of extra work for a hybrid approach. My best recommendation would be to build out the FAQ content using headers and sub-headers so the content is fully visible on the page and gets full indexing authority from the search engine. Then keep an eye on the mobile indexing of your site to detect when it has moved fully onto the mobile-first index, and at that point redo the FAQ page to use the accordion drop-downs instead.
The alternative would be to build out the page using the accordion drop-downs from the start, and accept that it will be some time before that hidden content has a chance to rank effectively. This would definitely be a second-best option in my opinion.
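As a rough sketch of the two approaches (generic markup with invented question text, not a copy-paste solution), the fully visible version is just headings and paragraphs, while the accordion version hides the answer until the question is clicked:
<!-- Fully visible: headings + answers, nothing hidden -->
<h2>FAQs</h2>
<h3>How long does shipping take?</h3>
<p>Most orders arrive within 3-5 business days.</p>
<!-- Accordion: answer hidden until the user clicks -->
<details>
  <summary>How long does shipping take?</summary>
  <p>Most orders arrive within 3-5 business days.</p>
</details>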
Does that all make sense?
Paul
-
RE: CSS user select and any potential affect on SEO
Nope, no effect, as you suspect, Eddie. That kind of attempt at copy-blocking doesn't change the way the content is available in the DOM of the page (which is why it's so ineffective), so it has no effect on crawling/indexing.
You can prove this for yourself by going to a page and right-clicking to select the browser Inspect mode. This mode shows the actual rendered DOM of the page that the search engines are reading, and you'll see the content is easily accessible. The other option is to do a Fetch and Render request from within the site's Google Search Console and it will also show you what content Google can see.
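For reference, the copy-blocking in question is usually just a CSS rule along these lines (the selector here is only illustrative):
body {
  -webkit-user-select: none;
  user-select: none;
}
It only changes how the browser handles text selection; the text itself is still right there in the HTML and the rendered DOM.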
That what you were looking for?
Paul
-
RE: Will changing the property from http to https in Google Analytics affect main unfiltered view?
He's talking about the effect of updating the default URL in the Google Analytics Property Settings, Veronica - nothing to do with Google Search Console.
P.
-
RE: Will changing the property from http to https in Google Analytics affect main unfiltered view?
Lemme try that again
1. Updating the protocol in your GA Property settings won't have any harmful effect on your archive view (or any other view).
2. Setting the Property address to HTTPS isn't what's going to determine if the incoming referral data is available - that's been determined before the visits actually arrive by the browser connection and server headers. If the visit to HTTP is coming from HTTPS, the referrer data was stripped out before the request was sent. GA just uses whatever it receives. (My point was, even if you don't set the protocol to HTTPS in your Profile, the referrer data will come through anyway. But getting your GA set to the correct HTTPS address reinforces this, so still a good idea.)
Hope that clarifies?
Paul
-
RE: Google Indexing Of Pages As HTTPS vs HTTP
That's not going to solve your problem, vikasnwu. Your immediate issue is that you have URLs in the index that are HTTPS and will cause searchers who click on them not to reach your site due to the security error warnings. The only way to fix that quickly is to get the SSL certificate and the redirect to HTTP in place.
You've sent the search engines a number of very conflicting signals. Waiting while they try to work out what URLs they're supposed to use and then waiting while they reindex them is likely to cause significant traffic issues and ongoing ranking harm before the SEs figure it out for themselves. The whole point of what I recommended is it doesn't depend on the SEs figuring anything out - you will have provided directives that force them to do what you need.
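For what it's worth, a generic .htaccess sketch of that HTTPS-to-HTTP redirect (example.com is a placeholder, and it assumes the certificate is already installed so the HTTPS URLs resolve without warnings):
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]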
Paul
-
RE: Will changing the property from http to https in Google Analytics affect main unfiltered view?
The short answer, Kevin, is no, updating the protocol to HTTPS won't have any negative effect on your archival GA view.
Just having the visitor connection resolve at the HTTPS address "should" transmit the referrer info fully (it's the browser that determines this, not GA), but always good to back this up by having the GA property properly configured for the HTTPS update.
Little sidenote - since your site is now HTTPS, any referrals it sends to other non-HTTPS sites will get stripped. If it's important to you to have those other sites recognise you sent them traffic (this is important in some partnership/affiliate/advertiser situations, for example), you can add a Meta Referrer tag to your site so that it will send at least some of the referrer info even to a non-HTTPS site. You can select how much info gets passed based on your security sensitivities.
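A minimal sketch of that tag (the policy value shown is just one of several options - pick based on your sensitivity):
<meta name="referrer" content="origin-when-cross-origin">
With this policy, only your domain (not the full URL path) is passed as the referrer when the destination is on a different site, including non-HTTPS ones.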
That what you were looking for?
Paul
-
RE: US and UK Websites of Same Business with Same Content
Yup, but doesn't matter. Hreflang works for this situation whether cross-domain or on a subdirectory/subdomain basis (and in fact is even more effective when cross-domain as you're also getting the benefit of the geo-located ccTLD.)
P.
-
RE: US and UK Websites of Same Business with Same Content
Unfortunately, your information is incorrect, Veronica.
Hreflang is specifically designed for exactly this situation. As Google Engineer Maile Ohye clearly states, one of the primary uses of hreflang markup is:
- Your content has small regional variations with **similar content in a single language**. For example, you might have English-language content targeted to the US, GB, and Ireland.
(https://support.google.com/webmasters/answer/189077?hl=en)
There's no question differentiating similar content in the same language for different regions/countries is more of a challenge than for totally different languages, but it can absolutely be done, and in fact is a very common requirement for tens of thousands of companies.
Paul
-
RE: US and UK Websites of Same Business with Same Content
The more you can differentiate these two sites, the better they will each perform in their own specific markets, CP.
First requirement will be a careful, full implementation of hreflang tags for each site.
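As a rough sketch of what that markup looks like (domains and paths here are placeholders), each page on both sites needs the full set in its <head>, including a self-reference:
<link rel="alternate" hreflang="en-us" href="https://www.example.com/services/" />
<link rel="alternate" hreflang="en-gb" href="https://www.example.co.uk/services/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/services/" />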
Next, you'll need to do what you can to regionalise the content - for example, changing to UK spelling for the UK site content, making sure prices are referenced in pounds instead of dollars, and changing up the language to use British idioms and locations as examples where possible. It'll also be critical to work towards having the reviews/testimonials on each site come from its own country, rather than being generic. This will help dramatically from a marketing standpoint and also help differentiate for the search engines, so a double win.
And finally, you'll want to make certain you've set up each site in its own Google Search Console and used the geographic targeting for the .com site to specify its target as the US. (You won't need to target the UK site, as the .co.uk is already geo-targeted, so you won't get that option in GSC.) If you have an actual physical address/phone in the UK, it would also help to set up a separate Google My Business profile for the UK branch.
Bottom line is - you'll need to put in significant work to differentiate the sites and provide as many signals as possible for which site is for which country in order to help the search engines understand which to return in search results.
Hope that all makes sense?
Paul
-
RE: I want to use some content that I sent out in a newsletter and post as a blog, but will this count as duplicate content?
You've asked a great question, Wagada. The fact that the version of the content on Constant Contact's page has already been indexed does mean that you'll have a duplicate content challenge, but there are ways to address it.
The whole problem with duplicate content is not that it generates some kind of penalty (it doesn't); it's just that search engines then have to decide which of the dupe pages they should point to in the search results.
The version you publish on your own site already has several things going for it, and you need to add additional signals to help the search engines prioritise your site's version. First, at least part of the rest of your site is probably already talking about the same topics, so there will be more relevance there than from the random topics on Constant Contact. Plus, if your newsletter is like most, it will be linking back to your site, giving the SEs another signal.
The biggest thing you can do to get your site's page considered as the canonical (primary) version is to get at least a few links pointing to it. Social media links can be very useful for this, especially from Google Plus, but a solid link or two from other sites will go a long way as well. Also, make sure your page does NOT link to the CC page - that way there's a clear authority signal that only travels one way.
For future reference, if you're going to publish newsletter content on your own site, there are a couple of steps to take in preparation.
- Publish the content on your own site a day or a couple of days in advance
- Use the Fetch and Render tool in GSC to help it get crawled and indexed before sending the newsletter (SEs take "first published" date into account when trying to ascertain which page to return in results.)
- Make sure it's strongly-linked internally - maybe even put a link to the newsletter content page on your homepage before sending the newsletter
- Get a few incoming links to the newly-published page before the newsletter goes out.
- Use the newly published page's address in the newsletter's preheader text link where it says "If not showing up well in your email, you can read this in your browser" so the dupe page actually links back to the page you want to be considered primary.
- Or, better yet, do the above and also turn off the newsletter archive on Constant Contact altogether and make the pre-published page on your site the only version. This is the best option, but obviously takes a bit more work and preparation to pre-publish. It also offers the massive benefit of delivering those newsletter readers who do want to read in a browser to your own pages, where you can induce further activity/conversions. Though it should be said that in the newsletters I've managed, very few people click the "view in browser" links anymore anyway.
Hope all that makes sense?
Paul
-
RE: I want to use some content that I sent out in a newsletter and post as a blog, but will this count as duplicate content?
While that would be a good solution if it were possible, unfortunately ESPs like Constant Contact don't give you any way to alter the content of the <head> of their pages. And canonical tags must be in the <head> or they'll be ignored.
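For reference, the tag being discussed is just a one-liner in the <head> (the URL here is a placeholder for the blog version of the content):
<link rel="canonical" href="https://www.example.com/blog/newsletter-article/" />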
-
RE: Google Indexing Of Pages As HTTPS vs HTTP
Great! I'd really like to hear how it goes when you get the switch back in.
P.
-
RE: Should the Product Name/Keyword be first in meta description?
Where the words are in a meta-description is not a ranking factor, Icarus. Think of meta descriptions as your opportunity to make a mini sales pitch for your page on the search results page.
You'll want to use the primary keywords that explain what the page is about, as that just makes sense, but artificially forcing them to be the first words can make the meta-description look very spammy and artificial in many cases.
There is a benefit to having the words in the meta description that your visitor actually searched for, as they will show up in bold in the description, but remember they'll also be showing in bold in the page title too, so overdoing/forcing it can contribute to looking artificial, which can turn visitors off.
Also to keep in mind, especially after last week's Google change to longer meta-descriptions, is that Google will often change the meta description if they think the one you wrote isn't a good match for the searcher's query. So keeping them effectively descriptive of the page, instead of keyword-stuffed, and having a good call-to-action in the description is still your best bet.
In your specific example, if the page is primarily about the wholesale distribution of that product, it makes perfect sense to include that in the description. Whether those should be the first words depends entirely on whether you can write a natural-sounding description that way. I often use such words to expand on what the page is about in a way that can't be effectively handled in the much shorter, more restrictive page title.
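As a purely hypothetical sketch (product name and copy invented for illustration), something like this keeps the key terms near the front without sounding forced:
<meta name="description" content="Wholesale distribution of Acme widgets across the US - bulk pricing, fast shipping, and dedicated account support. Request a trade quote today.">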
Hope that helps?
Paul
-
RE: Should the Product Name/Keyword be first in meta description?
This is relatively true for page titles, but the OP is asking about meta-descriptions.