Hey @Viktoriia1805,
Would you be able to let us know the URL so we can check this?
Also are there any errors showing when you run the URL through the inspection tool on Google Search Console (as below)?
Hi,
They have confirmed that this has no impact on your search ranking:
https://searchengineland.com/google-says-keywords-tld-part-url-ignored-ranking-purposes-251971
Cheers,
Sean
Hey Moz folks,
I'm in the process of getting a deck together to discuss 'Disruption in search' and I wondered if the community would be able to share any ideas of their own as to what they feel is/has/will be disrupting the search/search marketing industry?
Any input would be appreciated.
Thanks!
Sean
Hi,
If you go to sitemaps and then run the same report against your image sitemap, it should shed some more light. Make sure to click on 'excluded' before you post the next screenshot. This will then list out the reasons that the images are not being indexed.
Kind regards,
Sean
Hi There,
Auto tagging is a great place to start.
If that isn't possible, or if it's breaking, you can manually specify UTM values by adding a query extension to each landing page URL.
Google's tool for doing this is here > https://ga-dev-tools.appspot.com/campaign-url-builder/
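For illustration (the domain and parameter values below are just placeholders), a manually tagged landing page URL would look something like:

```
https://www.example.com/landing-page?utm_source=newsletter&utm_medium=email&utm_campaign=spring_sale
```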
All the best,
Sean
Hi There,
Press the 'go to new report' button in the top right. It should take you to the new Google Search Console report which will tell you the exact reasons why URLs that you're submitting aren't being indexed.
Feel free to post the screenshot of the results on here and I can help you figure out what to do next.
All the best,
Sean
Hi There,
First Question:
There are robots.txt files in these locations; however, neither references a disallow for this page, so this isn't the issue:
I would use the new version of Google Search Console and use the URL inspection tool. This will let you know why the URL is being excluded and give you the option to manually submit it if everything is ok with it.
Second Question:
Google can be a bit funny about meta descriptions, especially ones that contain emoji characters like the ticks you are showing within the Yoast meta description field. Sometimes if it deems the meta description irrelevant, it can rewrite it itself. If this has been changed recently, I would give it a bit more time to refresh and see what happens.
I hope that helps!
Kind regards,
Sean
Hi there,
This looks to be something to do with a URL generator that is dynamically creating URLs based on certain categories being present.
The downside being that those categories look to have been removed, hence the 'null' values being passed through.
I would push this back to your developers as a serious issue; there could be a large amount of duplication occurring if you're not careful.
All the best,
Sean
Hi There,
It all looks fine to me when I search for Sumy Designs. Is this your business?
If so, this might have rectified itself.
All the best,
Sean
Hi Nikki,
You're absolutely right. There will be a domain-level http > https redirect in place within the htaccess file, as well as page-to-page redirects.
Any page-to-page redirects that sit below that rule can be added manually and will be caught **after** the domain-level rule, so they will not conflict.
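As a rough sketch (assuming Apache with mod_rewrite, not your actual file), the ordering would look something like this:

```apache
RewriteEngine On

# Domain-level rule: send any http request to its https equivalent
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]

# Page-to-page redirects sit below and are picked up on the follow-up https request
RewriteRule ^old-page/?$ /new-page/ [R=301,L]
```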
All the best,
Sean
Hi Alan,
If they're not willing to share their link/outreach targets with you, I would steer clear of them.
What they're saying about it being their 'intellectual property' is complete rubbish. At the end of the day, the site they're building links to is your business and it should be in your best interest to ensure that links that are being built to the website are of high quality. It sounds like they're being unnecessarily shady.
Find yourself an SEO that is transparent and working **with you** to create a solid link acquisition strategy, not working in isolation and secrecy.
All the best,
Sean
Hi there!
You're absolutely correct, you would just need a domain-level server redirect to take all http URLs to their https equivalent.
Best practice is to ensure you've also got the non-www version of the website covered in that same redirect too, to avoid any chains. You'll see what I mean if you run your domain through this > https://varvy.com/tools/redirects/
Depending on what stack you're using, here are the two guides, one for htaccess and one for IIS:
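To give a rough idea, an htaccess version might look something like the below (assuming Apache with mod_rewrite, with www.example.com standing in for your own domain):

```apache
RewriteEngine On

# Send any http or non-www request straight to https://www.example.com in one hop
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```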
You shouldn't need to, but just to be on the safe side, I would also add canonical tags to the http pages, pointing to their https equivalent prior to putting the server level redirect in place. This is to ensure that you won't be causing yourself issues if the redirect fails for any reason. Details here:
Once you've got your redirection planned in, make sure you set up a Google Search Console property for the https version to ensure there are no crawl issues and to check that the http version of the site stops receiving traffic.
That should just about cover it!
Hope it all goes well,
Sean
Hi Daniel,
That does seem very odd!
There can be various different things at play here in my experience:
It could also be a problem with how you're handling hard 404 errors vs soft 404s - i.e. actual not-founds vs pages that don't function but the server is under the impression that they're fine.
Best of luck!
Sean
Hi guys,
We're noticing a few alternate hostnames for a website rearing their ugly heads in search results and I was wondering how everyone else handles them. For example, we've seen:
We're looking to ensure that these versions all canonicalise to their live page equivalent, and we're adding meta robots noindex, nofollow to all pages as an initial measure. Would you recommend a robots.txt crawler exclusion for these too?
All feedback welcome!
Cheers,
Sean
Hey there,
Personally, I'm not a fan of date subfolders, especially if they're split into three. It creates an abnormally deep URL structure that doesn't seem overly logical.
Google have stated that using 301 redirects no longer loses link equity (Check out the link below!) so you wouldn't be at risk of losing any link juice if you did go this way.
https://moz.com/blog/301-redirection-rules-for-seo
In terms of SEO advantage, I would say that putting your articles into 'themed' folders (think /blog/shoes as an example) would help as it would assist search engines in understanding your content, whilst including important keywords within the URL. Gianluca refers to these types of content pods as 'Topical Hubs' and his video is quite entertaining!:
https://moz.com/blog/topical-hubs-whiteboard-friday
Hope this helps,
Sean
Hey there,
In an ideal world, I would recommend keeping your NAP (Name, Address & Phone number) consistent with what you use everywhere else on the internet. This allows search engines (and users, for that matter) to have some degree of continuity between your business listings.
Moz Local is a decent tool for analysing your business listing and checking that your NAP is the same across the internet. It'll even highlight sites where this isn't the case so you can manually update them.
All the best,
Sean
To be honest, it all looks correct and that's the way I would have done it. If Google is currently not ranking the correct URL, it'll likely update once the 301 is taken into account on the next recrawl of the page.
It might be a factor in why rankings have dropped but it's likely to pick back up again when their index is updated. My advice is to hold tight and hope it all fixes itself soon.
All the best,
Sean
Reducing the number of pages that search engines need to crawl is definitely the right way to go, so yes, I would definitely get a uniform URL structure in place if possible - save that crawl budget!
It does sound like you're adopting a good approach to canonicals. There are a lot of sites out there that take the same approach with non-uniform URL structures such as the one you're using.
Don't suppose you could supply the URL so I can have a look?
I'm having no trouble accessing the site via the link you provided. Might be a caching issue on your browser.
Try clearing your browsing data and trying again.
Give Moz OSE a try. Run a link export on the domain and match back the DA and PA stats from the Moz export to the existing domains that have been disavowed. You can also map the spam scores back to them too. 5 minute job at most
To be honest, not a lot. Some people see slight increases, some people see slight decreases.
It's something Google has been quite savvy with since it encouraged everyone to make the upgrade so I wouldn't worry at all.
Hey there,
My advice would be to minimize those redirect chains as soon as you can, not just for potential SEO benefit but more to lessen server stress and speed up page load.
Interestingly, chained 301s don't lose equity in the eyes of a search engine now (see updates below) so it's interesting that you're seeing such a fluctuation.
https://moz.com/blog/301-redirection-rules-for-seo
For the .html to trailing slash pages, did you say that he did a page-to-page remap for all of them instead of putting a redirect rule in place to catch them all? That seems like a crazy thing to put in place! Your redirect file (htaccess or similar) must be enormous!
Hope that helps,
Sean
This is exactly my point - the view=quick version of the page needs a canonical pointing back to the proper version of the page. Yes, it's not a normal webpage, but that is how search engines are viewing it because it's missing the tag.
This will signify to search engines that the 'view=quick' version of the page is a duplicate of the normal page and that it should not be ranked within search results.
I've clicked on the link you mentioned above and there is no canonical on the page?
Hey there,
This is a funny issue that has been plaguing SEO for years - I find that the best fix is to ensure that canonicals are automatically added to query parameter versions of pages.
In this instance, I would ensure a canonical tag is added in the below format:
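(The below is just the general pattern - www.example.com and the path are placeholders for your own URLs.)

```html
<!-- Added to the <head> of the parameter version, e.g. /page/?view=quick -->
<link rel="canonical" href="https://www.example.com/page/" />
```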
This would ensure that search engines understand the main page you want to rank and it encourages them not to rank these query parameter pages.
All the best,
Sean
This might also have nothing to do with the HTTPS switchover - Moz's algorithm is designed to mimic that of Google and if they've made updates to it, it might just be a natural drop in DA.
Have a look at your competitors that are being tracked - have they also fallen in DA?
Hi Becky,
Authority-wise, they were product pages that were deep down in the IA of the site so they had very little PA anyway. And rankings-wise, they were very niche, branded products so we didn't have a great deal of competition anyway - rankings were always pretty good.
Even if you're selling something unbranded and generic, I would say that you should create the pages ASAP to get the ball rolling. You're not going to get penalised just for loading them on in the first instance, and when you come back later to optimise them, you should gain rankings.
Cheers,
Sean
Hey,
I used to work for a company where I was loading around 10,000 product SKUs into the system at a time. Rather than lifting the supplier descriptions of each item, we used Excel to template up the descriptions of the products based on the stock information we had - ranges, sizes, materials etc. Page titles, however, we left auto-generated by our CMS until we had a chance to go back and review them.
I would get them loaded on the site in bulk to get them indexed/available for product PPC bidding with the intention of coming back to them later to optimise - meta descriptions, page titles, product descriptions etc.
In the end I think we used a crowdsourcing platform like Elance to hire several freelancers to write the product descriptions uniquely. It was a great investment from an SEO and written copy perspective. Might be something to think about?
Cheers,
Sean
It would probably be valuable and pass on some SEO benefit, but if someone has purchased this domain purely to transfer the link equity of its inbound links to your website, then there's always a chance that this is going to be picked up on.
A small chance, but a chance nonetheless.
Hey Gill,
You can use multiple instances of the local business markup - it's no problem at all.
All the best,
Sean
Your site is constantly trying to load an image file which is killing the connection.
http://a1bail.naked.digital/wp-content/uploads/2015/05/dial.png
In the Chrome browser, press CTRL+SHIFT+J to open developer tools and click 'Console' - you'll see a bunch of red error messages that recur hundreds of times.
Sort out that image and try again.
All the best,
Sean
Hey there,
Welcome to the world of SEO!
The best place for beginners to start would be here: https://moz.com/beginners-guide-to-seo
Feel free to utilize the Moz Community for any questions you have and any help you need.
All the best,
Sean
Hey Becky,
Check out the Moz Algo update page for more detailed information about algo updates:
https://moz.com/google-algorithm-change
All the best,
Sean
Is the linking domain (not the link that links to that domain) relevant to your business?
i.e. is it an old domain that used to be active or is it just a dummy domain that was set up for the practice of link building?
Hi all,
I'm looking to implement sitelink search box mark-up in Google Tag Manager in JSON-LD format. This would be popped into the Custom HTML tag and would look a little something like:
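(For context, the single-parameter version below uses the standard schema.org WebSite/SearchAction pattern, with example.com as a placeholder domain.)

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "url": "https://www.example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://www.example.com/search?q={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>
```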
The above option is great if you have one query string for your search term, but what if you had a URL that triggered two query strings - for example:
https://www.example.com/search?q=searchterm&category=all
Would you need to amend the code to something like the below:
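(i.e. with the fixed category parameter hard-coded into the target and only the search term left as a placeholder - again, example.com is just a stand-in.)

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "url": "https://www.example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://www.example.com/search?q={search_term_string}&category=all",
    "query-input": "required name=search_term_string"
  }
}
</script>
```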
Any help would be much appreciated!
Cheers,
Sean
Hey Kevin,
The quick and easy answer to this is to use Google's Keyword Planner tool. Please note that the search volumes are now grouped (i.e. blue shoes for sale, buy blue shoe, buy blue shoes will all have a combined volume figure) so it's not 100% accurate but it's a good starting place.
Cheers,
Sean
Hey Helen,
I know that Penguin is now part of the core algorithm, but I wasn't under the impression it was automatically ignoring spammy links. I still think that Google requires disavow files to be created, most likely to contribute to its machine learning algorithm - i.e. you feed it the spammy links that have been built to your site, it then both ignores them and also learns to look out for them on other websites.
Where did you hear that it automatically ignores them?
Cheers,
Sean
Hey Ricky,
Honestly, it probably would pass on some value, but I would bet my bottom dollar that it would be short-lived, especially now that Google Penguin has become part of the core algorithm and it's likely that this will eventually be handled by machine learning.
My vote is to not.
All the best,
Sean
Hey Adrian,
In my opinion, none of the above are 'doorway pages' (sorry!).
A 'doorway' page is one of many pages that webmasters use as spam entry points to a website; they are typically targeted towards the same keyword with a variable attached - most commonly locations.
Here's an example - e-cigarette online retailers tend to have little to no local presence within an area, but they will build a list of pages to make it look as if they do:
In this example, they've even gone as far as to add in a list of these URLs (navigable from the footer) to ensure search engines crawl them - http://vapour-hut.co.uk/e-cig-town.php.
It's all a bit of a black hat tactic to grasp a whole bunch of search traffic that they otherwise shouldn't have. In your first example, the two pages for 'trumpet' and 'piano lessons', although similar, are targeting vastly different searcher intents which are relevant to the originating website - ergo they are not doorway pages.
I hope that makes sense
All the best,
Sean
Hey Miriam,
Whatever happens, I would LOVE Google to roll out a more comprehensive way of exporting Google My Business data - the API is still massively limited and the manual process of exporting data for clicks, calls, website visits etc. is an absolute killer!
Referring to your prediction number 3 on voice search, I'm thinking there's going to be way more of a focus on geolocation and we're going to see another huge rise in 'near me' searches.
All the best,
Sean
Hey Kirupa,
Short answer is that you're all good. The canonical is correct.
All the best,
Sean
Hi there,
The first search is for your exact brand name, meaning that there's a huge likelihood of that particular search being for your brand and nothing else. Because of this, Google can figure out that the searcher's intention is to find you, and will show more of your website in first position.
The second search for 'cars ireland' is a lot more generic and might be searched by users that aren't familiar with your brand. For this reason, Google wouldn't show your organic sitelinks because you're unlikely to be the best result for the majority of these searches.
I hope that helps,
Sean
Hey there,
I would say the easiest way to do this would be through Google Tag Manager. If you can put the container on all pages of your website, it has a bunch of support for adding 3rd party tags (such as Facebook, Twitter etc.) within the interface and you can add your tag and deploy the container in seconds.
All the best,
Sean
Hey there,
Put your website URL through Moz Open Site Explorer and pull out all backlinks pointing to any page on your domain. Moz has its own metric known as 'Spam Score' that will rate how likely each link is to be spam. This should help you on your way:
https://moz.com/blog/spam-score-mozs-new-metric-to-measure-penalization-risk
I hope that helps!
Sean
Hi there,
I would check your server logs for pages that have been serving up 404 errors when accessed via Google Search.
You can then add in the necessary 301s for these and you'll be good as new!
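For example, assuming an Apache server and an .htaccess file (the paths below are placeholders), a single page-to-page redirect would look something like:

```apache
# Redirect an old URL that now 404s to its closest live equivalent
Redirect 301 /old-page/ https://www.example.com/new-page/
```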
Alternatively, if you want a catch-all approach, you can use Screaming Frog to identify the 404 pages on the site. If you have a URL export from your old site, run these URLs through using the list upload mode in Screaming Frog to check their response codes individually.
All the best,
Sean