Best posts made by LoganRay
-
RE: Google only crawling a small percentage of the sitemap
No problem! I meant to mention this in my first comment, but I also noticed that there's no robots.txt file in place. That's obviously not going to help your indexation problem too much, but it's nonetheless something you should know about.
-
RE: Similar product descriptions but with different urls
Hi Jonas,
The plan you've laid out - to have product variants for widths - is exactly how you should be handling this situation. You're likely to see improvements from this change: getting rid of unnecessary dupes helps your crawl efficiency and shows that you noticed the issue and handled it accordingly.
Also, be sure to update your XML sitemap and resubmit in Search Console to ensure Google notices your change ASAP.
Good luck!
-
RE: Domain Forwarding and Rankings
Hi Melissa,
You actually won't see any benefit simply from having keyword-rich domains redirecting to your site. This is a tactic people used in the early days of SEO; Google has since flagged it as spam, and its algorithm accounts for it now.
For improving your rankings, there are literally hundreds of levers you can pull. Since you're doing local SEO, I highly recommend spending time reading everything you can on the topic. A good starting place is the Moz Local Learning Center.
-
RE: Duplicate Content on a Page Due to Responsive Version
Hi,
That sounds like a definite candidate for duplicate content issues. A true responsive design has only one set of page elements coded, which then rearrange based on screen size - that's what makes responsive the optimal solution for SEO. Search engines only have to read one code set per page, and they know it will render for most devices. In your case, I believe search engines will view the setup as a tactic to game the system: one version of the content is essentially cloaked while the other is displayed.
-
RE: Google Search Console Block
No problem, good luck! Moz has plenty of great resources to help you along the way. Be sure to check out the Beginner's Guide to SEO.
-
RE: Have You 301 Redirected Domain A to Domain B ?
EGOL,
I was part of a domain migration like this about a year and a half ago. In this case, Domain A was very old - we're talking late '90s - and Domain B was brand new. Needless to say, the results were not optimal.
-
RE: Duplicate content across a number of websites.
Hi,
You've got a really big mess on your hands, IMO. Search engines absolutely _do_ penalize duplicate content, and it sounds like you have a ton of it on the existing sites, with plans to create even more.
What types of locations are the 25 different sites going to be targeting? All within the same country, or one for each of 25 different countries? The answer to this question will drive any further recommendations.
-
RE: HTTPS Migration & Preserving Link Equity
Hi Joe,
Specifically addressing your first question: 301 redirects that take URLs to HTTPS won't cause the loss of link equity that a typical 301 redirect could. This information came out of a Q&A with John Mueller, which you can read more about here.
For your second question, that largely depends on your server, but you should be able to do this with a single rule, as opposed to redirects on a one-to-one basis.
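To give you an idea, here's a minimal sketch of what that single rule could look like on an Apache server (assuming mod_rewrite is available - your setup may differ):
```
# .htaccess - send every HTTP request to its HTTPS equivalent with one rule
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```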
-
RE: Multiple redirects hurt?
Hi,
You'll want to avoid redirect chains whenever possible. Every time a URL request hits the server, it checks your entire list of redirects; if that happens multiple times for one URL, you're slowing down page speed. This matters even more for mobile load times, since mobile devices don't have the resources of a desktop/laptop computer - but it's something you should avoid regardless of your mobile traffic percentage.
You can easily identify these using Screaming Frog: under the Reports menu drop-down, the second option is Redirect Chains. This report exports an Excel doc showing all of the URLs you have with multi-step redirects.
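To illustrate the fix (with made-up URLs), the idea is to point every old URL straight at the final destination instead of letting it hop through an intermediate redirect. In Apache terms, a sketch:
```
# Before: a chain - /old-page/ redirects to /interim-page/, which redirects to /final-page/
# Redirect 301 /old-page/ /interim-page/
# Redirect 301 /interim-page/ /final-page/

# After: both old URLs point directly at the final destination, one hop each
Redirect 301 /old-page/ /final-page/
Redirect 301 /interim-page/ /final-page/
```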
-
RE: Have You 301 Redirected Domain A to Domain B ?
I think you're absolutely right about link assets being the only value that would pass through. Essentially, everything else (content, on-page assets, etc.) is all gone, so it stands to reason that Domain B wouldn't get any value from those elements.
I should've mentioned that Domain B was previously not an active domain. It was purchased and then immediately replaced Domain A. So it's a slightly different situation than you asked about, but nonetheless, a situation that does happen and should be considered anytime domain migrations are on the table.
-
RE: What place does plural versions of keywords have in keyword research?
Hi Alex,
Google views plurals as synonymous, so those 2 keywords will (almost always) have the same rank. Nonetheless, I still like to include them in my keyword monitoring, just to have a more granular level of visibility on what keywords the site is ranking for.
-
RE: Self referencing canonicals AND duplicate URLs. Have I set them up correctly?
As Yossi said, configuring parameters in Search Console should help - _but_, that's only going to help you out in Google.
Adding a disallow for those parameters in the robots file will help solve the problem in other search engines.
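As a sketch, the robots.txt rules might look something like this - I'm using a hypothetical ?sort= parameter, so swap in whatever parameters your site actually generates (the * wildcard is honored by the major search engines, though it isn't part of the original robots.txt standard):
```
User-agent: *
# Block crawling of any URL carrying the (hypothetical) sort parameter
Disallow: /*?sort=
Disallow: /*&sort=
```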
The thin content is definitely contributing as well. Moz identifies dupes based on a source code match of 90% or higher between any two pages. When all of your template code is the same across every page, thin content isn't enough to differentiate the source code.
I also noticed on one of those screenshots that you've got a dupe of /shop/necklaces/ and /shop/necklaces/necklace/. If you can, I recommend removing that second one with the doubled-up 'necklace' folders; it's going to cause a lot of dupes as well.
-
RE: Needs clarification: How "Disallow: /" works?
The directive that is literally "Disallow: /" will prevent crawling of all pages on your site, since technically, all page paths begin with a slash. Robots.txt files can only live at the root of a site (not in a subdirectory), so if you want to disallow a folder, you'll need to specify it with a directive like "Disallow: /folder-name/".
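To make the contrast concrete, here's a minimal robots.txt sketch showing both directives (the folder name is hypothetical, and you'd only ever use one or the other):
```
User-agent: *
# Blocks crawling of the entire site, since every path begins with /
Disallow: /

# Blocks crawling of a single folder instead (use in place of the line above)
# Disallow: /folder-name/
```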
-
RE: How much SEO damage would it do having a subdomain site rather directory site?
There are a few good use-cases for subdomains; higher education sites use them frequently, and often have organizational need for it. Another instance I can think of is international websites that want to have different sites for each country in which they operate. This can help each division operate a site without having to go through HQ every time they need an update.
That being said, for the typical website, with SEO considerations, I always favor subdirectories for the simple reason of keeping all domain metrics (links, authority, etc.) together, rather than splitting them across subdomains.
-
RE: Does it make sense to pursue long-tail keywords with low search volume
Yes, it definitely is beneficial to attack long-tail keywords in your content strategy. This chart is my go-to point of reference any time this topic comes up. It very clearly illustrates the need for long-tail targeting in a comprehensive SEO strategy. Not to mention, your competitors are most likely NOT putting the time and effort into it, so you can get some pretty big wins in that regard.
-
RE: What is the process for allowing someone to publish a blog post on another site? (duplicate content issue?)
Hi Donald,
On the site that borrowed the post from the original publisher, add a canonical tag that points to the URL of the original blog post. This is the ideal way to handle it, and it's how news sites should be crediting the original publisher when syndicating articles.
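As a quick sketch with hypothetical URLs, the republished copy would carry this in its head, pointing back at the original:
```html
<!-- On the syndicating site's copy of the post (hypothetical URL) -->
<link rel="canonical" href="https://www.original-publisher.com/blog/original-post/" />
```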
-
RE: Homepage not ranking for branded searches after Google penalty removal
Hi Brendan,
Just as it can take a while to rank a brand new domain, it will take some time to recover from the penalty. The best thing you could be doing is working to get some good links pointing to the homepage to help outweigh those junk links.
-
RE: Google Search Console - Indexed Pages
I'm seeing this same thing, within the same instance of GSC.
I believe the discrepancy between the Sitemap Report and Index Status Report is due to the Sitemap report being strictly based on the URLs submitted in your XML sitemap. The index status report seems to be more inclusive of URLs that don't exist in the XML sitemap - dynamically generated URLs, old 301'd URLs that haven't been dropped yet, paginated URLs, etc.
Regarding the discrepancy between the Index Status Report and the Site: command, I have no clue. I would expect those to be more similar, but I'm seeing similar differences to what you're seeing.
-
RE: Does it make sense to pursue long-tail keywords with low search volume
No problem!
Optimize for the long-tail keyword in question. Going more generic with your on-page elements and content is likely to dilute your other pages. You can use generic terms in content, but treat those as opportunities for internal linking, pointing generic keywords towards the page you're targeting for that particular generic term.
-
RE: I can't crawl the archive of this website with Screaming Frog
Try going to File > Default Config > Clear Default Configuration. This happens to me sometimes as well, since I've edited settings over time. Clearing it out and going back to the default settings is usually quicker than clicking through the settings to identify which one is causing the problem.
-
RE: Question about "sneaky" vs. non-sneaky redirects?
Is this strictly online advertising you're talking about? If so, that's just weird - I can't imagine a scenario in which I'd want links passing through a redirect. I wouldn't consider this sneaky or deceitful, but on their part it's unwise not to link straight to the resolving domain, and you should jump for joy that your competitor's links aren't carrying as much weight as they should.
-
RE: Solving pagination issues for e-commerce
Hi Joshua,
You will need all three of those tags to properly mark up your pagination - just not all on the same page.
Page 1 (page=1) should have a canonical to the base URL (no page=X) and a rel="next" pointing to page 2. Page 2 will have a rel="prev" pointing to the base URL, and a rel="next" for page 3. And so on.
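Sketched out with hypothetical URLs, the tags would look something like this:
```html
<!-- Page 1 (?page=1): canonical to the base URL, plus next for page 2 -->
<link rel="canonical" href="https://www.example.com/shoes/" />
<link rel="next" href="https://www.example.com/shoes/?page=2" />

<!-- Page 2 (?page=2): prev to the base URL, next for page 3 -->
<link rel="prev" href="https://www.example.com/shoes/" />
<link rel="next" href="https://www.example.com/shoes/?page=3" />
```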
Google says they don't index paginated URLs anymore, but I prefer to play it safe and implement these tags anyway.
Regarding this comment: "It's also my understanding that the search results should be noindexed as it does not provide much value as an entry point in search engines." There is some validity to this, but honestly, it's your preference. I lean on the side of preventing indexing of search results. I don't see much value in those pages being indexed, and if you're doing SEO properly, you're already providing solid entry points. Those pages will also use up a lot of your crawl budget, so that's something to consider too. Chances are, there are better sections of your site that you'd prefer bots spend their time on.
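If you do go the route of keeping search results out of the index, the usual approach is a meta robots tag on the search results template - a minimal sketch:
```html
<!-- On the internal search results template: stay out of the index, but still let bots follow links -->
<meta name="robots" content="noindex, follow" />
```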
-
RE: Google has indexed some of our old posts. What took so long and will we lose rank for their brevity?
Hi,
As long as that's not the bulk of your content, I don't see it being a problem. Thin content penalties are more common when short-form content is the majority of a site. I've mostly seen this with ecommerce sites where product detail pages make up about 95% of the page count and the product descriptions are thin or non-existent. It's hard to be viewed as authoritative or trustworthy when only 5% of your pages have a decent amount of content.
-
RE: HTTP URLs Still in Index
Thanks for your input Casey. I'm dealing with about 4,000 URLs, so I think I'll go with the patience method!
-
RE: 'SEO Footers'
The objection is that those links pass more authority/PR, and therefore the hesitation to remove them is that the SEO pages will lose authority. I know this isn't true, but I'm having a hard time getting others to come over to the light side.
-
RE: Dynamic XML Sitemap Generator
Hi Chris,
Screaming Frog has been my go-to XML sitemap generator for years. Plenty of customization options for exclusions and inclusions.
-
RE: Google has indexed some of our old posts. What took so long and will we lose rank for their brevity?
It depends on how accessible they are to search engines. If you've recently updated your sitemap, and those posts are on the new one, but weren't on the old one, that could cause it. New internal/external links pointing to those pages could have helped as well.
-
RE: Usage of keywords in URL
If having an informational URL as you've proposed is helpful for the user to understand the page, then by default, it's also going to be helpful for SEO. Google's goal is to provide the best results for people, so indirectly, it does help to have topic-indicating URLs.
-
RE: 'SEO Footers'
Thanks for that article, not quite the type of links I'm addressing here, but definitely some applicable nuggets of information there.
-
RE: 404 Error Complications
Hi Will,
It's hard to say without seeing the exact format of the broken and working URLs. When you click the link and get the working URL, do you paste that exact URL when entering it directly, or are you typing it in?
URI valet might help you troubleshoot this further. Or if you want to DM me the URL, I can take a look and see if I can identify the problem.
-
RE: Https & Google Updated Guidelines
Hi,
Since you're already using an SSL, you might as well apply it to the whole site. Here's a great article on all the necessary steps to take to ensure a smooth transition: https://moz.com/blog/seo-tips-https-ssl
I can't speak much to your second question, but my guess would be as Google continues to focus on overall site quality, this will play a role in rankings to some degree.
-
RE: Single page website vs Google
Hi Leszek,
Single-page websites don't do very well in search results. The primary reason is that there are too many topics discussed on one page; it's important to be able to optimize different pages for different topics. For example, a site that reviews running shoes could have pages optimized for different brands of running shoes, or different styles (trail running, road running, minimalist running shoes, etc.). If this site were a single-page site, you'd be diluting all those topics by putting them on one page.
There are cases where a single-page site can be used without hurting SEO efforts, but since most sites target a number of different topics, it's rare to see.
-
'SEO Footers'
We have an internal debate going on right now about the use of a link list of SEO pages in the footer.
My stance is that they serve no purpose to people (heatmaps consistently show near-zero activity), and therefore they shouldn't be used. I believe that if something on a website is user-facing, then it should also be beneficial to the user - not solely there for bots. There are much better ways to get bots to those pages, and for those people who didn't enter through an SEO page, internal linking where appropriate will be much more effective at getting them there.
However, I have some opposition to this theory and wanted to get some community feedback on the topic.
Anyone have thoughts, experience, or data to share on this subject?
-
RE: Screaming Frog returning both HTTP and HTTPS results...
Hi,
Found the source of your issue: this URL (https://www.aerlawgroup.com/sex-crimes.html) has a link pointing to it from an HTTP URL (http://www.aerlawgroup.com/sex-crimes/new-allegations-emerge-in-sexual-misconduct-case.html), which is probably how SF found all the other secure URLs.
All of your HTTPS pages return 200. If you want to go back to non-secure, those secure URLs will need to be redirected back to HTTP, the same way you probably redirected when you went secure.
Also, looks like you've still got a couple HTTPS pages indexed: https://www.google.com/search?q=site%3Aaerlawgroup.com+inurl%3Ahttps&oq=site%3Aaerlawgroup.com+inurl%3Ahttps&gs_l=serp.3...5577.15088.0.15655.24.24.0.0.0.0.150.1864.19j5.24.0....0...1.1.64.serp..3.0.0.bf5ZiosfHMI
-
RE: Wordpress Blog Integrated into eCommerce site - Should we use one xml sitemap or two?
Since your blog is on a subdomain, yes, you will need to set up a separate WMT profile for it. The blog will also need its own robots.txt and XML sitemap files, since technically speaking, subdomains are regarded as a different site.
-
RE: Keyword variations on a single page
Hi John,
I recommend grouping your keywords into similar topics and associating each group with either an existing URL on your site or, if the topic isn't currently discussed on your site, a proposed new URL. This process is called keyword mapping: you take each keyword you're tracking and pair it with a preferred URL. I start all of my SEO projects off with one - here's why:
- Provides a road map of what _existing_ content needs to be updated
- Lays out what new content I need to create
- Prevents me from diluting the emphasis of one keyword by addressing a very similar one on another page
- Helps me set priorities on which content should be updated/generated first (I aggregate search volume for all associated URLs and work in a descending order)
-
RE: 'SEO Footers'
Thanks for your feedback.
Glad to hear I'm not the only one dealing with this debate! Would you mind sharing any data you collect on your test once you have enough to be conclusive?
-
RE: Switching from HTTP to HTTPS: 301 redirect or keep both & rel canonical?
Hi Steven,
You'll definitely want to apply 301 redirects to any site that you move to HTTPS. For most sites, this can typically be done with a single redirect rule that essentially replaces http with https, so you won't have to comb through each URL and apply one-to-one redirects.
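For example, on an nginx server that single rule might be a sketch like this (hypothetical domain - Apache and other servers have their own equivalents):
```
# Catch-all server block: send every HTTP request to its HTTPS equivalent
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}
```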
No need to worry about losing link juice: Google views these types of 301s differently than a typical 301, and all authority will pass through them.
Canonical tags should also be applied; this will help search engines learn your new URL structure and ensure they index the new HTTPS URLs.
Cyrus Shepard wrote a great post with all the necessary steps for a secure migration - check it out here: https://moz.com/blog/seo-tips-https-ssl
Good luck!
-
RE: How Google's "Temporarily remove URLs" in search console works?
I'd recommend 301 redirecting the old version of the content to its new location on the new sub-domain. That's generally the quickest way to let search engines (and people) know you've relocated important content. Hiding URLs from Search Console is temporary only and not really intended for pointing search engines to relocated content.
-
RE: Change URL or use Canonicals and Redirects?
I knew it - that sounded like a Google A/B test protocol!
A good rule of thumb is to avoid changing URLs unless it's absolutely necessary. There's a lot going on with that URL in the background that Google knows about - internal and external links as I mentioned above, but also XML sitemaps and usage metrics. You don't want to point them elsewhere and have Google re-learn a new URL structure and step through a redirect just to get there.
Google has put more emphasis on UX in the last couple years, so improving the usability of this page, as you've done by A/B testing, is likely to benefit you in the long run.
-
RE: 'SEO Footers'
Thanks for your feedback.
I totally agree with all 3 of your points, especially the comment regarding better ways to tackle internal linking.
-
RE: Switching from HTTP to HTTPS: 301 redirect or keep both & rel canonical?
The best way to mitigate this problem is to update the destination URLs in your AdWords campaigns. You can do this in bulk relatively quickly using the AdWords Editor desktop application.
-
RE: Moz Point Swag
Commenting so I can see any response from a Mozzer, as I'm still waiting on my 500 pt. shirt.
-
RE: Duplicate title while setting canonical tag.
You'll definitely want to keep that canonical tag in place. Some tools don't recognize canonicals, so I wouldn't worry too much about duplicate notifications caused by parameters like that. If you noindex that page, it will apply to the root version of the URL, not strictly the parameterized version.
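For reference, the pattern here (with a hypothetical parameter) is that both versions of the URL serve the same canonical tag pointing at the clean URL:
```html
<!-- Served on both /product/ and /product/?color=blue (hypothetical parameter) -->
<link rel="canonical" href="https://www.example.com/product/" />
```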
-
RE: 'SEO Footers'
Of course, no problem! Maybe a comparison of before and after MozBar PA for a couple of the top performing SEO pages? Not sure if that's the best KPI for this test, but it's a rather difficult thing to measure...just throwing out some ideas on how I intend to measure when I'm able to run a similar test.
-
RE: Requesting Link
Hi Lauren,
This seems to be widespread across most news sites. I've reached out to many news sites before with the same request. The only theory I've been able to come up with is that they make money on the ads that plaster most news sites (even the big ones!), so if they link out to another site, they've just lost ad revenue.
*edit
Out of continued curiosity, I decided to see if I could corroborate my theory. While I couldn't do that (but really, they'd never admit it anyway), I did come across this article that points towards 'janky CMSs'.
-
RE: Problems with US site being prioritized in Google UK
Hi,
There are a couple things you can do to help Google serve up your preferred TLDs in each country.
First, you should identify (if you haven't already) the preferred geography in Search Console - see this link for more info on how to set that up: https://support.google.com/webmasters/answer/62399?hl=en
Next, you can add the appropriate hreflang tags to the pages on each of the different site versions. Moz has an extensive guide on how to do this here: https://moz.com/learn/seo/hreflang-tag
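As a quick sketch with hypothetical URLs, each page would reference both country versions of itself, including its own:
```html
<!-- In the <head> of both the US and UK versions of the page -->
<link rel="alternate" hreflang="en-us" href="https://www.example.com/page/" />
<link rel="alternate" hreflang="en-gb" href="https://www.example.co.uk/page/" />
```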
Hope that's helpful!
-
RE: Http urls on a new https website
Hi,
It's difficult to tell without seeing the site, but the likely problem is related to those canonical tags. A canonical tag that doesn't point to the preferred version of a URL can cause indexation problems, though it wouldn't by itself cause URLs to resolve incorrectly.
There could also be some pages that don't have the 301 redirect applied, leaving them to resolve at HTTP.
-
RE: Negative Keywords for SEO
That's the only method I know of to deter unqualified clicks. Interested to see if anyone else chimes in with some useful nuggets of info on this topic!