Posts made by willcritchlow
-
RE: Site structure: Any issues with 404'd parent folders?
Yeah. I'd just leave it as a 404 in that case.
-
RE: Site structure: Any issues with 404'd parent folders?
PS - if you're worried about the crawling, you could always block it in robots.txt if you really wanted (but unless it's a huge site I wouldn't bother). Note - if you do go this route, do it carefully so as not to block all contents of the folder at the same time!
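For what it's worth, here's a sketch of what that careful blocking could look like, using the /famous-dogs/ example from the question - the $ end-of-URL marker is supported in Google's robots.txt matching:

```
User-agent: *
# Blocks only the bare folder URL - its contents remain crawlable
Disallow: /famous-dogs/$
```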
-
RE: Site structure: Any issues with 404'd parent folders?
The short answer is that there should be no harm going with your proposed approach.
Longer version: I believe there are cases where Google has tried to crawl a directory like "/famous-dogs/" in your example purely because it appears as a sub-folder in the paths of other pages even though there are not any direct links to it. But even if it does crawl it, if you don't have or intend to have a page there, a 404 is a perfectly valid response.
In general, while there could be a case that it's worth creating a "/famous-dogs/" page if there is search demand you can fulfil, until or unless you do, there is no harm in it returning a 404 response.
-
RE: How important is the file extension in the URL for images?
In theory, there should be no difference - the canonical header should mean that Google treats the inclusion of /images/123456 as exactly the same as including /images/golden-retriever.
It is slightly messier, though, so if it were easy I'd go down the route of only ever using the /golden-retriever version - but if that's difficult, the canonical approach is theoretically equivalent and should be fine.
-
RE: How important is the file extension in the URL for images?
Hi James. I've responded with what I believe is a correct answer to MarathonRunner's question. There are a few inaccuracies in your responses to this thread, as pointed out by others below - please target your future responses to areas where you are confident that you are correct and helpful. Many thanks.
-
RE: How important is the file extension in the URL for images?
@MarathonRunner - you are correct in your inline responses - it's totally valid to serve an image (or other filetype) without an extension, with its type identified by the Content-Type header. Sorry that you've had a less-than-helpful experience here so far.
To answer your original questions:
- From an SEO perspective, there is no need that I know of for your images to have a file extension - the Content-Type header should be fine
- However, I have no reason to think that a filename in the Content-Disposition header will be treated as a ranking signal - what you are describing is a rare use-case and I haven't seen any evidence that search engines would recognise it as the "real" filename
If you can't always refer to the image by its keyword-rich filename, then could you:
- Serve it as you propose (though without the Content-Disposition filename)
- Serve a rel="canonical" link in the HTTP Link header, pointing to a keyword-rich filename (https://example.com/images/golden-retriever in your example)
- Also serve the image on that URL
This only helps if you are able to serve the image on the /images/golden-retriever path, but need to have it available at /images/123456 for inclusion in your own HTML templates.
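To illustrate, the response for the /images/123456 URL might carry headers along these lines (using the hypothetical URLs from your question):

```
HTTP/1.1 200 OK
Content-Type: image/jpeg
Link: <https://example.com/images/golden-retriever>; rel="canonical"
```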
I hope that helps.
-
RE: Keyword Rich Domain
Hi Jacob,
Experts have been writing about the declining importance of exact match domains for many years (those two articles are both from 2012).
In general, if you already had a setup like the one you are proposing, there's a good chance we would be recommending consolidating the domains.
You might also be interested in this Whiteboard Friday.
Good luck - hope something there helps.
-
RE: Google inaccurate results: Common or error?
Google is incredibly good at entity detection (i.e. figuring out when someone is searching for a "thing"). When there is a canonical right answer for that thing (e.g. the company website in the case of a company), it will often rank very well despite not necessarily having any of the "traditional" ranking factors in place.
Typically, when the search is unambiguously for the thing in question, the company will rank #1 even if there are much stronger websites and pages about that company. You will rarely find a company outranked for its own name when that name is sufficiently distinguished from other entities, even if it's a really small company and there are (for example) major media stories about it.
There are a variety of other factors - like query sequences (e.g. a user searching [seo] followed by [seo trucks] in your example) that Google can use to associate more specific sites with more general queries as well.
Most importantly - I'm not sure there's a great deal you can do about it, nor any general lessons you can apply to your own efforts in this market / keyword space - so I wouldn't spend too long worrying about it!
-
RE: Google Indexed a version of my site w/ MX record subdomain
You appear to have the MX sub-domain also set up as an A record.
If you're on a Mac or Linux machine, you can run the command: host aspmx3.googlemail.com.sullivansolarpower.com
At the moment, this returns "aspmx3.googlemail.com.sullivansolarpower.com has address 72.10.48.198", whereas it should return "not found".
I think you want to delete the A record (though check your email provider's documentation first) - you should only need the MX records.
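Once the A record is removed, you could verify things look right with something like this (the -t flag asks host for a specific record type):

```
host -t mx sullivansolarpower.com
# should still list your Google MX records

host aspmx3.googlemail.com.sullivansolarpower.com
# should now return "not found"
```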
You've done the right thing by setting up the redirect - which should mean that the pages drop out of the index and those links disappear. (Note that there is also an HTTPS error on the aspmx3 sub-domain - but given that you don't actually want it, I don't suppose that matters that much.)
Hope that helps.
-
RE: Top-10 ranked site dropping in/out of Google index?
Hi hoosteeno,
It's obviously tricky to diagnose specific technical issues in the abstract, without looking at the specific site, but here are some resources and ideas that might help:
- Rand recently talked about ranking fluctuations on Whiteboard Friday - that'll be worth checking out
- Major site redesigns can cause quite a bit of fluctuation, but you don't want to wait around to find out weeks or months later that there is a technical issue, so I would suggest working through this checklist
- It's worth checking whether you have accidentally caused a load of new on-site duplicate content with the recent work - that's one other thing that could cause weird in / out fluctuations
Hope something there helps. Good luck!
-
RE: Google suddenly indexing 1,000 fewer pages. Why?
Off the top of my head, some things to check (there's a quick spot-check sketch after these lists):
- noindex tags deployed from staging during the redesign
- broken rel=canonical / hreflang
- update to robots.txt
- broken status codes (are those pages still 200 OK status?)
- site speed - has the performance of the site as a whole taken a hit? It needn't necessarily be about those pages specifically, but could be about the site health as a whole
- changes to XML sitemaps
Some other harder to check / track down causes:
- changes to internal link architecture
- removal of pages that were the target of powerful inbound links
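If it's useful, here's a minimal sketch of how you might spot-check the first couple of items (status codes and stray noindex directives) across a sample of URLs. The URLs are placeholders, the string match is deliberately crude, and it assumes the third-party requests library is installed:

```python
# Flag non-200 responses and possible noindex directives for a URL sample
import requests

urls = [
    "https://example.com/page-1",  # replace with your own sample URLs
    "https://example.com/page-2",
]

for url in urls:
    resp = requests.get(url, allow_redirects=False, timeout=10)
    header_noindex = "noindex" in resp.headers.get("X-Robots-Tag", "").lower()
    body_noindex = "noindex" in resp.text.lower()  # crude: may match false positives
    flag = "NOINDEX?" if (header_noindex or body_noindex) else "ok"
    print(url, resp.status_code, flag)
```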
Hope something there helps - good luck!
-
RE: Case study re-directing one site into another?
Unfortunately, most of the largest examples are not shared publicly. You could go looking for external data (e.g. SEMrush / Searchmetrics / SimilarWeb) around the time of big / well-known mergers. There have been a few over the years - but I don't know off-hand what historical data is available. The biggest I remember being interested in was when Adobe bought Macromedia back in the day (but there won't be historical data going back that far, and anyway everything has changed since then).
Aside from "outside looking in" approaches, the biggest public case study I could find is this one.
Hope something there helps.
-
RE: Thoughts on RankScience?
Hey Logan. Just saw this - sorry for the v slow reply.
As you might know, we (Distilled) are working on A/B testing as well (I wrote a bit more about it on Moz here).
From our perspective, as a well established agency, we have built our tool with agencies and experts in mind. We see it as a tool for experts to use (our consultants, marketers on the client side, and other agencies). I wrote a bit more about how split testing is changing our consulting work here - but in short I agree with you that consulting skills and client management skills are going nowhere.
As for Google's view on the subject, any tool can be used for good or bad, but we believe that if you approach it the right way, this kind of testing is wholly aligned with Google's goals. Our testing helps build better websites that are more closely aligned with searcher intent, so we think the general approach forms a great part of a balanced search strategy.
-
RE: Site appears then disappears from Google
In terms of why things might have changed - if nothing has changed technically on the site, then the most likely reason is simply that one of the sequence of Google algorithm updates targeting quality has impacted the site: https://moz.com/google-algorithm-change
I wouldn't spend too long trying to work out why it might have changed - and instead would put the energy into improving what you can.
Good luck!
-
RE: How to stop some continuous attacks on our website
Hi Manish,
There is some confusion in your question:
- If you have inbound links that concern you, you can disavow at the domain level once (see the example below) and you should never need to repeat that
- I'm not clear what you mean by "redirecting to our Sitemap" - having a redirect from their site to yours is unlikely to hurt you, especially if you have disavowed the domains
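For reference, a domain-level entry in the disavow file looks like this (the domain name is a placeholder):

```
# Disavow every link from this domain
domain:spammy-links-example.com
```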
Thanks - good luck.
-
RE: Does Bing fully support SNI yet?
Hi Simon,
It's not looking great. I don't have any inside information from Bing / Microsoft, but there seem to be SNI-hosted pages that are still not being indexed correctly. See, for example, this page: https://www.mnot.net/blog/2014/05/09/if_you_can_read_this_youre_sniing - which is nowhere to be found in Bing's index.
A site: search on Bing finds two posts from the mnot site (out of hundreds), so SNI support appears likely to still be flaky - but I haven't gone down the rabbit hole of looking for other reasons that site may not be indexed, nor checked that it actually is using SNI. I also don't have an SNI site to test things on, so that's the best I have for you, I'm afraid.
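If you do want to check whether a given site relies on SNI, one approach (assuming you have OpenSSL available) is to compare the certificate returned with and without the server name in the handshake:

```
# With SNI: the hostname is sent in the TLS handshake
openssl s_client -connect www.mnot.net:443 -servername www.mnot.net

# Without SNI: a different / default certificate (or a handshake
# failure) suggests the site depends on SNI
openssl s_client -connect www.mnot.net:443
```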
The only other link I found that may be interesting is this page claiming that SNI is supported by Bing: https://www.ssllabs.com/ssltest/viewClient.html?name=BingPreview&version=Jan%202015
Sorry that's not more useful.
You could reach out to the author of that SEL post (Ben Goodsell) and see if he's seen any changes since he wrote it.
-
RE: Retina Sites
Hi Stephan,
Before you spend too long on this, I'd make sure it's the best use of your time - regular images do work on retina displays and so you may have higher priorities.
In many ways (simplicity, maintainability, design) the best route is to use retina-ready images throughout your site and serve them up to non-retina users as well. This typically results in better images for all and doesn't have to impact filesize too much (see this article about compressing larger images).
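As a rough sketch of that approach (filenames and dimensions here are made up): export the image at twice the pixel dimensions it will be displayed at, and let the browser scale it down:

```html
<!-- A 1600x1200 source image displayed in an 800x600 slot: crisp on
     retina screens, simply downscaled on regular ones -->
<img src="/images/hero.jpg" width="800" height="600" alt="Product photo">
```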
The downside is that for certain kinds of images, the larger image will still have a much larger filesize and this can be a major speed issue - especially for those on mobile connections.
Unless you have evidence that one or other effect (fuzzy images on retina or slow loading) is causing you problems, I would personally take an "if it ain't broke" approach. The complexity of maintaining two sets of images is a step too far for most websites in my opinion, so I would tend to stick with regular images until high-resolution displays are significant among your users, and plan to switch to high-resolution images for everyone at that point.
I hope that helps.
-
RE: Complex URL Migration
Hi Conrad,
What a tricky situation. Ultimately, these kinds of issues are hard to call perfectly because it's never pure search considerations in play and, especially with each business being different, it's impossible to be certain how search engines will treat you.
With those caveats in mind, here are my thoughts:
Question 1
Your thinking is solid. Whether it is the right call or not is impossible to know (even in hindsight) because there are simply too many moving parts. Nonetheless, I think you have sensibly weighed up the pros and cons and made the decision with open eyes. Just for completeness, I believe that point #2 is only a small benefit, if any (and probably a declining one), but the only part I'd really challenge you on is #3. I would personally only go down this route if the company truly is a specialist in each destination. If that is true, then great (and they likely have specialist country managers who can push forward the marketing of each site). If it's not really true and you're more just "seeking the perception" that it's true, then I might stick with the benefits of an integrated site.
Question 2
Errant 404s are a nasty and annoying problem precisely because errors do not necessarily undo quickly. I would be prepared to wait 6-8 weeks to see a recovery. You need to bear in mind, of course, that the drop could be associated with the downsides you identified in #1 (lower aggregate domain authority etc) and so you may not see a recovery from the 404s specifically. If you haven't seen a recovery after 8-10 weeks, I'd believe this was the "new normal" and would be looking at growth from there rather than "recovery".
Question 3
It's impossible to be sure. The number of "reversed migrations" that any of us have seen is tiny, and they're all different, so I'm afraid that your guess is as good as mine. If it turns out that improvement isn't on the horizon, then I might be tempted, but I think my approach would be to stick with the decision if you think it's the right one - see my comments on question 1 above. I'd change (back) only if you think the benefits you expected haven't come to pass (e.g. has conversion rate increased on dedicated sites versus how it was on an integrated one?) and the balance of benefit has shifted.
I hope that helps.
-
RE: How to count number of app's installations for users who install app from https://play.google.com?
While I don't have first-hand experience of this issue, I've done some searching around and found some possibly useful links:
https://stackoverflow.com/questions/10033313/android-google-play-referral-tracking-limitations
-
RE: How much dev knowledge as an seo do I need to know in order to make small changes on websites?
Thanks to those giving a shout out to DistilledU (which includes the basics of HTML but no real programming).
If you are keen to learn to code, I'd definitely endorse the Treehouse and Codecademy recommendations given here.
It's great to see people wanting to learn to code. You may find some of my presentations on the topic helpful as well.
Good luck!
-
RE: Visual Website Optimizer vs. Optimizely vs. Conductrics
I meant "in anger" - it's just short-hand for "haven't used it on anything live". I've played around with demos etc. but haven't used it personally on live projects.
I believe they're doing what's called bandit optimization in the statistical literature. Fascinating area of study. I like it from a mathematical perspective - though I prefer declaring a specific winner and then implementing that version rather than always having the split-test software running and serving up variations.
It feels tidier, easier to maintain and results in a faster website...
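For anyone curious, here's a minimal epsilon-greedy sketch of the bandit idea (illustrative only - not how any particular vendor implements it): mostly serve the best-performing variation, but keep exploring the others a small fraction of the time.

```python
import random

counts = {"A": 0, "B": 0}        # times each variation has been served
rewards = {"A": 0.0, "B": 0.0}   # conversions observed per variation
EPSILON = 0.1                    # fraction of traffic used for exploration

def choose_variation():
    # Explore occasionally so we keep learning about all variations
    if random.random() < EPSILON:
        return random.choice(list(counts))
    # Otherwise exploit: serve the variation with the best observed rate
    return max(counts, key=lambda v: rewards[v] / counts[v] if counts[v] else 0.0)

def record(variation, converted):
    counts[variation] += 1
    rewards[variation] += 1.0 if converted else 0.0
```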
-
RE: Visual Website Optimizer vs. Optimizely vs. Conductrics
I haven't used Conductrics in anger, but have recently started using Optimizely more - I really like the fact that it's all jQuery based. The big benefits of that for us are:
- Super easy point and click interface for non-technical writers to make changes
- More advanced (edit jQuery code) options for more involved tests
- Not limited to page snapshots (as I believe VWO is) - this was something that caused us issues with our XSS protection scripts and meant we couldn't use VWO effectively on distilled.net
Hope that helps - sorry I haven't got comprehensive comparison data. Optimizely has a free trial though so you can test it out for yourself (really quick to get up and running).
-
RE: Does 'jump to' navigation work with a hidden div?
The best I can think of would be to link to the anchors from the page the "level" above.
In other words, on a category page (or equivalent), display links to the page plus its named anchors (much like Google's sitelinks, in fact). It's impossible to judge the relative weight of links from within the page versus links from elsewhere on the site without inside knowledge, but I would prefer this to hidden links.
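For example, the category page could link to both the page and its named anchors (URLs and anchor names here are placeholders):

```html
<a href="/guides/dog-training/">Dog training guide</a>
<a href="/guides/dog-training/#puppies">Training puppies</a>
<a href="/guides/dog-training/#older-dogs">Training older dogs</a>
```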
-
RE: Does 'jump to' navigation work with a hidden div?
Firstly, Google adding links to your in-page anchors isn't automatic, no matter what you do.
It's hard to say for sure whether placing hidden links to the named anchors will work in your specific case - but I would say that if it does work, I'd view it as a short-term solution and probably more risky than I'd like to see for limited reward.
Why not actually link to the anchors? If you think that people might want to jump direct to them from the search results, mightn't people want to navigate to them when they're on your site as well?
There is essentially never a good reason for hiding information that you want Google to find - it should be there for the users as well.
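For example, a small visible table of contents achieves the same thing for users and crawlers alike (section names here are placeholders):

```html
<nav>
  <a href="#overview">Overview</a>
  <a href="#pricing">Pricing</a>
  <a href="#faqs">FAQs</a>
</nav>
<!-- ...page content... -->
<h2 id="overview">Overview</h2>
```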
-
RE: Does 'jump to' navigation work with a hidden div?
Hi Michelle,
What do you mean by "work"? Are you intending to have a way of exposing this hidden div (in a drop-down or similar)?
One of the most common uses of jump-to navigation is for screen readers for the visually impaired. I imagine that a hidden div could work well for that, as screen readers typically ignore CSS styling, but it would need testing in the specific readers.
Happy to dig into this further if you have more info about your plans.