Not sure how we're blocking homepage in robots.txt; meta description not shown
-
Hi folks!
We had a question come in from a client who needs assistance with their robots.txt file.
Metadata for their homepage and select other pages isn't appearing in SERPs. Instead they get the usual message "A description for this result is not available because of this site's robots.txt – learn more".
At first glance, we're not seeing the homepage or these other pages as being blocked by their robots.txt file: http://www.t2tea.com/robots.txt.
Does anyone see what we can't? Any thoughts are massively appreciated!
P.S. They used wildcards to ensure the rules were applied for all locale subdirectories, e.g. /en/au/, /en/us/, etc.
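As an aside, if you want to sanity-check how Google-style wildcards would match those locale paths, here is a minimal sketch. The pattern below is illustrative only, not copied from their actual file; Google treats `*` as matching any run of characters and a trailing `$` as anchoring the end of the URL path:

```python
import re

def robots_pattern_matches(pattern: str, path: str) -> bool:
    """Check whether a robots.txt Disallow pattern matches a URL path,
    using Google's extensions: '*' matches any run of characters and a
    trailing '$' anchors the match at the end of the path."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = "^"
    for ch in pattern:
        regex += ".*" if ch == "*" else re.escape(ch)
    if anchored:
        regex += "$"
    return re.match(regex, path) is not None

# A hypothetical wildcard rule covering every locale subdirectory:
print(robots_pattern_matches("/*/on/demandware.store/",
                             "/en/au/on/demandware.store/Cart"))  # True
print(robots_pattern_matches("/*/on/demandware.store/",
                             "/en/au/cart"))                      # False
```

Without the leading wildcard, a plain `Disallow: /on/demandware.store/` would only match that path directly off the root, which matters for the discussion below.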
-
I can see the meta descriptions in SERPs. Do you have any sample pages where they don't show up?
-
According to Screaming Frog, line 40 of the robots.txt, which blocks http://www.t2tea.com/on/demandware.store/, is the rule causing your issue.
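For what it's worth, you can reproduce that check locally with Python's `urllib.robotparser`, pasting the relevant rule inline. Note that Python's parser does plain prefix matching (no Google `*` wildcard support), which is fine for this literal rule; the deep Demandware path below is a made-up example, not one of their real URLs:

```python
from urllib.robotparser import RobotFileParser

# The Disallow rule discussed in this thread, parsed inline rather
# than fetched from the live site.
rules = """\
User-agent: *
Disallow: /on/demandware.store/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# A hypothetical storefront URL under the blocked path:
print(rp.can_fetch("Googlebot",
                   "http://www.t2tea.com/on/demandware.store/Sites-Site/en_AU/Home-Show"))  # False
# A user-friendly locale URL outside the blocked path:
print(rp.can_fetch("Googlebot", "http://www.t2tea.com/en/au/"))  # True
```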
-
Hi,
It looks like they are 302 redirecting the homepage to internal language/region-specific storefronts, and those redirects go through an internal URL structure containing /on/demandware.store/, which is indeed blocked in the robots.txt. Those internal URLs are then 301 redirected to the user-friendly URL you see in the browser, so there is a potentially odd redirect chain going on. The blocked intermediate URLs are probably the immediate issue, although the 302 redirects and the region/language redirect logic may be adding further complication on top of that.
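To make the chain concrete, it can be sketched as data. Everything in the mapping below is a hypothetical reconstruction of the behaviour described above (the internal Demandware path is invented), not their actual URLs:

```python
def resolve_chain(url, redirects, max_hops=10):
    """Follow a mapping of URL -> (status, target) until reaching a
    URL with no redirect; return the full list of hops."""
    chain = [url]
    while url in redirects and len(chain) <= max_hops:
        status, url = redirects[url]
        chain.append(f"{status} -> {url}")
    return chain

# Hypothetical chain matching the behaviour described above:
redirects = {
    "http://www.t2tea.com/":
        (302, "http://www.t2tea.com/on/demandware.store/Sites-Site/en_AU/Home-Show"),
    "http://www.t2tea.com/on/demandware.store/Sites-Site/en_AU/Home-Show":
        (301, "http://www.t2tea.com/en/au/"),
}
for hop in resolve_chain("http://www.t2tea.com/", redirects):
    print(hop)
```

The middle hop is the one the robots.txt blocks, so Googlebot never gets to follow the 301 to the clean locale URL.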
-
The best way to test this is to head into Search Console and use the robots.txt Tester. If a URL is being blocked, or you suspect it is, just enter that URL into the tester and it will show you which rule is responsible.
https://support.google.com/webmasters/answer/6062598?hl=en
-Andy