Not sure how we're blocking the homepage in robots.txt; meta description not shown
-
Hi folks!
We had a question come in from a client who needs assistance with their robots.txt file.
The meta descriptions for their homepage and select other pages aren't appearing in SERPs. Instead they get the usual message: "A description for this result is not available because of this site's robots.txt – learn more".
At first glance, we're not seeing the homepage or these other pages as being blocked by their robots.txt file: http://www.t2tea.com/robots.txt.
Does anyone see what we can't? Any thoughts are massively appreciated!
P.S. They used wildcards to ensure the rules were applied for all locale subdirectories, e.g. /en/au/, /en/us/, etc.
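For what it's worth, here's a quick local sanity check using Python's built-in robotparser. The Disallow rule and sample URLs below are illustrative, not the site's full live file; note also that this parser follows the original robots.txt spec and ignores Google-style wildcards, so the wildcard rules mentioned above still need Search Console's tester.

```python
from urllib.robotparser import RobotFileParser

# Illustrative rule only, not the full live robots.txt.
# Caveat: urllib.robotparser does plain prefix matching and treats
# Google-style wildcards (*) literally, so wildcard rules won't work here.
rules = [
    "User-agent: *",
    "Disallow: /checkout/",
]

rp = RobotFileParser()
rp.parse(rules)

# Hypothetical sample URLs; swap in the pages missing descriptions
for url in [
    "http://www.t2tea.com/",
    "http://www.t2tea.com/en/au/",
    "http://www.t2tea.com/checkout/basket",
]:
    print(url, "allowed:", rp.can_fetch("Googlebot", url))
```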
-
I can see the meta descriptions in SERPs. Do you have any sample pages where they don't show up?
-
According to Screaming Frog, the rule on line 40 of your robots.txt is the one causing the issue:
http://www.t2tea.com/on/demandware.store/
-
Hi,
It looks like they are 302 redirecting the homepage to internal language/region-specific storefronts, but doing so via an internal URL structure that includes /on/demandware.store/, which is indeed blocked in the robots.txt. Those URLs are then 301 redirected to the user-friendly URL you see in the browser, so there's a potentially odd redirect chain going on. The blocked URLs are probably the immediate issue, although the 302s and the region/language redirect logic may be adding complication on top of that.
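If you want to see that chain directly, one hop at a time, here's a rough standard-library Python sketch (the commented-out starting URL is just an example; nothing in the code is specific to their setup):

```python
import urllib.error
import urllib.parse
import urllib.request

class _NoRedirect(urllib.request.HTTPRedirectHandler):
    # Returning None makes urllib raise HTTPError on 3xx instead of following it
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def trace_redirects(url, max_hops=10):
    """Walk a redirect chain one hop at a time; returns a list of (status, url)."""
    opener = urllib.request.build_opener(_NoRedirect)
    hops = []
    for _ in range(max_hops):
        try:
            with opener.open(url) as resp:
                hops.append((resp.status, url))
            return hops
        except urllib.error.HTTPError as err:
            if err.code not in (301, 302, 303, 307, 308):
                raise
            hops.append((err.code, url))
            # Location may be relative, so resolve it against the current URL
            url = urllib.parse.urljoin(url, err.headers["Location"])
    return hops

# Example (live site, output depends on the current redirect setup):
# for status, url in trace_redirects("http://www.t2tea.com/"):
#     print(status, url)
```

A chain that goes 302 to a /on/demandware.store/ URL and then 301 to the pretty locale URL would show up as three hops here.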
-
The best way to test this is to head into Search Console and use the robots.txt tester. If a URL is being blocked, or you suspect it is, just add that URL to the tester and it will show you.
https://support.google.com/webmasters/answer/6062598?hl=en
-Andy
Related Questions
-
What happens to crawled URLs subsequently blocked by robots.txt?
We have a very large store with 278,146 individual product pages. Since these are all various sizes and packaging quantities of fewer than 200 product categories, my feeling is that Google would be better off making sure our category pages are indexed. I would like to block all product pages via robots.txt until we are sure all category pages are indexed, then unblock them. Our product pages rarely change and have no ratings or product reviews, so there is little reason for a search engine to revisit a product page. The sales team is afraid blocking a previously indexed product page will result in it being removed from the Google index, and would prefer to submit the categories by hand, 10 per day, via requested crawling. Which is the better practice?
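If you did go the robots.txt route, the rule itself is a one-liner (the /product/ path here is hypothetical; adjust it to your actual URL structure):

```
User-agent: *
Disallow: /product/
```

Bear in mind that Disallow stops crawling rather than indexing, so previously indexed product pages can linger in the index as URL-only results rather than being cleanly removed.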
Intermediate & Advanced SEO | AspenFasteners1
-
After hack and remediation, thousands of URLs still appearing as 'Valid' in Google Search Console. How to remedy?
I'm working on a site that was hacked in March 2019; in the process, nearly 900,000 spam links were generated and indexed. After remediation of the hack in April 2019, the spammy URLs began dropping out of the index, until last week, when Search Console showed around 8,000 as "Indexed, not submitted in sitemap" but listed as "Valid" in the coverage report. Many of them are still hack-related URLs listed as indexed in March 2019, despite the fact that clicking on them leads to a 404. As of this Saturday, the number jumped up to 18,000, but I have no way of finding out from the Search Console reports why the jump happened or which new URLs were added; the only sort mechanism is last crawled, and they don't show up there. How long can I expect it to take for these remaining URLs to also be removed from the index? Is there any way to expedite the process? I've submitted a 'new' sitemap several times, which (so far) has not helped. Is there any way to see inside the new GSC view why/how the number of valid URLs in the index doubled over one weekend?
Intermediate & Advanced SEO | rickyporco0
-
Homepage is deindexed in Google
Please help! For some reason my website home page has disappeared. We have been working on the site, but nothing I can think of would block it, and there are no warnings in Google Search Console. Can anyone lend a hand in understanding what has gone wrong? I would really appreciate it. The site is: http://www.discountstickerprinting.co.uk/ It seems to be working again, but I had to fetch the home page in Search Console. Any idea why this happened? Cannot afford a heart op at this age lol
Intermediate & Advanced SEO | BobAnderson0
-
Capitalization of first letter of each word in meta description. Catches more attention, but may this lead to google ignoring the meta description then more frequently?
Capitalizing the first letter of each word in a meta description catches more attention, but might it lead to Google ignoring the meta description more frequently? Same question for an occasional capitalized FREE in a meta description. Has anybody had experience with this?
Intermediate & Advanced SEO | lcourse1
-
Is re-branding safe?
I am not entirely pleased with my website's name and have been wanting to change it for years; I feel it is not brandable. But since it's an old domain name and its overall figures for DA, PR, Moz score, etc. are very good, I have been wary of changing the name and doing a 301 permanent redirect from the existing name to the new one. Please advise whether I should go for it and, if yes, what the best practices are.
Intermediate & Advanced SEO | KS__0
-
Is a Rel Canonical Sufficient or Should I 'NoIndex'
Hey everyone, I know there is literature about this, but I'm always frustrated by technical questions and prefer a direct answer or opinion. Right now, we've got rel canonicals set up to deal with parameters caused by filters on our ticketing site. An example is that this: http://www.charged.fm/billy-joel-tickets?location=il&time=day rel canonicals to... http://www.charged.fm/billy-joel-tickets My question is whether this is good enough to deal with the duplicate content, or if these pages should be de-indexed. Assuming so, is the best way to do this via robots.txt, or do you have to individually 'noindex' these pages? This site has 650k indexed pages, and I'm thinking the majority of these are caused by URL parameters. While they all canonical to the proper place, I'm thinking it would be best to have these de-indexed to clean things up a bit. Thanks for any input.
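For reference, a page-level noindex can be set either in the markup or as an HTTP header (illustrative snippets; the header is an alternative, not an addition):

```html
<!-- In the <head> of each parameter page -->
<meta name="robots" content="noindex">
```

The HTTP-header equivalent is `X-Robots-Tag: noindex`. Either way, the page has to remain crawlable (i.e. not disallowed in robots.txt) for Google to see the directive.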
Intermediate & Advanced SEO | keL.A.xT.o0
-
Interlinking vs. 'orphaning' mobile page versions in a dynamic serving scenario
Hi there, I'd love to get the Moz community's take on this. We are working on setting up dynamic serving for mobile versions of our pages. During the process of planning the mobile version of a page, we identified a type of navigational links that, while useful enough for desktop visitors, we feel would not be as useful to mobile visitors. We would like to remove these from our mobile version of the page as part of offering a more streamlined mobile page. So we feel that we're making a fine decision with user experience in mind. On any single page, the number of links removed in the mobile version would be relatively few. The question is: is there any danger in “orphaning” the mobile versions of certain pages because links don’t exist pointing to those pages on our mobile pages? Is this a legitimate concern, or is it enough that none of the desktop versions of pages are orphaned? We were not sure whether it’s even possible, in Googlebot’s eyes, to orphan a mobile version of a page if we use dynamic serving and if there are no orphaned desktop versions of our pages. (We also plan to link to "full site" in the footer.) Thank you in advance for your help,
Eric
Intermediate & Advanced SEO | Eric_R
-
Meta No INDEX and Robots - Optimizing Crawl Budget
Hi, some time ago a few thousand pages got into Google's index: they were "product pop-up" pages, exact duplicates of the actual product page but in a "quick view". So I deleted them via GWT and also put a meta noindex on these pop-up overlays to stop them being indexed and causing duplicate content issues. They are no longer within the index as far as I can see; I do a site:www.mydomain.com/ajax search and nothing appears. So can I block these off now with robots.txt to optimize my crawl budget? Thanks
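For reference, the blocking rule itself would look like this (assuming the quick-view URLs all live under /ajax, per the site: query above):

```
User-agent: *
Disallow: /ajax
```

One caveat worth noting: once a path is disallowed, Googlebot can no longer crawl those pages and so can no longer see the meta noindex on them, which is why it's safest to add the block only after the pages have dropped out of the index, as described above.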
Intermediate & Advanced SEO | bjs20100