Robots.txt Blocked Most Site URLs Because of Canonical
-
Had a bit of a "Gotcha" in Magento.
We had the Yoast Canonical Links extension, which worked well, but then we installed the MageWorx SEO Suite, which broke our canonical links.
Unfortunately, it started setting www.mysite.com/catalog/product/view/id/516/ as the canonical link - and all URLs under /catalog/product/view/* are blocked in robots.txt.
So, unfortunately, we told Google that the "correct" page is also a blocked page. The pages haven't been removed as far as I can see, but traffic has certainly dropped.
At the same time, we have also made some site changes, grouping some pages and adding 301 redirects.
We have resubmitted the sitemap and done a Fetch as Google.
Any other ideas? Any idea how long it will take to become unblocked?
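To illustrate the conflict (the robots.txt rule below is a typical Magento pattern, not copied from the live file): the extension was emitting the internal, blocked URL as the canonical, when it should be emitting the rewritten product URL instead.

```
# robots.txt - typical Magento disallow rule for internal product paths
User-agent: *
Disallow: /catalog/product/view/
```

```html
<!-- What the extension was outputting on product pages (blocked by the rule above): -->
<link rel="canonical" href="http://www.mysite.com/catalog/product/view/id/516/" />

<!-- What it should output: the rewritten, crawlable URL (hypothetical slug): -->
<link rel="canonical" href="http://www.mysite.com/some-product-name.html" />
```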
Related Questions
-
301 redirecting a site that currently links to the target site
I have a personal blog that has a good amount of backlinks pointing at it from high-quality, relevant, authoritative sites in my niche. I also run a company in the same niche. I link to a page on the company site from a personal blog article that has a bunch of relevant links pointing at it (as it's highly relevant to the content on the personal blog). Overview: The relevant personal blog post has a bunch of relevant external links pointing at it (completely organic). That blog post then links (externally) to a relevant company site page and is helping that page rank. Question: If I do the work to 301 the personal blog to the company site, and then link internally from the blog page to the other relevant company page, will this kill that backlink value, or will the internal link help as much as the external link currently does? **For clarity:** External sites => External blog => External link to company page VS External sites => External blog 301 => Blog page (now on company blog) => Internal link to target page. I would love to hear from anyone who has performed this in the past 🙂
Intermediate & Advanced SEO | Keyword_NotProvided
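If the migration does go ahead, the 301 itself is usually a server-level rule. A minimal Apache sketch, assuming an .htaccess on the personal blog and purely hypothetical domain names:

```
# Hypothetical .htaccess on the personal blog
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?personal-blog\.example$ [NC]
RewriteRule ^(.*)$ https://www.company-site.example/blog/$1 [R=301,L]
```
-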
Large robots.txt file
We're looking at potentially creating a robots.txt with 1,450 lines in it. This will remove 100k+ pages from the crawl, all of which are old pages (I know the ideal would be to delete/noindex them, but that's not viable, unfortunately). The issue I'm wondering about is whether a robots.txt that large will either stop the robots.txt from being followed or slow our crawl rate down. Does anybody have any experience with a robots.txt of that size?
Intermediate & Advanced SEO | ThomasHarvey
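For reference, Google documents a 500 KiB size limit on robots.txt files and ignores rules beyond it; 1,450 plain Disallow lines will normally sit well under that, but pattern rules keep the file easier to maintain. A hypothetical sketch of collapsing per-URL rules into a few patterns (paths invented for illustration):

```
# Instead of one Disallow line per legacy URL:
#   Disallow: /old-catalog/item-00001.html
#   Disallow: /old-catalog/item-00002.html
#   ...
# a handful of pattern rules can cover the same set:
User-agent: *
Disallow: /old-catalog/
Disallow: /*?legacy-id=
```
-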
Not sure how we're blocking homepage in robots.txt; meta description not shown
Hi folks! We had a question come in from a client who needs assistance with their robots.txt file. Metadata for their homepage and select other pages isn't appearing in SERPs. Instead they get the usual message "A description for this result is not available because of this site's robots.txt – learn more". At first glance, we're not seeing the homepage or these other pages as being blocked by their robots.txt file: http://www.t2tea.com/robots.txt. Does anyone see what we can't? Any thoughts are massively appreciated! P.S. They used wildcards to ensure the rules were applied for all locale subdirectories, e.g. /en/au/, /en/us/, etc.
Intermediate & Advanced SEO | SearchDeploy
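One thing worth testing in Google's robots.txt Tester is whether a wildcard written for the locale folders also matches the exact URL Google crawled for the homepage (for example a parameterised variant). A purely hypothetical illustration, not taken from the live t2tea.com file:

```
# A narrow wildcard aimed at locale-specific filter pages is harmless:
Disallow: /*/*/filter
# ...but something broader, like
Disallow: /*?
# also matches /en/au/?utm_source=..., and if that parameterised URL is the one
# Google crawled for the homepage, the SERP shows the robots.txt message.
```
-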
Why isn't the canonical tag on my client's Magento site working?
The reason for this might be obvious to the right observer, but somehow I'm not able to spot it. The situation:
I'm doing an SEO audit for a client. When I check whether the rel=canonical tag is in place correctly, it seems like it is: view-source:http://quickplay.no/fotball-mal.html?limit=15 (line 15). Anyone seeing something wrong with this canonical? When I perform a site:http://quickplay.no/ search, I find that many URLs are indexed that ought to have been picked up by the canonical tag (see picture) - this one, for example: view-source:http://quickplay.no/fotball-mal.html?limit=15. I really can't see why this page is getting indexed when the canonical tag is in place. Anybody who can? Sincerely 🙂
Intermediate & Advanced SEO | Inevo
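For reference, the tag being described on the ?limit=15 variant would look something like the markup below (a reconstruction, not copied from the page). Even when it is present and correct, Google treats rel=canonical as a strong hint rather than a directive, so parameter variants can keep showing up in a site: search for a while.

```html
<!-- In the <head> of http://quickplay.no/fotball-mal.html?limit=15 -->
<link rel="canonical" href="http://quickplay.no/fotball-mal.html" />
```
-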
"noindex, follow" or "robots.txt" for thin content pages
Does anyone have any testing evidence on what is better to use for pages with thin content that are nevertheless important to keep on a website? I am referring to content shared across multiple websites (such as e-commerce, real estate, etc.). Imagine a website with 300 high-quality pages indexed and 5,000 thin product-type pages, which are pages that would not generate relevant search traffic. The question is: does the interlinking value achieved by "noindex, follow" outweigh the negative of Google having to crawl all those "noindex" pages? With robots.txt, Google's crawling is focused on just the important pages that are indexed, which may give rankings a boost. Any experiments with insight into this would be great. I do get the story about "make the pages unique", "get customer reviews and comments", etc. ... but the above question is the important one here.
Intermediate & Advanced SEO | khi5
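As minimal sketches of the two options being weighed (the /products/thin/ path is hypothetical):

```html
<!-- Option A: meta robots on each thin page - it is still crawled and its
     links still pass signals, but the page is dropped from the index -->
<meta name="robots" content="noindex, follow" />
```

```
# Option B: robots.txt - crawling stops (saving crawl budget), but Google never
# sees a noindex, so already-known URLs can linger in the index
User-agent: *
Disallow: /products/thin/
```
-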
Robots.txt assistance
I want to block all the inner archive news pages of my website in robots.txt - we don't have the R&D capacity to set up rel=next/prev or to create a central page that all inner pages would canonical back to, so this is the solution. The first page, which I want indexed, reads:
http://www.xxxx.news/?p=1
All subsequent pages, which I want blocked because they don't contain any new content, read:
http://www.xxxx.news/?p=2
http://www.xxxx.news/?p=3
etc. There are currently 245 inner archive pages, and I would like to set it up so that future pages will automatically be blocked, since we are always writing new news pieces. Any advice about what code I should use for this? Thanks!
Intermediate & Advanced SEO | theLotter
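A minimal robots.txt sketch for that pattern, assuming the live URLs really do use the ?p= parameter exactly as shown above (xxxx.news is a placeholder, so verify against the real URLs in Google's robots.txt Tester before deploying):

```
User-agent: *
# Allow only the first archive page; the $ anchors the match to the end of the URL
Allow: /*?p=1$
# Block every other ?p= value, including future pages
Disallow: /*?p=
```

Google resolves Allow/Disallow conflicts in favour of the more specific (longer) rule, so /?p=1 stays crawlable while /?p=2, /?p=3 and so on are blocked.
-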
Best strategy for "product blocks" linking to sister site? Penguin Penalty?
Here is the scenario -- we own several different tennis-based websites and want to be able to maximize traffic between them. Ideally we would have them ALL on 1 site/domain, but 2 of the 3 are a partnership which we own 50% of, which is why they are off on a separate domain. Big question: how do we link the "products" from the 2 different websites without looking spammy? Here is the breakdown of sites:
Site1: Tennis retail website --> about 1,200 tennis products
Site2: Tennis team and league management site --> about 60k unique visitors/month
Site3: Tennis coaching tip website --> about 10k unique visitors/month
Interestingly, right after we launched the retail store website (Site1), Google was cranking up and sending upwards of 25k search impressions/day within the first 45 days. Orders kept trickling in, and it was doing well overall for a fresh launch. Google "impressions" peaked at about 60 days post-launch and then started trickling down farther and farther, and are now at about 3k-5k impressions/day. Many keyword phrases were originally on page 1 (positions 6-10) and are now on pages 3-8 instead. The next step was to start putting "product links" (3 products per page) on Site2 and Site3 -- about 10k pages in total, with about 6 links per page off to the product pages (1 per product and 1 per category). We divided up about 100 different products to be displayed, so this would mean about 2k links per product depending on the page. FYI, those 10k pages on Site2 and Site3 already rank very well in Google and have been indexed for the past 2+ years. The most popular word on the sites is tennis, so they're very related. Our rationale was "all the websites are tennis related", and we figured that links to the latest and greatest products would be good for our audience. Pre-Penguin, we also figured this strategy would help us rank for these products when users search on them. Since traffic has gone down and down from the peak 45 days ago, we're thinking Penguin doesn't like all these links -- so what to do now? How do we fix it and make the Penguin happy? Here are a couple of my thoughts on fixing it:
1. Remove the "category link" in our "product grouping", which would cut the links down by 1/3rd.
2. Place a "nofollow" on all the other "product links". This would still let us get the "user clicks" from these while the user is on that page.
3. On our homepages (Site2 & Site3), place 3 core products that change frequently (weekly) and showcase the latest and greatest products/deals. The thought is NOT to use "nofollow" on these links since it is the homepage and only about 5 links overall.
Part of me also debated taking our top 1,000 pages (of the 10k) and putting the links ONLY on those, distributing about 500 products across them, so this would mean only 2 links per product -- though it would still mean about 4k links going there. Still thinking #2 above could be better? Any other thoughts would be great! Thanks, Jeremy
Intermediate & Advanced SEO | jab1000
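If option #2 is the route taken, the change is just an attribute on the cross-site product links. A hypothetical sketch of one product block on Site2/Site3 (markup and URL invented for illustration, not taken from the actual sites):

```html
<!-- Product block on Site2/Site3 pointing at the retail site (Site1) -->
<div class="product-block">
  <a href="https://www.example-tennis-store.com/racquet-xyz" rel="nofollow">
    Racquet XYZ
  </a>
</div>
```
-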
Question regarding an error URL when checking in the Open Site Explorer tool
Hello friends, my website's home URL and inner page URLs show an error when checked in the Open Site Explorer tool from SEOmoz. For a website, e.g. www.abc.com, it shows:
"Oh Hey! It looks like that URL redirects to www.abc.com/error.aspx?aspxerrorpath=/default.aspx. Would you like to see data for that URL instead?"
May I know the reason why these URLs show this result when checking the backlink report in the tool?
May I know on what basis this tool evaluates the website URL as well?
May I know, will this affect the Google SERPs for this website?
Thanks
Intermediate & Advanced SEO | zco_seo