Canonicalisation and Dynamic Pages
-
We have an e-commerce single-page app hosted at https://www.whichledlight.com, and part of this site is our search results page (http://www.whichledlight.com/t/gu10-led-bulbs?fitting_eq=GU10). To narrow down the products shown in the results we make heavy use of query parameters. From an SEO perspective, we are telling Googlebot not to index pages that include these query parameters, both to prevent duplicate content issues and to keep out pages where the combination of query parameters returns no results. The only exception to this is the page parameter.
We are posting here to check our homework, so to speak. Does the above sound sensible? Although we have told Googlebot not to index these pages, Moz will still crawl them (to the best of my knowledge), so we will continue to see crawl errors within our Moz reports for issues that don't in fact exist. Is this true? Is there any way to make Moz ignore pages with certain query parameters?
Any other suggestions to improve the SEO of our results pages are most appreciated. Thanks!
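For reference, the sort of markup we are relying on looks roughly like this (a simplified sketch; the tag values are illustrative rather than copied from our production code):

    <!-- Filtered results page, e.g. /t/gu10-led-bulbs?fitting_eq=GU10 -->
    <meta name="robots" content="noindex, follow">
    <link rel="canonical" href="https://www.whichledlight.com/t/gu10-led-bulbs">

    <!-- Paginated results, e.g. /t/gu10-led-bulbs?page=2 (the one exception we index) -->
    <meta name="robots" content="index, follow">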
-
No problem!
-
Sorry to scare you; I read it wrong. I apologize.
-
Hi there!
(Sorry, this is someone else at Truelux; the original poster of this question is currently driving somewhere.)
As far as I know, our robots.txt file is at https://www.whichledlight.com/robots.txt
Isn't it all commented out, though? Unless you're viewing a different one?
Cheers
Jon -
Yes, but currently your robots.txt file is set to block access to your entire site from search engines. You can block those query parameters within a robots.txt file, though.
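For example, a minimal robots.txt sketch (the parameter name is taken from your example URL and is an assumption; add a pair of lines per filter parameter):

    User-agent: *
    # Keep filtered result pages out of the crawl
    Disallow: /*?fitting_eq=
    Disallow: /*&fitting_eq=

One caveat worth knowing: robots.txt stops crawling rather than indexing, so a blocked URL that picks up links elsewhere can still appear in the index without a snippet. Pagination URLs such as ?page=2 stay crawlable here simply because no rule matches them.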
-
Thanks for the reply. The change we made in Webmaster Tools to ignore query parameters was only done yesterday, so I guess it makes sense that they still appear right now.
Are there any implications to updating our robots.txt to prevent further crawls of these pages?
-
I found quite a few query parameters being indexed by Google. They are showing up within the SERPs; you can see this by going to Google and searching for site:whichledlight.com. I would either canonicalize those pages or update your robots.txt file to address the issue.
Also, your robots.txt might need to be updated. Right now it reads:
# User-agent: *
# Disallow: /
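Note the # characters: as written, every line is commented out, so the file currently has no effect at all. Uncommented, those two lines would read:

    User-agent: *
    Disallow: /

which would block all compliant crawlers from the entire site, so only remove the # marks if you replace the blanket Disallow with specific rules (such as the query parameter patterns discussed above).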
Related Questions
-
How to find orphan pages
Hi all, I've been checking these forums for an answer on how to find orphaned pages on my site, and I can see a lot of people saying that I should cross-check my XML sitemap against a Screaming Frog crawl of my site. However, the sitemap is created using Screaming Frog in the first place... (I'm sure this is the case for a lot of people too.) Are there any other ways to get a full list of orphaned pages? I assume it would be a developer request, but where can I ask them to look, and what should they extract? Thanks!
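If it helps, the cross-check itself is only a few lines of code once you have two URL lists (a rough sketch; the file names and the "Address" CSV column are assumptions based on a typical Screaming Frog export):

    import csv
    import xml.etree.ElementTree as ET

    # Collect every URL listed in the XML sitemap (standard sitemap namespace)
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    sitemap_urls = {
        loc.text.strip()
        for loc in ET.parse("sitemap.xml").findall(".//sm:loc", ns)
        if loc.text
    }

    # Collect every URL the crawler reached via internal links
    # (assumes a CSV export with an "Address" column)
    with open("crawl_export.csv", newline="", encoding="utf-8") as f:
        crawled_urls = {row["Address"] for row in csv.DictReader(f)}

    # URLs in the first list that the crawl never reached are orphan candidates;
    # swapping the sitemap for an analytics or log-file URL export works the same way
    for url in sorted(sitemap_urls - crawled_urls):
        print(url)

The real work is the first list: since your sitemap comes from the crawl itself, the set difference will be empty, so any independent URL source plugged in its place makes the comparison meaningful.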
Technical SEO | | KJH-HAC1 -
Indexed pages
Just started a site audit, and I'm trying to determine the number of pages on a client site and whether more pages are being indexed than actually exist. I've used four tools and got four very different answers...
Google Search Console: 237 indexed pages
Google search using the site: command: 468 results
Moz Site Crawl: 1,013 unique URLs
Screaming Frog: 183 page titles, 187 URIs (note this is a free licence, but it should cut off at 500)
Can anyone shed any light on why they differ so much? And where does the truth lie?
Technical SEO | | muzzmoz1 -
Delete indexed spam pages
Hi everyone, I'm hoping someone has had this same situation or may know of a solution. One of our sites was recently pharma-hacked 😞 We found an entire pharmaceutical site in one of the folders of our site. We were able to delete it, but now Google is showing a "not found" error for the pages we deleted. First, I guess the question is: will this harm us? If so, is there any way we can fix it? Obviously we don't want to 301 redirect spam pages. Thanks!
Technical SEO | | Bridge_Education_Group0 -
Page for page 301 redirects from old server to new server
Hi guys:
Technical SEO | | cindyt-17038
I have a client who is moving their entire ecommerce site from one hosting platform (Yahoo Store) to another (BigCommerce), and from one domain to another. The old domain was registered with Yahoo as of yesterday, and we have redirected the old domain (at the domain level) to the new domain. However, we are having trouble getting the pages to redirect page for page; currently they all redirect to the new domain's home page. We did just move the old domain from GoDaddy to Yahoo yesterday, thinking this would solve it, but as of this morning the old pages are still redirecting to the home page of the new domain. To complete the 301 redirect picture, we uploaded the redirects (all relative links, for both from and to) to BigCommerce. While the domain was hosted at GoDaddy with a redirect to the new domain, the redirects were working; we moved the domain to Yahoo because of email issues, thinking they should still work. Is it possibly just a waiting game now while the change propagates across DNS? Old URL to test:
rock-n-roll-action-figures.com/fender-jazz-bass-miniature-guitar-replica-classic-red-finish.html
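For anyone comparing notes, the difference between the domain-level redirect described here and true page-for-page behaviour looks roughly like this in Apache terms (an illustrative sketch only; neither Yahoo Store nor BigCommerce exposes raw Apache config, and the new domain name is a placeholder):

    # What a catch-all redirect does: every old URL lands on the new home page
    RedirectMatch 301 ^.* https://www.newstore.com/

    # What page-for-page needs: one explicit mapping per old URL
    Redirect 301 /fender-jazz-bass-miniature-guitar-replica-classic-red-finish.html https://www.newstore.com/fender-jazz-bass-miniature-guitar-replica-classic-red-finish.html

If the platform-level redirects worked while the domain sat at GoDaddy, DNS propagation is a plausible suspect; nameserver changes can take up to 48 hours to settle everywhere.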
50,000 pages or a page with parameters
I have a site with about 12k pages on a topic... each of these pages could use several more pages going into deeper detail about the topic. So I am wondering: for SEO purposes, would it be better to have something like 50,000 new pages, one for each subtopic, or one page that I would pass parameters to, with the content built on the fly in code-behind? The drawback to the one page with parameters is that the URL would be dynamic rather than static, but the effort to implement would be minimal. I am also not sure how Google would index a single page with parameters. The drawback to the 50k pages model is the dev effort, and possibly committing a faux pas by unleashing so many links to my internal pages. I might also have to mix .aspx with .html because my project can't be that large. Has anyone here ever had this sort of choice to make? Is there a third way I am not considering?
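One possible third way, sketched here as an assumption rather than a recommendation: keep the single dynamic page but put static-looking URLs in front of it with a rewrite rule, so each subtopic gets its own clean, indexable URL. An Apache-style example with made-up paths and parameter names (IIS has an equivalent URL Rewrite module with its own web.config syntax):

    RewriteEngine On
    # /topic/blue-widgets/history is served by the one dynamic page behind the scenes
    RewriteRule ^topic/([^/]+)/([^/]+)$ /topic.aspx?item=$1&subtopic=$2 [L,QSA]

This keeps the dev effort close to the one-page model while giving crawlers one distinct URL per subtopic.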
Technical SEO | | Banknotes0 -
How do I fix twin home pages
Search engine analysis indicates that my site has twin home pages (www.mysite.com and http://mysite.com). The error message I'm getting is: "Your website resides at both www.mysite.com and mysite.com." My uploaded index page is a .htm page (not .html); I don't know if that matters. Can someone explain how this happened and what I can do to fix it? Thanks!
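If the site runs on Apache, the standard fix is a single 301 rule in .htaccess so one hostname always redirects to the other (a sketch; substitute your real domain and pick whichever version you prefer as canonical):

    RewriteEngine On
    # Send the bare domain to the www version with a permanent redirect
    RewriteCond %{HTTP_HOST} ^mysite\.com$ [NC]
    RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]

The .htm vs .html distinction doesn't matter here; the duplication comes from the hostname, not the file extension.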
Technical SEO | | finalfrontier0 -
How can I prevent duplicate content between www.page.com/ and www.page.com
SEOmoz's recent crawl showed me that I had an error for duplicate content and duplicate page titles. This is a problem because it found the same page twice due to a '/' on the end of one URL, e.g. www.page.com/ vs. www.page.com. My question is: do I need to be concerned about this? And is there anything I should put in my .htaccess file to prevent this from happening? Thanks!
Technical SEO | | onlineexpression
Karl0 -
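If the server is Apache, one common .htaccess approach is to 301 trailing-slash URLs to the slashless version (a sketch, assuming you standardise on no trailing slash; real directories are excluded so folder URLs keep working):

    RewriteEngine On
    # Leave real directories alone, then strip the trailing slash with a 301
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^(.*)/$ /$1 [R=301,L]

A rel=canonical tag pointing at the preferred version sends much the same signal if editing .htaccess isn't an option. For the domain root specifically, www.page.com and www.page.com/ resolve to the same resource, so duplication warnings there are often tool artifacts.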
How do I know which page a link is from
I've got an interesting situation. I hope you can help. I have a list of links but I'm not sure which pages of my site they are from. How do I know which page a specific link is from? Thanks in advance.
Technical SEO | | VinceWicks0