Query parameters for normalization in Bing
-
Almost every day I get this:
Query parameters for normalization found on www.sitename.com
Site: www.sitename.com
Date: 3/26/2013
Priority: Low
Bing has detected new parameters in your URLs
Anyone know why? We aren't changing anything. I have read it has to do with internal URLs, but I can't find out which internal URLs this is a problem with.
-
We see this too. We have canonicals in place, and we still see the error. And there's no insight into which parameters are causing issues.
-
Here's an answer straight from Duane Forrester of Bing:
"It means that those parameters may be causing Bing to think you have duplicate content issues. If your content can appear on two individual URLs, that can be an issue, as we don't know which one you want indexed, ranked, etc. So, the tools we offer allow you to control this by telling us to ignore a parameter. We can suggest parameters we find, but it's your choice on if you want to tell us to ignore them (and the attendant URLs) or not.
For example, if you have a /print/ folder on your site, you can tell us to ignore everything under the "print parameter". By entering "print" as the parameter to be ignored, we'll skip indexing the content held in the print folder on your site."
Duane doesn't say so in the article, but you can adjust your parameter settings in Bing Webmaster Tools. Info here:
http://www.bing.com/webmaster/help/ignore-url-parameters-d7496c65
Hope this helps! Best of luck with your SEO.
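The "ignore a parameter" behavior Duane describes is essentially URL normalization: variants that differ only in an ignored query parameter collapse to one URL. A minimal Python sketch of the idea (the parameter names here are hypothetical examples, not anything Bing publishes):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical example: parameters you have told Bing to ignore.
IGNORED_PARAMS = {"print", "sessionid"}

def normalize(url: str) -> str:
    """Drop ignored query parameters so duplicate URLs collapse to one."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in IGNORED_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))

# Both variants normalize to the same URL, so Bing would treat them
# as one page rather than as duplicates:
a = normalize("http://www.sitename.com/page?id=7&print=1")
b = normalize("http://www.sitename.com/page?id=7")
```

This is only an illustration of the concept; the actual filtering happens inside Bing once you configure the parameters in Webmaster Tools.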
Related Questions
-
Can I remove certain parameters from the canonical URL?
For example, https://www.jamestowndistributors.com/product/epoxy-and-adhesives?page=2&resultsPerPage=16 is the paginated URL of the category https://www.jamestowndistributors.com/product/epoxy-and-adhesives/. Can I remove the &resultsPerPage= variation from the canonical without it causing an issue? Even though the actual page URL has that parameter? I was thinking of using this: instead of: What is the best practice?
Intermediate & Advanced SEO | laurengdicenso
2015 Disavow Links on Bing?
In years past I was told not to disavow links in Bing unless the site had an issue. This was driven home when a site we were working on disavowed its links in Google and recovered after a few months; then, when it disavowed the same links in Bing, rankings dropped 20% over the following months. The reasoning was that Bing looked more at the quantity of links and didn't analyze links the way Google does, so even though you might disavow links in Google, you might not want to disavow those same links in Bing. Does this still hold true in 2015? I want to get the community's opinion on this topic: should the same links be disavowed in Bing that are disavowed in Google? Why or why not?
Intermediate & Advanced SEO | K-WINTER
URL Parameters as a single solution vs Canonical tags
Hi all,

We are running a classifieds platform in Spain (mercadonline.es) that has a lot of duplicate content. The majority of it consists of URLs that contain site parameters; in other words, they are the result of multiple pages within the same subcategory, sorted by different field names like price and type of ad. I believe that if I assign the correct group of URLs to each parameter in Google Webmaster Tools, a lot of these duplicate issues will be resolved. Still, a few questions remain:

1. Once I set e.g. the 'page' parameter and choose 'paginates' as the behaviour, do I let Googlebot decide whether to index these pages, or do I set them to 'no'? Since I told Google Webmaster Tools what type of URLs contain this parameter, it will know that these are relevant pages, yet not always completely different in content.
2. Other URLs that contain 'sortby' don't differ in content at all, so I set these to 'sorting' as the behaviour and set them to 'no' for Google crawling.
3. What parameter setting should I assign to 'search', i.e. the parameter that causes URLs to contain an internal search string? Since this search parameter changes all the time depending on user input, how can I choose the best one? I think I need 'specifies'?
4. Do I still need to assign canonical tags for all of these URLs after this process, or is setting parameters in my case an alternative solution to this problem?

I can send examples of the duplicates, but most of them contain 'page', 'descending', 'sortby', etc. values. Thank you for your help. Ivor
Intermediate & Advanced SEO | ivordg
Redirect to url with parameter
I have a wiki (wiki 1) where many of the pages are well indexed in Google. Because of a product change I had to create a new wiki (wiki 2) for the new version of my product. Now that most of my customers are using the new version, I'd like to redirect users from wiki 1 to wiki 2. An example redirect could be from wiki1.website.com/how_to_build_kitchen to wiki2.website.com/how_to_build_kitchen. Because of a technical issue, the URL I redirect to needs to have a parameter like "?", so the example becomes wiki2.website.com/how_to_build_kitchen?. Will the search engines see it as if I have two pages with the same content:

wiki2.website.com/how_to_build_kitchen
and
wiki2.website.com/how_to_build_kitchen?

And will the SEO juice from wiki1.website.com/how_to_build_kitchen be transferred to wiki2.website.com/how_to_build_kitchen?
Intermediate & Advanced SEO | Debitoor
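Whether a trailing bare "?" creates a distinct URL depends on who is parsing it. Standard URL parsers normalize an empty query string away, as this small Python sketch shows; search engines, however, may compare raw URL strings, which is why a canonical tag on the wiki 2 pages is still the safe way to signal equivalence:

```python
from urllib.parse import urlsplit

def same_resource(a: str, b: str) -> bool:
    """Compare URLs after parsing; a bare trailing "?" parses to an empty query."""
    pa, pb = urlsplit(a), urlsplit(b)
    return (pa.scheme, pa.netloc, pa.path, pa.query) == \
           (pb.scheme, pb.netloc, pb.path, pb.query)

# The bare "?" disappears when parsed, so these compare equal:
equal = same_resource("https://wiki2.website.com/how_to_build_kitchen",
                      "https://wiki2.website.com/how_to_build_kitchen?")
```

This is only an illustration of how parsers treat the two forms, not a statement about how any particular search engine indexes them.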
Parameter Strings & Duplicate Page Content
I'm managing a site that has thousands of pages due to all of the dynamic parameter strings being generated. It's a real estate listing site that allows people to create listings, and it generates lots of new listings every day. The Moz crawl report continually flags a lot (25k+) of the site's pages for duplicate content due to all of these parameter-string URLs. Example: sitename.com/listings & sitename.com/listings/?addr=street name

Do I really need to do anything about those pages? I have researched the topic quite a bit but can't find anything concrete on the best course of action. My original thinking was to add the rel=canonical tag to each of the main URLs that have parameters attached. I have also read that you can bypass that by telling Google which parameters to ignore in Webmaster Tools. We want these listings to show up in search results, though, so I don't know if either option is ideal, since each would cause the listing pages (pages with parameter strings) to stop being indexed, right? Which is why I'm wondering if doing nothing at all will hurt the site.

I should also mention that I originally recommended the rel=canonical option to the web developer, who has pushed back, saying that "search engines ignore parameter strings." Naturally, he doesn't want the extra workload of setting up the canonical tags, which I can understand, but I want to make sure I'm giving him both the most feasible option for implementation and the best option to fix the issues.
Intermediate & Advanced SEO | garrettkite
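The rel=canonical approach the question describes can be generated programmatically, so nobody has to hand-edit thousands of listing pages. A minimal sketch in Python, assuming the canonical version of each listing URL is simply the URL without its query string (the example URL is the hypothetical one from the question):

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_tag(url: str) -> str:
    """Emit a rel=canonical link tag pointing at the URL stripped of its query string."""
    p = urlsplit(url)
    bare = urlunsplit((p.scheme, p.netloc, p.path, "", ""))
    return f'<link rel="canonical" href="{bare}" />'

# A parameterized listing URL canonicalizes to the bare listings page:
tag = canonical_tag("https://sitename.com/listings/?addr=main+st")
```

In a template engine this would run once per page; whether stripping all parameters is the right canonical rule for a given site is a judgment call, since some parameters (like pagination) may deserve their own indexable URLs.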
Ranking 1st on Google, but not in top 50 on Bing and Yahoo?
Hi Mozzers,

Roughly 2 weeks ago we were ranked:

#2 on Google for "African American Business Owner Mailing Lists"
#2 on Bing
#2 on Yahoo

Now we are ranked:

#1 on Google
#50 on Bing
#50 on Yahoo

I noticed a lot of our other keywords improved on Google during this period but vanished from the other 2 search engines. Other KWs include:

"Apartment Owner Mailing Lists" (#4 on Google)
"Community College Mailing Lists" (#3 on Google)
etc.

What gives? Thoughts?
Intermediate & Advanced SEO | Travis-W
1200 pages nofollowed and blocked by robots on my site. Is that normal?
Hi, I've got a bunch of notices saying almost 1,200 pages are nofollowed and blocked by robots. They appear to be comments and other random pages, not the actual domain and static content pages. Still, it seems a little odd. The site is www.jobshadow.com. Any idea why I'd have all these notices? Thanks!
Intermediate & Advanced SEO | astahl11
Query / Discussion on Subdomain and Root domain passing authority etc
I've seen Rands video on subdomains and best pratices at
Intermediate & Advanced SEO | | James77
http://www.seomoz.org/blog/whiteboard-friday-the-microsite-mistake
http://www.seomoz.org/blog/understanding-root-domains-subdomains-vs-subfolders-microsites I have a question/theory though and it is related to an issue I am having. We have built our website, and now we are looking at adding 3rd party forums and blogs etc (all part of one CMS). The problem is these need to to be on a seperate subdomain to work correctly (I won't go into the specific IT details but this is what I have been advised by my IT guru's). So I can have something like:
http://cms.mysite.com/forum/ Obviously after reading Rands post and other stuff this is far from ideal. However I have another Idea - run the CMS from root and the main website from the www. subdomain. EG
www.mysite.com
mysite.com/blog Now my theory is that because so many website (possibly the majority - especially smaller sites) don't use 301 redirects between root and www. that search engines may make an exception in this case and treat them both as the same domain, so it could possibly be a way of getting round the issue. This is just a theory of mine, based solely on my thoughts that there are so many websites out there that don't 301 root to www. or vice versa, that possibly it would be in the SE's self interest to make an exception and count these as one domain, not 2. What are your thoughts on this and has there been any tests done to see if this is the case or not? Thanks0