Robots.txt question
-
I noticed something weird in Google's robots.txt Tester.
I have this line:
Disallow: display=
in my robots.txt, but whatever URL I give it to test, the tool says it's blocked and points to this line in robots.txt.
For example, this line is meant to block pages like
http://www.abc.com/lamps/floorlamps?display=table
But if I test
http://www.abc.com/lamps/floorlamps (or any other page)
it shows as blocked due to Disallow: display=.
Am I doing something wrong, or is Google just acting strangely? I don't think pages without display= are actually being blocked.
-
Yes - there is a bug in your robots.txt. A Disallow value should start with a / (or a * wildcard), so a rule like Disallow: display= is malformed and parsers handle it unpredictably - the tester appears to treat it as matching everything. To block just that parameter value anywhere on the site, write something like:
Disallow: /*?display=table
or, to catch every value of the parameter:
Disallow: /*?display=
(Google supports the * wildcard, and robots.txt rules are prefix matches, so a trailing * is unnecessary.)
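If you want to double-check matching behaviour outside the tester, you can reproduce Google's documented matching rules in a few lines: * matches any run of characters, $ anchors the end, and everything else is a prefix match against the URL's path plus query string. Here's a minimal, illustrative sketch (not Google's actual parser - the helper name is mine), using the example URLs from the question:

import re

def google_style_match(rule: str, path_and_query: str) -> bool:
    # Simplified Google-style robots.txt matching: '*' matches any run
    # of characters, '$' anchors the end, everything else is a prefix
    # match against the URL's path + query string.
    pattern = re.escape(rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    return re.match(pattern, path_and_query) is not None

# The corrected rule blocks only URLs that carry the display= parameter.
print(google_style_match("/*?display=", "/lamps/floorlamps?display=table"))  # True  -> blocked
print(google_style_match("/*?display=", "/lamps/floorlamps"))                # False -> crawlable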
Related Questions
-
Unanswered questions in forums
What should be done with forum questions that go unanswered for a long time (i.e., a year or longer)? Are these types of questions valuable content? Should we opt out of having these types of pages indexed? Since these pages are just one sentence, it doesn't seem like they add value to the site.
Intermediate & Advanced SEO | nandaMesa
-
Robots.txt Blocking - Best Practices
Hi All, We have a web provider who's not willing to remove the wildcard line of code blocking all agents from crawling our client's site (User-agent: *, Disallow: /). They have other lines allowing certain bots to crawl the site, but we're wondering if they're missing out on organic traffic by having this main blocking line. It's also a pain because we're unable to set up Moz Pro, potentially because of this first line. We've researched and haven't found a ton of best practices regarding blocking all bots, then allowing certain ones. What do you think is a best practice for these files? Thanks!

User-agent: *
Disallow: /

User-agent: Googlebot
Disallow:
Crawl-delay: 5

User-agent: Yahoo-slurp
Disallow:

User-agent: bingbot
Disallow:

User-agent: rogerbot
Disallow:

User-agent: *
Crawl-delay: 5
Disallow: /new_vehicle_detail.asp
Disallow: /new_vehicle_compare.asp
Disallow: /news_article.asp
Disallow: /new_model_detail_print.asp
Disallow: /used_bikes/
Disallow: /default.asp?page=xCompareModels
Disallow: /fiche_section_detail.asp
Intermediate & Advanced SEO | ReunionMarketing
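Worth noting on the question above: under the Robots Exclusion Protocol, a crawler obeys only the single most specific User-agent group that matches it, so a file like this shouldn't block Googlebot, Bingbot, or rogerbot at all - the User-agent: * group only applies to bots that have no group of their own. A quick illustrative check with Python's standard-library parser (example.com is a placeholder):

import urllib.robotparser

ROBOTS = """\
User-agent: *
Disallow: /

User-agent: Googlebot
Disallow:
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS.splitlines())

# Googlebot matches its own group; an empty Disallow allows everything.
print(rp.can_fetch("Googlebot", "https://example.com/any-page"))     # True
# A bot with no group of its own falls back to the 'User-agent: *' rules.
print(rp.can_fetch("SomeOtherBot", "https://example.com/any-page"))  # False

The fragile part is that any crawler without its own group is shut out entirely, which is presumably why Moz Pro can't be set up.
-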
Internal links question
I've read that Google frowns upon large numbers of internal links. We're building a site that helps users browse a list of shows via dozens of genres. If the genres are exposed, say, as a pulldown menu as opposed to a list of static links, and selecting a pulldown option filters the list of shows, would those genres count against our internal link count?
Intermediate & Advanced SEO | TheaterMania
-
Google Business Places SEO Question
Hi All, I have set up different Google Business Places listings for each of my business's locations. I want each location to rank for its respective keywords. So, from an SEO point of view, to help these rank: when choosing keywords/categories for my listing, should I use my main keyword plus my location? For example, I have a carpet cleaner hire depot in London and I want to rank for London, so should my keyword be just "carpet cleaner hire" or "carpet cleaner hire London"? My other depot is in, say, Watford, so should that listing have keywords that contain Watford in them? Also, should the description contain my localised keywords? I have looked around and can't find much on this topic. Any pointers would be greatly appreciated. Thanks, Sarah
Intermediate & Advanced SEO | SarahCollins
-
.htaccess question/opinion/advice needed
Hello, I am trying to achieve 3 different things in my .htaccess. I just want to make sure I am doing it the right or best way, because I don't have much experience working with this kind of file. I am trying to: a) Redirect www.mysite.com/index.html to www.mysite.com so I don't get a duplicate content/tag error. b) Redirect mysite.com to www.mysite.com. c) Get rid of the file extensions: www.mysite.com/stuff.html to www.mysite.com/stuff. This is the code that I'm currently using and it seems to work fine; however, I would like someone with experience to take a look so I can avoid internal server errors and other kinds of issues. I grabbed each piece of code from different posts and tutorials.

Options +FollowSymlinks
RewriteEngine on

# Index Rewrite
RewriteRule ^index.(htm|html|php) http://www.mysite.com/ [R=301,L]
RewriteRule ^(.*)/index.(htm|html|php) http://www.mysite.com/$1/ [R=301,L]

RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.html -f
RewriteRule ^(.*)$ $1.html

Options +FollowSymlinks
RewriteEngine on
Rewritecond %{http_host} ^mysite.com [nc]
Rewriterule ^(.*)$ http://www.mysite.com/$1 [r=301,nc]

Thanks a lot!
Intermediate & Advanced SEO | Eblan
-
Duplicate Content Question
My understanding of duplicate content is that if two pages are identical, Google selects one for its results. I have a client that is literally sharing content in real time with a partner: the page content is identical on both sites, and if you update one page, the other is updated automatically. Obviously this is a clear-cut case for canonical link tags, but I'm curious about something: both sites seem to show up in search results, but for different keywords. I would think one domain would simply win out over the other, but Google seems to show both sites in results. Any idea why? Also, could this duplicate content issue be hurting visibility for both sites? In other words, can I expect a boost in rankings with the canonical tags in place? Or will rankings remain the same?
Intermediate & Advanced SEO | AmyLB
-
Disallow my store in robots.txt?
Should I disallow my store directory in robots.txt? Here is the URL: https://www.stdtime.com/store/
Here are my reasons for suggesting this:
- SEOMOZ finds crawl "errors" in there that I don't care about
- I don't think I care if the search engines index those pages
- I only have one product, and it is not an impulse buy
- My product has a 60-day sales cycle, so price is less important than features
Intermediate & Advanced SEO | raywhite
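If you do decide to block the directory, the rule is a one-liner, and you can sanity-check it with Python's standard-library parser before deploying. A minimal sketch using the store URL from the question (the bot name is a placeholder):

import urllib.robotparser

ROBOTS = """\
User-agent: *
Disallow: /store/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS.splitlines())

print(rp.can_fetch("AnyBot", "https://www.stdtime.com/store/"))  # False -> blocked
print(rp.can_fetch("AnyBot", "https://www.stdtime.com/"))        # True  -> still crawlable

Keep in mind that Disallow only stops crawling; pages that are already indexed can linger in results unless they are also marked noindex.
-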
Worldwide Stores - Duplicate Content Question
Hello, We recently added new store views on our primary domain for different countries. Our primary URL: www.store.com
The country-specific URLs:
www.store.com/au
www.store.com/uk
www.store.com/nz
www.store.com/es
and so on. This resulted in an almost immediate rankings drop for several keywords, which we feel is a result of duplicate content. We have thousands of pages on our primary site. We've assigned "nofollow" tags to all store views for now and are trying to roll back the changes we made. However, we've seen some stores launching in different countries with the same content, but with country-specific extensions like .co.uk, .co.nz, and .com.au. At this point, it appears we have three choices: 1. Remove/change the duplicate content in the country-specific URLs/store views. 2. Launch using .co.uk, .com.au, etc. with duplicate content for now. 3. Launch using .co.uk, .com.au, etc. with fresh content for all stores. Please keep in mind that options 1 and 3 can get very expensive, keeping hundreds of products in untested territories. Ideally, we would like to test first and then scale. However, we'd like to avoid any duplicate content penalties on our main domain. Thanks for your help and answers on the same.
Intermediate & Advanced SEO | globaleyeglasses